title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Parse String to Float or Int | 379,906 | 1,130 | 2008-12-19T01:52:26Z | 17,815,252 | 26 | 2013-07-23T16:00:21Z | [
"python",
"string",
"parsing",
"floating-point",
"integer"
] | In Python, how can I parse a numeric string like `"545.2222"` to its corresponding float value, `545.2222`? Or parse the string `"31"` to an integer, `31`?
I just want to know how to parse a float string to a float, and (separately) an int string to an int. | ## Localization and commas
You should consider the possibility of commas in the string representation of a number, for cases like `float("545,545.2222")` which throws an exception. Instead, use methods in `locale` to convert the strings to numbers and interpret commas correctly. The `locale.atof` method converts to a float in one step once the locale has been set for the desired number convention.
**Example 1 -- United States number conventions**
In the United States and the UK, commas can be used as a thousands separator. In this example with American locale, the comma is handled properly as a separator:
```
>>> import locale
>>> a = u'545,545.2222'
>>> locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
'en_US.UTF-8'
>>> locale.atof(a)
545545.2222
>>> int(locale.atof(a))
545545
>>>
```
**Example 2 -- European number conventions**
In the [majority of countries of the world](https://en.wikipedia.org/wiki/Decimal_mark), commas are used for decimal marks instead of periods. In this example with French locale, the comma is correctly handled as a decimal mark:
```
>>> import locale
>>> b = u'545,2222'
>>> locale.setlocale(locale.LC_ALL, 'fr_FR')
'fr_FR'
>>> locale.atof(b)
545.2222
```
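`locale.atoi` does the same for integers. Below is a hypothetical helper (`parse_int_localized` is not part of the standard library) that falls back to a plain parse on systems where the requested locale isn't installed:

```python
import locale

def parse_int_localized(text, loc="en_US.UTF-8"):
    # locale.atoi strips the locale's thousands separators before parsing;
    # fall back to a naive parse if the requested locale is unavailable.
    try:
        locale.setlocale(locale.LC_NUMERIC, loc)
    except locale.Error:
        return int(text.replace(",", ""))
    try:
        return locale.atoi(text)
    finally:
        locale.setlocale(locale.LC_NUMERIC, "C")  # restore the default locale

print(parse_int_localized("545,545"))  # 545545
```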
The method `locale.atoi` is also available for converting a string directly to an integer once the locale has been set. |
Parse String to Float or Int | 379,906 | 1,130 | 2008-12-19T01:52:26Z | 20,929,983 | 246 | 2014-01-05T04:15:39Z | [
"python",
"string",
"parsing",
"floating-point",
"integer"
] | In Python, how can I parse a numeric string like `"545.2222"` to its corresponding float value, `545.2222`? Or parse the string `"31"` to an integer, `31`?
I just want to know how to parse a float string to a float, and (separately) an int string to an int. | ## Python method to check if a string is a float:
```
def isfloat(value):
try:
float(value)
return True
    except (ValueError, TypeError):
return False
```
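The same pattern works for integers; note that `int()` is stricter than `float()`, rejecting strings such as `"1.5"` and `"inf"`:

```python
def isint(value):
    # Same try/except idiom as isfloat, but for integer parsing.
    try:
        int(value)
        return True
    except (ValueError, TypeError):
        return False

print(isint("31"))        # True
print(isint("545.2222"))  # False
print(isint("1,234"))     # False
```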
## What is, and is not a float in [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29) may surprise you:
```
Command to parse isFloat? Note
------------------------------------ -------- --------------------------------
print(isfloat("")) False Blank string
print(isfloat("127")) True Passed string
print(isfloat(True)) True Pure sweet Truth
print(isfloat("True")) False Vile contemptible lie
print(isfloat(False)) True So false it becomes true
print(isfloat("123.456")) True Decimal
print(isfloat(" -127 ")) True Spaces trimmed
print(isfloat("\t\n12\r\n")) True whitespace ignored
print(isfloat("NaN")) True Not a number
print(isfloat("NaNanananaBATMAN")) False I am Batman
print(isfloat("-iNF")) True Negative infinity
print(isfloat("123.E4")) True Exponential notation
print(isfloat(".1")) True mantissa only
print(isfloat("1,234")) False Commas gtfo
print(isfloat(u'\x30')) True Unicode is fine.
print(isfloat("NULL")) False Null is not special
print(isfloat(0x3fade))              True     Hexadecimal
print(isfloat("6e7777777777777")) True Shrunk to infinity
print(isfloat("1.797693e+308")) True This is max value
print(isfloat("infinity")) True Same as inf
print(isfloat("infinityandBEYOND")) False Extra characters wreck it
print(isfloat("12.34.56")) False Only one dot allowed
print(isfloat(u'四'))                 False    Japanese '4' is not a float.
print(isfloat("#56")) False Pound sign
print(isfloat("56%")) False Percent of what?
print(isfloat("0E0")) True Exponential, move dot 0 places
print(isfloat(0**0)) True 0___0 Exponentiation
print(isfloat("-5e-5")) True Raise to a negative number
print(isfloat("+1e1")) True Plus is OK with exponent
print(isfloat("+1e1^5")) False Fancy exponent not interpreted
print(isfloat("+1e1.3")) False No decimals in exponent
print(isfloat("-+1")) False Make up your mind
print(isfloat("(1)")) False Parenthesis is bad
```
You think you know what numbers are? You are not so good as you think! Not big surprise. |
Parse String to Float or Int | 379,906 | 1,130 | 2008-12-19T01:52:26Z | 25,299,501 | 8 | 2014-08-14T03:21:37Z | [
"python",
"string",
"parsing",
"floating-point",
"integer"
] | In Python, how can I parse a numeric string like `"545.2222"` to its corresponding float value, `545.2222`? Or parse the string `"31"` to an integer, `31`?
I just want to know how to parse a float string to a float, and (separately) an int string to an int. | If you aren't averse to third-party modules, you could check out the [fastnumbers](https://pypi.python.org/pypi/fastnumbers) module. It provides a function called [fast\_real](http://pythonhosted.org//fastnumbers/fast.html#fast-real) that does exactly what this question is asking for and does it faster than a pure-Python implementation:
```
>>> from fastnumbers import fast_real
>>> fast_real("545.2222")
545.2222
>>> type(fast_real("545.2222"))
float
>>> fast_real("31")
31
>>> type(fast_real("31"))
int
``` |
How do I respond to mouse clicks on sprites in PyGame? | 380,420 | 5 | 2008-12-19T08:54:57Z | 393,256 | 10 | 2008-12-25T22:46:33Z | [
"python",
"pygame"
] | What is the canonical way of making your sprites respond to mouse clicks in PyGame ?
Here's something simple, in my event loop:
```
for event in pygame.event.get():
if event.type == pygame.QUIT:
exit_game()
[...]
elif ( event.type == pygame.MOUSEBUTTONDOWN and
pygame.mouse.get_pressed()[0]):
for sprite in sprites:
sprite.mouse_click(pygame.mouse.get_pos())
```
Some questions about it:
1. Is this the best way of responding to mouse clicks ?
2. What if the mouse stays pressed on the sprite for some time ? How do I make a single event out of it ?
3. Is this a reasonable way to notify all my sprites of the click ?
Thanks in advance | I usually give my clickable objects a click function, like in your example. I put all of those objects in a list, for easy iteration when the click functions are to be called.
When checking which mouse button was pressed, use the `button` attribute of the event.
```
import pygame
from pygame.locals import * #This lets you use pygame's constants directly.
for event in pygame.event.get():
    if event.type == MOUSEBUTTONDOWN: # Better to separate this into its own if statement as well, since more than one button can be clicked, and it makes for cleaner code.
if event.button == 1:
for object in clickableObjectsList:
object.clickCheck(event.pos)
```
I would say this is the recommended way of doing it. The click only registers once, so it won't tell your sprite if the user is "dragging" with a button held down. That can easily be done with a boolean that is set to True on the MOUSEBUTTONDOWN event and False on the MOUSEBUTTONUP event. Then have your "draggable" objects iterated over to activate their functions, and so on.
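The press/release bookkeeping described above can be sketched without any PyGame specifics; `ButtonState` here is a hypothetical helper that classifies a down/up pair as a click or a drag:

```python
class ButtonState:
    """Turns raw button-down/button-up events into 'click' or 'drag'."""

    def __init__(self):
        self.down_pos = None  # None means the button is currently up

    def on_down(self, pos):
        self.down_pos = pos

    def on_up(self, pos):
        start, self.down_pos = self.down_pos, None
        if start is None:
            return None  # release without a matching press
        return "click" if pos == start else "drag"

state = ButtonState()
state.on_down((10, 10))
print(state.on_up((10, 10)))  # click
state.on_down((10, 10))
print(state.on_up((40, 25)))  # drag
```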
However, if you don't want to use an event handler, you can let an update function check for input with:
```
pygame.mouse.get_pos()
pygame.mouse.get_pressed()
```
This is a bad idea for larger projects, since it can create hard-to-find bugs; it's better to keep event handling in one place. For smaller games, like simple arcade games, the polling style might make more sense, though. |
How do I get the full XML or HTML content of an element using ElementTree? | 380,603 | 5 | 2008-12-19T10:33:52Z | 380,717 | 11 | 2008-12-19T11:21:52Z | [
"python",
"xml",
"api",
"elementtree"
] | That is, all text and subtags, without the tag of an element itself?
Having
```
<p>blah <b>bleh</b> blih</p>
```
I want
```
blah <b>bleh</b> blih
```
element.text returns "blah " and etree.tostring(element) returns:
```
<p>blah <b>bleh</b> blih</p>
``` | ElementTree works fine; you just have to assemble the result yourself. Something like this...
```
"".join( [ "" if t.text is None else t.text ] + [ xml.tostring(e) for e in t.getchildren() ] )
```
Thanks to JV and PEZ for pointing out the errors.
---
Edit.
```
>>> import xml.etree.ElementTree as xml
>>> s= '<p>blah <b>bleh</b> blih</p>\n'
>>> t=xml.fromstring(s)
>>> "".join( [ t.text ] + [ xml.tostring(e) for e in t.getchildren() ] )
'blah <b>bleh</b> blih'
>>>
```
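Wrapped up as a reusable function (iterating the element directly, since `getchildren()` was later deprecated and removed from ElementTree):

```python
import xml.etree.ElementTree as ET

def inner_xml(elem):
    # Text before the first child, then each child serialized;
    # ET.tostring includes each child's tail text automatically.
    parts = [elem.text or ""]
    parts += [ET.tostring(child, encoding="unicode") for child in elem]
    return "".join(parts)

t = ET.fromstring("<p>blah <b>bleh</b> blih</p>")
print(inner_xml(t))  # blah <b>bleh</b> blih
```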
Tail not needed. |
How to do this - python dictionary traverse and search | 380,734 | 7 | 2008-12-19T11:33:06Z | 380,769 | 12 | 2008-12-19T11:44:56Z | [
"python",
"dictionary",
"parsing"
] | I have nested dictionaries:
```
{'key0': {'attrs': {'entity': 'p', 'hash': '34nj3h43b4n3', 'id': '4130'},
u'key1': {'attrs': {'entity': 'r',
'hash': '34njasd3h43b4n3',
'id': '4130-1'},
u'key2': {'attrs': {'entity': 'c',
'hash': '34njasd3h43bdsfsd4n3',
'id': '4130-1-1'}}},
u'key3': {'attrs': {'entity': 'r',
'hash': '34njasasasd3h43b4n3',
'id': '4130-2'},
u'key4': {'attrs': {'entity': 'c',
'hash': '34njawersd3h43bdsfsd4n3',
'id': '4130-2-1'}},
u'key5': {'attrs': {'entity': 'c',
'hash': '34njawersd3h43bdsfsd4n3',
'id': '4130-2-2'}}}},
'someohterthing': 'someothervalue',
'something': 'somevalue'}
```
given an `id` - one of all the `ids` like `4130` to `4130-2-2`.
whats the easiest way to navigate to the correct dictionary?
Like if the given `id` is `4130-2-1` then it should reach the dictionary with `key=key5`
non xml approaches please.
**Edit(1):** The nesting is between `1` to `4` levels, but I know the nesting before I parse.
**Edit(2)**: Fixed the code.
\*\*Edit(3):\*\*Fixed code again for string values of `ids`. Please excuse for the confusion created. This is final I hope :) | If you want to solve the problem in a general way, no matter how many levels of nesting you have in your dict, then create a recursive function which will traverse the tree:
```
def traverse_tree(dictionary, id=None):
for key, value in dictionary.items():
if key == 'id':
if value == id:
print dictionary
        elif isinstance(value, dict):
            traverse_tree(value, id)
return
>>> traverse_tree({1: {'id': 2}, 2: {'id': 3}}, id=2)
{'id': 2}
``` |
How to do this - python dictionary traverse and search | 380,734 | 7 | 2008-12-19T11:33:06Z | 380,874 | 12 | 2008-12-19T12:45:28Z | [
"python",
"dictionary",
"parsing"
] | I have nested dictionaries:
```
{'key0': {'attrs': {'entity': 'p', 'hash': '34nj3h43b4n3', 'id': '4130'},
u'key1': {'attrs': {'entity': 'r',
'hash': '34njasd3h43b4n3',
'id': '4130-1'},
u'key2': {'attrs': {'entity': 'c',
'hash': '34njasd3h43bdsfsd4n3',
'id': '4130-1-1'}}},
u'key3': {'attrs': {'entity': 'r',
'hash': '34njasasasd3h43b4n3',
'id': '4130-2'},
u'key4': {'attrs': {'entity': 'c',
'hash': '34njawersd3h43bdsfsd4n3',
'id': '4130-2-1'}},
u'key5': {'attrs': {'entity': 'c',
'hash': '34njawersd3h43bdsfsd4n3',
'id': '4130-2-2'}}}},
'someohterthing': 'someothervalue',
'something': 'somevalue'}
```
given an `id` - one of all the `ids` like `4130` to `4130-2-2`.
whats the easiest way to navigate to the correct dictionary?
Like if the given `id` is `4130-2-1` then it should reach the dictionary with `key=key5`
non xml approaches please.
**Edit(1):** The nesting is between `1` to `4` levels, but I know the nesting before I parse.
**Edit(2)**: Fixed the code.
\*\*Edit(3):\*\*Fixed code again for string values of `ids`. Please excuse for the confusion created. This is final I hope :) | Your structure is unpleasantly irregular. Here's a version with a **Visitor** function that traverses the `attrs` sub-dictionaries.
```
def walkDict( aDict, visitor, path=() ):
for k in aDict:
if k == 'attrs':
visitor( path, aDict[k] )
elif type(aDict[k]) != dict:
pass
else:
walkDict( aDict[k], visitor, path+(k,) )
def printMe( path, element ):
print path, element
def filterFor( path, element ):
if element['id'] == '4130-2-2':
print path, element
```
You'd use it like this.
```
walkDict( myDict, filterFor )
```
This can be turned into a generator instead of a **Visitor**; it would `yield path, aDict[k]` instead of invoking the visitor function.
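A minimal sketch of that generator version, assuming the same dictionary shape:

```python
def walkDictIter(aDict, path=()):
    # Same traversal as walkDict, but yields (path, attrs) pairs
    # instead of invoking a visitor function.
    for k in aDict:
        if k == 'attrs':
            yield path, aDict[k]
        elif isinstance(aDict[k], dict):
            for item in walkDictIter(aDict[k], path + (k,)):
                yield item
```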
You'd use it in a for loop.
```
for path, attrDict in walkDictIter( aDict ):
# process attrDict...
``` |
How to do this - python dictionary traverse and search | 380,734 | 7 | 2008-12-19T11:33:06Z | 380,987 | 9 | 2008-12-19T13:37:03Z | [
"python",
"dictionary",
"parsing"
] | I have nested dictionaries:
```
{'key0': {'attrs': {'entity': 'p', 'hash': '34nj3h43b4n3', 'id': '4130'},
u'key1': {'attrs': {'entity': 'r',
'hash': '34njasd3h43b4n3',
'id': '4130-1'},
u'key2': {'attrs': {'entity': 'c',
'hash': '34njasd3h43bdsfsd4n3',
'id': '4130-1-1'}}},
u'key3': {'attrs': {'entity': 'r',
'hash': '34njasasasd3h43b4n3',
'id': '4130-2'},
u'key4': {'attrs': {'entity': 'c',
'hash': '34njawersd3h43bdsfsd4n3',
'id': '4130-2-1'}},
u'key5': {'attrs': {'entity': 'c',
'hash': '34njawersd3h43bdsfsd4n3',
'id': '4130-2-2'}}}},
'someohterthing': 'someothervalue',
'something': 'somevalue'}
```
given an `id` - one of all the `ids` like `4130` to `4130-2-2`.
whats the easiest way to navigate to the correct dictionary?
Like if the given `id` is `4130-2-1` then it should reach the dictionary with `key=key5`
non xml approaches please.
**Edit(1):** The nesting is between `1` to `4` levels, but I know the nesting before I parse.
**Edit(2)**: Fixed the code.
\*\*Edit(3):\*\*Fixed code again for string values of `ids`. Please excuse for the confusion created. This is final I hope :) | This kind of problem is often better solved with proper class definitions, not generic dictionaries.
```
class ProperObject( object ):
"""A proper class definition for each "attr" dictionary."""
def __init__( self, path, attrDict ):
self.path= path
self.__dict__.update( attrDict )
def __str__( self ):
return "path %r, entity %r, hash %r, id %r" % (
self.path, self.entity, self.hash, self.id )
masterDict= {}
def builder( path, element ):
masterDict[path]= ProperObject( path, element )
# Use the Visitor to build ProperObjects for each "attr"
walkDict( myDict, builder )
# Now that we have a simple dictionary of Proper Objects, things are simple
for k,v in masterDict.items():
if v.id == '4130-2-2':
print v
```
Also, now that you have Proper Object definitions, you can do the following
```
# Create an "index" of your ProperObjects
import collections
byId= collections.defaultdict(list)
for k in masterDict:
byId[masterDict[k].id].append( masterDict[k] )
# Look up a particular item in the index
print map( str, byId['4130-2-2'] )
``` |
Python: single instance of program | 380,870 | 66 | 2008-12-19T12:42:52Z | 380,876 | 16 | 2008-12-19T12:46:38Z | [
"python",
"locking"
] | Is there a Pythonic way to have only one instance of a program running?
The only reasonable solution I've come up with is trying to run it as a server on some port, then second program trying to bind to same port - fails. But it's not really a great idea, maybe there's something more lightweight than this?
(Take into consideration that program is expected to fail sometimes, i.e. segfault - so things like "lock file" won't work)
**Update**: the solutions offered are much more complex and less reliable than just having a port occupied with a non-existent server, so I'd have to go with that one. | I don't know if it's pythonic enough, but in the Java world listening on a defined port is a pretty widely used solution, as it works on all major platforms and doesn't have any problems with crashing programs.
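A minimal Python sketch of the port-binding approach (the port number 47291 and the helper name are arbitrary choices):

```python
import socket
import sys

def ensure_single_instance(port=47291):
    # Binding a fixed localhost port fails if another process already holds it,
    # and the OS frees the port automatically even after a crash or segfault.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
    except OSError:
        sys.exit("Another instance is already running")
    return sock  # keep the returned socket alive for the program's lifetime

lock_socket = ensure_single_instance()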
Another advantage of listening to a port is that you could send a command to the running instance. For example when the users starts the program a second time, you could send the running instance a command to tell it to open another window (that's what Firefox does, for example. I don't know if they use TCP ports or named pipes or something like that, 'though). |
Python: single instance of program | 380,870 | 66 | 2008-12-19T12:42:52Z | 384,493 | 25 | 2008-12-21T14:02:47Z | [
"python",
"locking"
] | Is there a Pythonic way to have only one instance of a program running?
The only reasonable solution I've come up with is trying to run it as a server on some port, then second program trying to bind to same port - fails. But it's not really a great idea, maybe there's something more lightweight than this?
(Take into consideration that program is expected to fail sometimes, i.e. segfault - so things like "lock file" won't work)
**Update**: the solutions offered are much more complex and less reliable than just having a port occupied with a non-existent server, so I'd have to go with that one. | Simple, ~~cross-platform~~ solution, found in **[another question](http://stackoverflow.com/questions/220525/ensuring-a-single-instance-of-an-application-in-linux#221159)** by [zgoda](http://stackoverflow.com/users/12138/zgoda):
```
import fcntl, sys
pid_file = 'program.pid'
fp = open(pid_file, 'w')
try:
fcntl.lockf(fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
# another instance is running
sys.exit(0)
```
A lot like S.Lott's suggestion, but with the code. |
Python: single instance of program | 380,870 | 66 | 2008-12-19T12:42:52Z | 1,265,445 | 56 | 2009-08-12T10:45:24Z | [
"python",
"locking"
] | Is there a Pythonic way to have only one instance of a program running?
The only reasonable solution I've come up with is trying to run it as a server on some port, then second program trying to bind to same port - fails. But it's not really a great idea, maybe there's something more lightweight than this?
(Take into consideration that program is expected to fail sometimes, i.e. segfault - so things like "lock file" won't work)
**Update**: the solutions offered are much more complex and less reliable than just having a port occupied with a non-existent server, so I'd have to go with that one. | The following code should do the job; it is cross-platform and runs on Python 2.4-3.2. I tested it on Windows, OS X and Linux.
```
from tendo import singleton
me = singleton.SingleInstance() # will sys.exit(-1) if other instance is running
```
The latest code version is available [singleton.py](https://github.com/pycontribs/tendo/blob/master/tendo/singleton.py). Please [file bugs here](https://github.com/pycontribs/tendo/issues).
You can install tendo using one of the following methods:
* `easy_install tendo`
* `pip install tendo`
* manually by getting it from <http://pypi.python.org/pypi/tendo> |
Python: single instance of program | 380,870 | 66 | 2008-12-19T12:42:52Z | 1,662,504 | 19 | 2009-11-02T17:10:48Z | [
"python",
"locking"
] | Is there a Pythonic way to have only one instance of a program running?
The only reasonable solution I've come up with is trying to run it as a server on some port, then second program trying to bind to same port - fails. But it's not really a great idea, maybe there's something more lightweight than this?
(Take into consideration that program is expected to fail sometimes, i.e. segfault - so things like "lock file" won't work)
**Update**: the solutions offered are much more complex and less reliable than just having a port occupied with a non-existent server, so I'd have to go with that one. | This code is Linux-specific (it uses 'abstract' UNIX domain sockets), but it is simple and won't leave stale lock files around. I prefer it to the solution above because it doesn't require a specially reserved TCP port.
```
import socket
import sys

try:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
## Create an abstract socket, by prefixing it with null.
s.bind( '\0postconnect_gateway_notify_lock')
except socket.error, e:
error_code = e.args[0]
error_string = e.args[1]
print "Process already running (%d:%s ). Exiting" % ( error_code, error_string)
sys.exit (0)
```
The unique string `postconnect_gateway_notify_lock` can be changed to allow multiple programs that need a single instance enforced. |
Comparing massive lists of dictionaries in python | 382,466 | 11 | 2008-12-19T22:49:15Z | 382,707 | 18 | 2008-12-20T01:01:00Z | [
"python"
] | I never actually thought I'd run into speed-issues with python, but I have. I'm trying to compare really big lists of dictionaries to each other based on the dictionary values. I compare two lists, with the first like so
```
biglist1=[{'transaction':'somevalue', 'id':'somevalue', 'date':'somevalue' ...}, {'transactio':'somevalue', 'id':'somevalue', 'date':'somevalue' ...}, ...]
```
With 'somevalue' standing for a user-generated string, int or decimal. Now, the second list is pretty similar, except the id-values are always empty, as they have not been assigned yet.
```
biglist2=[{'transaction':'somevalue', 'id':'', 'date':'somevalue' ...}, {'transactio':'somevalue', 'id':'', 'date':'somevalue' ...}, ...]
```
So I want to get a list of the dictionaries in biglist2 that match the dictionaries in biglist1 for all other keys *except* id.
I've been doing
```
for item in biglist2:
for transaction in biglist1:
if item['transaction'] == transaction['transaction']:
list_transactionnamematches.append(transaction)
for item in biglist2:
for transaction in list_transactionnamematches:
if item['date'] == transaction['date']:
list_transactionnamematches.append(transaction)
```
... and so on, not comparing id values, until I get a final list of matches. Since the lists can be really big (around 3000+ items each), this takes quite some time for python to loop through.
I'm guessing this isn't really how this kind of comparison should be done. Any ideas? | Index on the fields you want to use for lookup. O(n+m)
```
matches = []
biglist1_indexed = {}
for item in biglist1:
biglist1_indexed[(item["transaction"], item["date"])] = item
for item in biglist2:
if (item["transaction"], item["date"]) in biglist1_indexed:
matches.append(item)
```
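If you only need the matching items from `biglist2` (rather than the `biglist1` records), a set of key tuples is an equally fast variant; `match_on` is a hypothetical helper name:

```python
def match_on(biglist1, biglist2, keys=("transaction", "date")):
    # Build a set of key tuples from biglist1, then filter biglist2 in O(n+m).
    seen = {tuple(item[k] for k in keys) for item in biglist1}
    return [item for item in biglist2 if tuple(item[k] for k in keys) in seen]

a = [{"transaction": "t1", "id": "7", "date": "d1"}]
b = [{"transaction": "t1", "id": "", "date": "d1"},
     {"transaction": "t2", "id": "", "date": "d2"}]
print(match_on(a, b))  # only the first dict of b matches
```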
This is probably thousands of times faster than what you're doing now. |
In what contexts do programming languages make real use of an Infinity value? | 382,603 | 25 | 2008-12-19T23:54:04Z | 382,605 | 34 | 2008-12-19T23:55:12Z | [
"python",
"ruby",
"language-agnostic",
"idioms",
"infinity"
] | So in Ruby there is a trick to specify infinity:
```
1.0/0
=> Infinity
```
I believe in Python you can do something like this
```
float('inf')
```
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
```
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
```
To summarize, what I'm looking for is a real world reason to use Infinity.
**EDIT**: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people *actually* used it. | Off the top of my head, it can be useful as an initial value when searching for a minimum value.
For example:
```
min = float('inf')
for x in somelist:
if x<min:
min=x
```
Which I prefer to setting `min` initially to the first value of `somelist`.
Of course, in Python, you should just use the min() built-in function in most cases. |
In what contexts do programming languages make real use of an Infinity value? | 382,603 | 25 | 2008-12-19T23:54:04Z | 382,628 | 11 | 2008-12-20T00:07:03Z | [
"python",
"ruby",
"language-agnostic",
"idioms",
"infinity"
] | So in Ruby there is a trick to specify infinity:
```
1.0/0
=> Infinity
```
I believe in Python you can do something like this
```
float('inf')
```
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
```
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
```
To summarize, what I'm looking for is a real world reason to use Infinity.
**EDIT**: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people *actually* used it. | In some physics calculations you can normalize irregularities (ie, infinite numbers) of the same order with each other, canceling them both and allowing a approximate result to come through.
When you deal with limits, calculations like (infinity / infinity) -> approaching a finite a number could be achieved. It's useful for the language to have the ability to overwrite the regular divide-by-zero error. |
In what contexts do programming languages make real use of an Infinity value? | 382,603 | 25 | 2008-12-19T23:54:04Z | 382,674 | 34 | 2008-12-20T00:40:24Z | [
"python",
"ruby",
"language-agnostic",
"idioms",
"infinity"
] | So in Ruby there is a trick to specify infinity:
```
1.0/0
=> Infinity
```
I believe in Python you can do something like this
```
float('inf')
```
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
```
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
```
To summarize, what I'm looking for is a real world reason to use Infinity.
**EDIT**: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people *actually* used it. | Dijkstra's algorithm typically assigns infinity as the initial tentative distance to every node in the graph. This doesn't *have* to be "infinity", just some arbitrarily large constant, but in Java I typically use Double.POSITIVE_INFINITY. I assume Ruby could be used similarly. |
In what contexts do programming languages make real use of an Infinity value? | 382,603 | 25 | 2008-12-19T23:54:04Z | 382,686 | 8 | 2008-12-20T00:46:51Z | [
"python",
"ruby",
"language-agnostic",
"idioms",
"infinity"
] | So in Ruby there is a trick to specify infinity:
```
1.0/0
=> Infinity
```
I believe in Python you can do something like this
```
float('inf')
```
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
```
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
```
To summarize, what I'm looking for is a real world reason to use Infinity.
**EDIT**: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people *actually* used it. | [Alpha-beta pruning](http://en.wikipedia.org/wiki/Alpha-beta_pruning) |
In what contexts do programming languages make real use of an Infinity value? | 382,603 | 25 | 2008-12-19T23:54:04Z | 382,736 | 10 | 2008-12-20T01:29:23Z | [
"python",
"ruby",
"language-agnostic",
"idioms",
"infinity"
] | So in Ruby there is a trick to specify infinity:
```
1.0/0
=> Infinity
```
I believe in Python you can do something like this
```
float('inf')
```
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
```
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
```
To summarize, what I'm looking for is a real world reason to use Infinity.
**EDIT**: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people *actually* used it. | Use `Infinity` and `-Infinity` when implementing a mathematical algorithm calls for it.
In Ruby, `Infinity` and `-Infinity` have nice comparative properties so that `-Infinity` < `x` < `Infinity` for any real number `x`. For example, `Math.log(0)` returns `-Infinity`, extending to `0` the property that `x > y` implies that `Math.log(x) > Math.log(y)`. Also, `Infinity * x` is `Infinity` if x > 0, `-Infinity` if x < 0, and 'NaN' (not a number; that is, undefined) if x is 0.
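Python's `float('inf')` obeys the same ordering and sign rules (though, unlike Ruby's `Math.log(0)`, Python's `math.log(0)` raises an exception rather than returning `-inf`):

```python
import math

inf = float("inf")

assert -inf < -1e308 < 0 < 1e308 < inf   # -inf < x < inf for every finite x
assert inf * 2 == inf and inf * -2 == -inf
assert math.isnan(inf * 0)               # inf * 0 is undefined -> NaN
assert math.isnan(inf - inf)             # so is inf - inf
```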
For example, I use the following bit of code in part of the calculation of some [log likelihood ratios](http://en.wikipedia.org/wiki/Likelihood_ratio). I explicitly reference `-Infinity` to define a value even if `k` is `0` or `n` AND `x` is `0` or `1`.
```
Infinity = 1.0/0.0
def Similarity.log_l(k, n, x)
  if x == 0 or x == 1
    -Infinity
  else
    k * Math.log(x.to_f) + (n-k) * Math.log(1.0-x)
  end
end
``` |
In what contexts do programming languages make real use of an Infinity value? | 382,603 | 25 | 2008-12-19T23:54:04Z | 383,348 | 17 | 2008-12-20T14:03:13Z | [
"python",
"ruby",
"language-agnostic",
"idioms",
"infinity"
] | So in Ruby there is a trick to specify infinity:
```
1.0/0
=> Infinity
```
I believe in Python you can do something like this
```
float('inf')
```
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
```
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
```
To summarize, what I'm looking for is a real world reason to use Infinity.
**EDIT**: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people *actually* used it. | There seems to be an implied "Why does this functionality even exist?" in your question. And the reason is that Ruby and Python are just giving access to the full range of values that one can specify in floating point form as specified by IEEE.
This page seems to describe it well:
<http://steve.hollasch.net/cgindex/coding/ieeefloat.html>
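The special values that page describes can be poked at directly from Python, using only the standard library:

```python
import math

nan = float("nan")
neg_zero = -0.0

assert nan != nan                            # NaN is unequal even to itself
assert neg_zero == 0.0                       # -0.0 compares equal to 0.0...
assert math.copysign(1.0, neg_zero) == -1.0  # ...but still carries its sign bit
assert float("inf") > 1.797693e308           # inf exceeds the largest finite double
```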
As a result, you can also have NaN (Not-a-number) values and -0.0, while you may not immediately have real-world uses for those either. |
In what contexts do programming languages make real use of an Infinity value? | 382,603 | 25 | 2008-12-19T23:54:04Z | 399,013 | 8 | 2008-12-29T23:05:55Z | [
"python",
"ruby",
"language-agnostic",
"idioms",
"infinity"
] | So in Ruby there is a trick to specify infinity:
```
1.0/0
=> Infinity
```
I believe in Python you can do something like this
```
float('inf')
```
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
```
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
```
To summarize, what I'm looking for is a real world reason to use Infinity.
**EDIT**: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people *actually* used it. | I use it to specify the mass and inertia of a static object in physics simulations. Static objects are essentially unaffected by gravity and other simulation forces. |
Is @measured a standard decorator? What library is it in? | 382,624 | 2 | 2008-12-20T00:04:41Z | 382,666 | 13 | 2008-12-20T00:37:25Z | [
"python",
"decorator"
] | In [this blog article](http://abstracthack.wordpress.com/2007/09/05/multi-threaded-map-for-python/) they use the construct:
```
@measured
def some_func():
#...
# Presumably outputs something like "some_func() is finished in 121.333 s" somewhere
```
This `@measured` directive doesn't seem to work with raw python. What is it?
UPDATE: I see from Triptych that `@something` is valid, but is where can I find `@measured`, is it in a library somewhere, or is the author of this blog using something from his own private code base? | `@measured` decorates the some\_func() function, using a function or class named `measured`. The `@` is the decorator syntax, `measured` is the decorator function name.
Decorators can be a bit hard to understand, but they are basically used to either wrap code around a function, or inject code into one.
For example the measured function (used as a decorator) is probably implemented like this...
```
import time
def measured(orig_function):
# When you decorate a function, the decorator func is called
# with the original function as the first argument.
# You return a new, modified function. This returned function
# is what the to-be-decorated function becomes.
print "INFO: This from the decorator function"
print "INFO: I am about to decorate %s" % (orig_function)
# This is what some_func will become:
def newfunc(*args, **kwargs):
print "INFO: This is the decorated function being called"
start = time.time()
# Execute the old function, passing arguments
orig_func_return = orig_function(*args, **kwargs)
end = time.time()
print "Function took %s seconds to execute" % (end - start)
return orig_func_return # return the output of the original function
# Return the modified function, which..
return newfunc
@measured
def some_func(arg1):
print "This is my original function! Argument was %s" % arg1
# We call the now decorated function..
some_func(123)
#.. and we should get (minus the INFO messages):
# This is my original function! Argument was 123
# Function took 7.86781311035e-06 seconds to execute
```
The decorator syntax is just a shorter and neater way of doing the following:
```
def some_func():
print "This is my original function!"
some_func = measured(some_func)
```
There are some decorators included with Python, for example [`staticmethod`](http://docs.python.org/library/functions.html#staticmethod) - but `measured` is not one of them:
```
>>> type(measured)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'measured' is not defined
```
Check the project's `import` statements to see where the function or class is coming from. If it uses `from blah import *` you'll need to check all of those files (which is why `import *` is discouraged), or you could just do something like `grep -R "def measured" *` |
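For completeness: on newer Pythons the wrapper is usually decorated with `functools.wraps`, so the decorated function keeps its original name and docstring. A minimal sketch (Python 3 print syntax, unlike the 2008-era code above):

```python
import functools
import time

def measured(orig_function):
    @functools.wraps(orig_function)  # keeps __name__/__doc__ of the original
    def newfunc(*args, **kwargs):
        start = time.time()
        result = orig_function(*args, **kwargs)
        print("%s() finished in %.3f s" % (orig_function.__name__, time.time() - start))
        return result
    return newfunc

@measured
def some_func(x):
    return x * 2

print(some_func(21))       # 42 (printed after the timing line)
print(some_func.__name__)  # some_func, not newfunc, thanks to wraps
```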
Django: How can I use my model classes to interact with my database from outside Django? | 383,073 | 13 | 2008-12-20T07:48:15Z | 383,089 | 15 | 2008-12-20T08:18:44Z | [
"python",
"django",
"django-models"
] | I'd like to write a script that interacts with my DB using a Django app's model. However, I would like to be able to run this script from the command line or via cron. What all do I need to import to allow this? | You need to set up the Django environment variables. These tell Python where your project is, and what the name of the settings module is (the project name in the settings module is optional):
```
import os
import sys

# Note: assigning os.environ['PYTHONPATH'] here has no effect on the
# already-running interpreter; extend sys.path directly instead.
sys.path.append('/path/to/myproject')
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
```
Now you should be able to access the models:
```
from myproject.models import MyModel
all_my_models = MyModel.objects.all()
``` |
Django: How can I use my model classes to interact with my database from outside Django? | 383,073 | 13 | 2008-12-20T07:48:15Z | 383,246 | 13 | 2008-12-20T12:06:39Z | [
"python",
"django",
"django-models"
] | I'd like to write a script that interacts with my DB using a Django app's model. However, I would like to be able to run this script from the command line or via cron. What all do I need to import to allow this? | The preferred way should be to add a [custom command](http://docs.djangoproject.com/en/dev/howto/custom-management-commands/) and then run it as any other `django-admin` (not to be confused with `django.contrib.admin`) command:
```
./manage.py mycustomcommand --customarg
```
Setting DJANGO\_SETTINGS\_MODULE should only be used when a custom command is not feasible. |
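For reference, a minimal custom command is only a few lines (a sketch following the linked how-to; `myapp` and `MyModel` are placeholder names, and the file must live at `myapp/management/commands/mycustomcommand.py` inside an installed app):

```python
# myapp/management/commands/mycustomcommand.py -- hypothetical paths/names
from django.core.management.base import BaseCommand

from myapp.models import MyModel  # placeholder app and model


class Command(BaseCommand):
    help = "Example command that uses the ORM outside a request cycle"

    def handle(self, *args, **options):
        self.stdout.write("Found %d objects" % MyModel.objects.count())
```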
How to iterate over a list repeating each element in Python | 383,565 | 6 | 2008-12-20T18:23:52Z | 383,578 | 12 | 2008-12-20T18:36:27Z | [
"python",
"iterator"
] | I'm using Python to **infinitely** iterate over a list, repeating each element in the list a number of times. For example given the list:
```
l = [1, 2, 3, 4]
```
I would like to output each element two times and then repeat the cycle:
```
1, 1, 2, 2, 3, 3, 4, 4, 1, 1, 2, 2 ...
```
I've got an idea of where to start:
```
def cycle(iterable):
if not hasattr(cycle, 'state'):
cycle.state = itertools.cycle(iterable)
return cycle.state.next()
>>> l = [1, 2, 3, 4]
>>> cycle(l)
1
>>> cycle(l)
2
>>> cycle(l)
3
>>> cycle(l)
4
>>> cycle(l)
1
```
But how would I repeat each element?
**Edit**
To clarify this should iterate infinitely. Also I've used repeating the element twice as the shortest example -
**I would really like to repeat each element n times**.
**Update**
Will your solution lead me to what I was looking for:
```
>>> import itertools
>>> def ncycle(iterable, n):
... for item in itertools.cycle(iterable):
... for i in range(n):
... yield item
>>> a = ncycle([1,2], 2)
>>> a.next()
1
>>> a.next()
1
>>> a.next()
2
>>> a.next()
2
>>> a.next()
1
>>> a.next()
1
>>> a.next()
2
>>> a.next()
2
```
Thanks for the quick answers! | How about this:
```
import itertools
def bicycle(iterable, repeat=1):
for item in itertools.cycle(iterable):
for _ in xrange(repeat):
yield item
c = bicycle([1,2,3,4], 2)
print [c.next() for _ in xrange(10)]
```
EDIT: incorporated [bishanty's](http://stackoverflow.com/users/37522/bishanty) repeat count parameter and [Adam Rosenfield's list comprehension](http://stackoverflow.com/questions/383565/how-to-iterate-over-a-list-repeating-each-element-in-python#383574). |
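The same behaviour can also be expressed with itertools alone, with no inner loop in the generator body; a sketch using `chain.from_iterable` and `repeat` (Python 3 `next()` syntax):

```python
import itertools

def ncycle(iterable, n):
    # cycle() yields 1,2,3,4,1,2,... forever; repeat(item, n) emits each
    # item n times; chain.from_iterable flattens the result lazily.
    return itertools.chain.from_iterable(
        itertools.repeat(item, n) for item in itertools.cycle(iterable))

c = ncycle([1, 2, 3, 4], 2)
print([next(c) for _ in range(10)])  # [1, 1, 2, 2, 3, 3, 4, 4, 1, 1]
```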
How do you apply 'or' to all values of a list in Python? | 383,623 | 14 | 2008-12-20T19:05:12Z | 383,642 | 31 | 2008-12-20T19:16:37Z | [
"python",
"list",
"reduce"
] | How do you apply 'or' to all values of a list in Python? I'm thinking something like:
```
or([True, True, False])
```
or if it was possible:
```
reduce(or, [True, True, False])
``` | The built-in function `any` does what you want:
```
>>> any([True, True, False])
True
>>> any([False, False, False])
False
>>> any([False, False, True])
True
```
`any` has the advantage over `reduce` of short-circuiting: it stops consuming the sequence as soon as it finds a true value. This can be very handy if the sequence is a generator with an expensive operation behind it. For example:
```
>>> def iam(result):
... # Pretend this is expensive.
... print "iam(%r)" % result
... return result
...
>>> any((iam(x) for x in [False, True, False]))
iam(False)
iam(True)
True
>>> reduce(lambda x,y: x or y, (iam(x) for x in [False, True, False]))
iam(False)
iam(True)
iam(False)
True
```
If your version of Python doesn't have the `any()`/`all()` builtins, they are easily implemented as [Guido van Rossum suggested](http://www.artima.com/weblogs/viewpost.jsp?thread=98196):
```
def any(S):
for x in S:
if x:
return True
return False
def all(S):
for x in S:
if not x:
return False
return True
``` |
How do you apply 'or' to all values of a list in Python? | 383,623 | 14 | 2008-12-20T19:05:12Z | 383,668 | 7 | 2008-12-20T19:49:58Z | [
"python",
"list",
"reduce"
] | How do you apply 'or' to all values of a list in Python? I'm thinking something like:
```
or([True, True, False])
```
or if it was possible:
```
reduce(or, [True, True, False])
``` | No one has mentioned it, but "`or`" is available as a function in the operator module:
```
from operator import or_
```
Then you can use `reduce` as above.
I would always advise `any` though, in more recent Pythons. |
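A short illustration (Python 3, where `reduce` lives in `functools`). One caveat worth spelling out: `operator.or_` is the bitwise `|` operator, which happens to coincide with logical `or` on `bool` values, and unlike `any`, `reduce` never short-circuits:

```python
from functools import reduce  # built-in on Python 2, in functools on Python 3
from operator import or_

print(reduce(or_, [True, True, False]))    # True
print(reduce(or_, [False, False, False]))  # False
# Caveat: or_ is bitwise |, so it only matches logical 'or' for bools,
# and reduce() always consumes the whole sequence (no short-circuit).
```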
104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN? | 383,738 | 20 | 2008-12-20T21:04:42Z | 383,816 | 10 | 2008-12-20T22:18:15Z | [
"python",
"sockets",
"wsgi",
"httplib2",
"werkzeug"
] | We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:
```
(104, 'Connection reset by peer')
```
When I listen in with wireshark, the "good" and "bad" responses look very similar:
* Because of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK
* The service sends the response, one packet per header (HTTP/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.
* (Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.
* (Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.
Both the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual\_env.
The client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.
The service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple\_server. I ran the WSGI app through wsgiref.validator with no issues.
It seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket.\_socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.
"Connection reset by peer" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?
\*\* Further debugging - Looks like server on Linux \*\*
I have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86\_64? A bad glibc? wsgiref? Still looking...
\*\* Further testing - wsgiref looks flaky \*\*
We've gone to production with Apache and mod\_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. This will let your server run OK in development mode, and solidly in production. | I've had this problem. See [The Python "Connection Reset By Peer" Problem](http://www.itmaybeahack.com/homepage/iblog/architecture/C551260341/E20081031204203/index.html).
You have (most likely) run afoul of small timing issues based on the Python Global Interpreter Lock.
You can (sometimes) correct this with a `time.sleep(0.01)` placed strategically.
"Where?" you ask. Beats me. The idea is to provide some better thread concurrency in and around the client requests. Try putting it just *before* you make the request so that the GIL is reset and the Python interpreter can clear out any pending threads. |
104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN? | 383,738 | 20 | 2008-12-20T21:04:42Z | 481,952 | 7 | 2009-01-27T00:37:29Z | [
"python",
"sockets",
"wsgi",
"httplib2",
"werkzeug"
] | We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:
```
(104, 'Connection reset by peer')
```
When I listen in with wireshark, the "good" and "bad" responses look very similar:
* Because of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK
* The service sends the response, one packet per header (HTTP/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.
* (Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.
* (Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.
Both the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual\_env.
The client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.
The service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple\_server. I ran the WSGI app through wsgiref.validator with no issues.
It seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket.\_socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.
"Connection reset by peer" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?
\*\* Further debugging - Looks like server on Linux \*\*
I have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86\_64? A bad glibc? wsgiref? Still looking...
\*\* Further testing - wsgiref looks flaky \*\*
We've gone to production with Apache and mod\_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. This will let your server run OK in development mode, and solidly in production. | Don't use wsgiref for production. Use Apache and mod\_wsgi, or something else.
We continue to see these connection resets, sometimes frequently, with wsgiref (the backend used by the werkzeug test server, and possibly others like the Django test server). Our solution was to log the error, retry the call in a loop, and give up after ten failures. httplib2 tries twice, but we needed a few more. They seem to come in bunches as well - adding a 1 second sleep might clear the issue.
We've never seen a connection reset when running through Apache and mod\_wsgi. I don't know what they do differently, (maybe they just mask them), but they don't appear.
When we asked the local dev community for help, someone confirmed that they see a lot of connection resets with wsgiref that go away on the production server. There's a bug there, but it is going to be hard to find it. |
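The retry loop described above might be sketched like this (illustrative only; `make_request` stands in for whatever httplib2 call you are wrapping):

```python
import socket
import time

def call_with_retry(make_request, retries=10, delay=1.0):
    # make_request is any zero-argument callable performing the HTTP call;
    # it may raise socket.error (errno 104 == ECONNRESET on Linux).
    for attempt in range(retries):
        try:
            return make_request()
        except socket.error:
            if attempt == retries - 1:
                raise  # give up after the last retry
            time.sleep(delay)
```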
Best way to return the language of a given string | 383,966 | 7 | 2008-12-21T01:12:56Z | 383,988 | 14 | 2008-12-21T01:40:58Z | [
"python",
"algorithm",
"string"
] | More specifically, I'm trying to check if given string (a sentence) is in Turkish.
I can check if the string has Turkish characters such as Ç, Ş, Ü, Ö, Ğ, etc. However that's not very reliable as those might be converted to C, S, U, O, G before I receive the string.
Another method is to have the 100 most used words in Turkish and check if the sentence includes any/some of those words. I can combine these two methods and use a point system.
What do you think is the most efficient way to solve my problem in Python?
Related question: [(human) Language of a document](http://stackoverflow.com/questions/257125/human-language-of-a-document) (Perl, Google Translation API) | One option would be to use a Bayesian Classifier such as [Reverend](http://www.divmod.org/trac/wiki/DivmodReverend). The Reverend homepage gives this suggestion for a naive language detector:
```
from reverend.thomas import Bayes
guesser = Bayes()
guesser.train('french', 'le la les du un une je il elle de en')
guesser.train('german', 'der die das ein eine')
guesser.train('spanish', 'el uno una las de la en')
guesser.train('english', 'the it she he they them are were to')
guesser.guess('they went to el cantina')
guesser.guess('they were flying planes')
guesser.train('english', 'the rain in spain falls mainly on the plain')
guesser.save('my_guesser.bay')
```
Training with more complex token sets would strengthen the results. For more information on Bayesian classification, [see here](http://en.wikipedia.org/wiki/Bayesian_analysis) and [here](http://en.wikipedia.org/wiki/Naive_Bayesian_classification). |
Best way to return the language of a given string | 383,966 | 7 | 2008-12-21T01:12:56Z | 384,062 | 10 | 2008-12-21T03:32:36Z | [
"python",
"algorithm",
"string"
] | More specifically, I'm trying to check if given string (a sentence) is in Turkish.
I can check if the string has Turkish characters such as Ç, Ş, Ü, Ö, Ğ, etc. However that's not very reliable as those might be converted to C, S, U, O, G before I receive the string.
Another method is to have the 100 most used words in Turkish and check if the sentence includes any/some of those words. I can combine these two methods and use a point system.
What do you think is the most efficient way to solve my problem in Python?
Related question: [(human) Language of a document](http://stackoverflow.com/questions/257125/human-language-of-a-document) (Perl, Google Translation API) | A simple statistical method that I've used before:
Get a decent amount of sample training text in the language you want to detect. Split it up into trigrams, e.g.
"Hello foobar" in trigrams is:
'Hel', 'ell', 'llo', 'lo ', 'o f', ' fo', 'foo', 'oob', 'oba', 'bar'
For all of the source data, count up the frequency of occurrence of each trigram, presumably in a dict where key=trigram and value=frequency. You can limit this to the top 300 most frequent 3-letter combinations or something if you want. Pickle the dict away somewhere.
To tell if a new sample of text is written in the same language, repeat the above steps for the sample text. Now, all you have to do is compute a correlation between the sample trigram frequencies and the training trigram frequencies. You'll need to play with it a bit to pick a threshold correlation above which you are willing to consider the input to be Turkish or not.
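A minimal sketch of the counting step (illustrative only; a real detector would normalize case and punctuation, train on large corpora, and use a proper correlation measure such as the rank-order statistic from the cited paper):

```python
from collections import Counter

def trigram_profile(text, top=300):
    # Slide a 3-character window over the text and keep the most common trigrams.
    trigrams = (text[i:i + 3] for i in range(len(text) - 2))
    return dict(Counter(trigrams).most_common(top))

def similarity(profile_a, profile_b):
    # Crude overlap score: sum of the smaller count for each shared trigram.
    shared = set(profile_a) & set(profile_b)
    return sum(min(profile_a[t], profile_b[t]) for t in shared)

text = "Hello foobar"
print([text[i:i + 3] for i in range(len(text) - 2)])
# ['Hel', 'ell', 'llo', 'lo ', 'o f', ' fo', 'foo', 'oob', 'oba', 'bar']
```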
This method has been shown to be highly accurate, beating out more sophisticated methods, see
[Cavnar & Trenkle (1994): "N-Gram-Based Text Categorization"](http://lesfourmisrouges.com/bs/documentation/@%20work/sdair-94-bc.pdf)
Using trigrams solves the problem of using word lists, as there is a vast number of words in any given language, especially given different grammatical permutations. I've tried looking for common words; the problem is they often give a false positive for some other language, or themselves have many permutations. The statistical method doesn't require a lot of storage space and does not require complex parsing. By the way, this method only works for languages with a phonetic writing system; it works poorly, if at all, with languages that use an ideographic writing system (e.g. Chinese, Japanese, Korean).
Alternatively wikipedia has a section on Turkish in [its handy language recognition chart.](http://en.wikipedia.org/wiki/Language_recognition_chart#Turkic_languages) |
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 384,125 | 119 | 2008-12-21T05:17:39Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | I already knew about the color escapes; I used them in my bash prompt a while ago. Thanks anyway.
What I wanted was to integrate it with the logging module, which I eventually did after some trial and error.
Here is what I ended up with:
```
BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(8)
#The background is set with 40 plus the number of the color, and the foreground with 30
#These are the sequences needed to get colored output
RESET_SEQ = "\033[0m"
COLOR_SEQ = "\033[1;%dm"
BOLD_SEQ = "\033[1m"
def formatter_message(message, use_color = True):
if use_color:
message = message.replace("$RESET", RESET_SEQ).replace("$BOLD", BOLD_SEQ)
else:
message = message.replace("$RESET", "").replace("$BOLD", "")
return message
COLORS = {
'WARNING': YELLOW,
'INFO': WHITE,
'DEBUG': BLUE,
'CRITICAL': YELLOW,
'ERROR': RED
}
class ColoredFormatter(logging.Formatter):
def __init__(self, msg, use_color = True):
logging.Formatter.__init__(self, msg)
self.use_color = use_color
def format(self, record):
levelname = record.levelname
if self.use_color and levelname in COLORS:
levelname_color = COLOR_SEQ % (30 + COLORS[levelname]) + levelname + RESET_SEQ
record.levelname = levelname_color
return logging.Formatter.format(self, record)
```
And to use it, create your own Logger:
```
# Custom logger class with multiple destinations
class ColoredLogger(logging.Logger):
FORMAT = "[$BOLD%(name)-20s$RESET][%(levelname)-18s] %(message)s ($BOLD%(filename)s$RESET:%(lineno)d)"
COLOR_FORMAT = formatter_message(FORMAT, True)
def __init__(self, name):
logging.Logger.__init__(self, name, logging.DEBUG)
color_formatter = ColoredFormatter(self.COLOR_FORMAT)
console = logging.StreamHandler()
console.setFormatter(color_formatter)
self.addHandler(console)
return
logging.setLoggerClass(ColoredLogger)
```
Just in case anyone else needs it.
Be careful if you're using more than one logger or handler: `ColoredFormatter` is changing the record object, which is passed further to other handlers or propagated to other loggers. If you have configured file loggers etc. you probably don't want to have the colors in the log files. To avoid that, it's probably best to simply create a copy of `record` with `copy.copy()` before manipulating the levelname attribute, or to reset the levelname to the previous value, before returning the formatted string (credit to [Michael](https://stackoverflow.com/users/715042/michael) in the comments). |
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 1,336,640 | 56 | 2009-08-26T18:29:35Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | Here is a solution that should work on any platform. If it doesn't just tell me and I will update it.
How it works: on platforms supporting ANSI escapes (non-Windows) it uses them, and on Windows it uses API calls to change the console colors.
The script monkey-patches the logging.StreamHandler.emit method from the standard library by adding a wrapper to it.
**TestColorer.py**
```
# Usage: add Colorer.py near you script and import it.
import logging
import Colorer
logging.warn("a warning")
logging.error("some error")
logging.info("some info")
```
**Colorer.py**
```
#!/usr/bin/env python
# encoding: utf-8
import logging
# now we patch Python code to add color support to logging.StreamHandler
def add_coloring_to_emit_windows(fn):
# add methods we need to the class
def _out_handle(self):
import ctypes
return ctypes.windll.kernel32.GetStdHandle(self.STD_OUTPUT_HANDLE)
out_handle = property(_out_handle)
def _set_color(self, code):
import ctypes
# Constants from the Windows API
self.STD_OUTPUT_HANDLE = -11
hdl = ctypes.windll.kernel32.GetStdHandle(self.STD_OUTPUT_HANDLE)
ctypes.windll.kernel32.SetConsoleTextAttribute(hdl, code)
setattr(logging.StreamHandler, '_set_color', _set_color)
def new(*args):
FOREGROUND_BLUE = 0x0001 # text color contains blue.
FOREGROUND_GREEN = 0x0002 # text color contains green.
FOREGROUND_RED = 0x0004 # text color contains red.
FOREGROUND_INTENSITY = 0x0008 # text color is intensified.
FOREGROUND_WHITE = FOREGROUND_BLUE|FOREGROUND_GREEN |FOREGROUND_RED
# winbase.h
STD_INPUT_HANDLE = -10
STD_OUTPUT_HANDLE = -11
STD_ERROR_HANDLE = -12
# wincon.h
FOREGROUND_BLACK = 0x0000
FOREGROUND_BLUE = 0x0001
FOREGROUND_GREEN = 0x0002
FOREGROUND_CYAN = 0x0003
FOREGROUND_RED = 0x0004
FOREGROUND_MAGENTA = 0x0005
FOREGROUND_YELLOW = 0x0006
FOREGROUND_GREY = 0x0007
FOREGROUND_INTENSITY = 0x0008 # foreground color is intensified.
BACKGROUND_BLACK = 0x0000
BACKGROUND_BLUE = 0x0010
BACKGROUND_GREEN = 0x0020
BACKGROUND_CYAN = 0x0030
BACKGROUND_RED = 0x0040
BACKGROUND_MAGENTA = 0x0050
BACKGROUND_YELLOW = 0x0060
BACKGROUND_GREY = 0x0070
BACKGROUND_INTENSITY = 0x0080 # background color is intensified.
levelno = args[1].levelno
if(levelno>=50):
color = BACKGROUND_YELLOW | FOREGROUND_RED | FOREGROUND_INTENSITY | BACKGROUND_INTENSITY
elif(levelno>=40):
color = FOREGROUND_RED | FOREGROUND_INTENSITY
elif(levelno>=30):
color = FOREGROUND_YELLOW | FOREGROUND_INTENSITY
elif(levelno>=20):
color = FOREGROUND_GREEN
elif(levelno>=10):
color = FOREGROUND_MAGENTA
else:
color = FOREGROUND_WHITE
args[0]._set_color(color)
ret = fn(*args)
args[0]._set_color( FOREGROUND_WHITE )
#print "after"
return ret
return new
def add_coloring_to_emit_ansi(fn):
# add methods we need to the class
def new(*args):
levelno = args[1].levelno
if(levelno>=50):
color = '\x1b[31m' # red
elif(levelno>=40):
color = '\x1b[31m' # red
elif(levelno>=30):
color = '\x1b[33m' # yellow
elif(levelno>=20):
color = '\x1b[32m' # green
elif(levelno>=10):
color = '\x1b[35m' # pink
else:
color = '\x1b[0m' # normal
args[1].msg = color + args[1].msg + '\x1b[0m' # normal
#print "after"
return fn(*args)
return new
import platform
if platform.system()=='Windows':
# Windows does not support ANSI escapes and we are using API calls to set the console color
logging.StreamHandler.emit = add_coloring_to_emit_windows(logging.StreamHandler.emit)
else:
# all non-Windows platforms support ANSI escapes, so we use them
logging.StreamHandler.emit = add_coloring_to_emit_ansi(logging.StreamHandler.emit)
#log = logging.getLogger()
#log.addFilter(log_filter())
#//hdlr = logging.StreamHandler()
#//hdlr.setFormatter(formatter())
``` |
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 2,205,909 | 8 | 2010-02-05T08:36:51Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | I modified the original example provided by Sorin and subclassed StreamHandler as a ColoredConsoleHandler.
The downside of their solution is that it modifies the message; because it alters the actual log message, any other handlers will get the modified message as well.
In our case this resulted in log files with color codes in them, because we use multiple loggers.
The class below only works on platforms that support ANSI escapes, but it should be trivial to add the Windows color codes to it.
```
import copy
import logging
class ColoredConsoleHandler(logging.StreamHandler):
def emit(self, record):
# Need to make an actual copy of the record
# to prevent altering the message for other loggers
myrecord = copy.copy(record)
levelno = myrecord.levelno
if(levelno >= 50): # CRITICAL / FATAL
color = '\x1b[31m' # red
elif(levelno >= 40): # ERROR
color = '\x1b[31m' # red
elif(levelno >= 30): # WARNING
color = '\x1b[33m' # yellow
elif(levelno >= 20): # INFO
color = '\x1b[32m' # green
elif(levelno >= 10): # DEBUG
color = '\x1b[35m' # pink
else: # NOTSET and anything else
color = '\x1b[0m' # normal
myrecord.msg = color + str(myrecord.msg) + '\x1b[0m' # normal
logging.StreamHandler.emit(self, myrecord)
``` |
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 2,532,931 | 12 | 2010-03-28T12:49:40Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | I updated the example from airmind to support tags for foreground and background.
Just use the color variables $BLACK - $WHITE in your log formatter string. To set the background just use $BG-BLACK - $BG-WHITE.
```
import logging
BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(8)
COLORS = {
'WARNING' : YELLOW,
'INFO' : WHITE,
'DEBUG' : BLUE,
'CRITICAL' : YELLOW,
'ERROR' : RED,
'RED' : RED,
'GREEN' : GREEN,
'YELLOW' : YELLOW,
'BLUE' : BLUE,
'MAGENTA' : MAGENTA,
'CYAN' : CYAN,
'WHITE' : WHITE,
}
RESET_SEQ = "\033[0m"
COLOR_SEQ = "\033[1;%dm"
BOLD_SEQ = "\033[1m"
class ColorFormatter(logging.Formatter):
def __init__(self, *args, **kwargs):
# can't do super(...) here because Formatter is an old school class
logging.Formatter.__init__(self, *args, **kwargs)
def format(self, record):
levelname = record.levelname
color = COLOR_SEQ % (30 + COLORS[levelname])
message = logging.Formatter.format(self, record)
message = message.replace("$RESET", RESET_SEQ)\
.replace("$BOLD", BOLD_SEQ)\
.replace("$COLOR", color)
for k,v in COLORS.items():
message = message.replace("$" + k, COLOR_SEQ % (v+30))\
.replace("$BG" + k, COLOR_SEQ % (v+40))\
.replace("$BG-" + k, COLOR_SEQ % (v+40))
return message + RESET_SEQ
logging.ColorFormatter = ColorFormatter
```
So now you can simple do the following in your config file:
```
[formatter_colorFormatter]
class=logging.ColorFormatter
format= $COLOR%(levelname)s $RESET %(asctime)s $BOLD$COLOR%(name)s$RESET %(message)s
``` |
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 4,691,726 | 10 | 2011-01-14T13:52:14Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | Look at the following solution. The stream handler should be the thing doing the colouring, then you have the option of colouring words rather than just the whole line (with the Formatter).
<http://plumberjack.blogspot.com/2010/12/colorizing-logging-output-in-terminals.html> |
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 7,995,762 | 39 | 2011-11-03T13:31:53Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | A quick and dirty solution for predefined log levels, without defining a new class:
```
logging.addLevelName( logging.WARNING, "\033[1;31m%s\033[1;0m" % logging.getLevelName(logging.WARNING))
logging.addLevelName( logging.ERROR, "\033[1;41m%s\033[1;0m" % logging.getLevelName(logging.ERROR))
``` |
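For context, here is a sketch of how that two-liner slots into a small script. The escape codes are the same ones used above (`\033[1;31m` = bold red, `\033[1;41m` = red background, `\033[1;0m` = reset) and assume an ANSI-capable terminal:

```python
import logging

# Wrap only the level names in ANSI escapes; the message text is untouched.
logging.addLevelName(logging.WARNING,
                     "\033[1;31m%s\033[1;0m" % logging.getLevelName(logging.WARNING))
logging.addLevelName(logging.ERROR,
                     "\033[1;41m%s\033[1;0m" % logging.getLevelName(logging.ERROR))

logging.basicConfig(format="%(levelname)s: %(message)s")
logging.warning("this level name now prints in red")
```

Because `addLevelName` rewrites the global level-to-name mapping, every handler and formatter in the process picks up the colored names automatically.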
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 16,847,935 | 29 | 2013-05-31T00:16:27Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | Years ago I wrote a colored stream handler for my own use. Then I came across this page and found a collection of code snippets that people are copy/pasting :-(. My stream handler currently only works on UNIX (Linux, Mac OS X) but the advantage is that it's [available on PyPI](https://pypi.python.org/pypi/coloredlogs) (and [GitHub](https://github.com/xolox/python-coloredlogs)) and it's dead simple to use. It also has a Vim syntax mode :-). In the future I might extend it to work on Windows.
**To install the package:**
```
$ pip install coloredlogs
```
**To confirm that it works:**
```
$ python -m coloredlogs.demo
```
**To get started with your own code:**
```
$ python
>>> import coloredlogs, logging
>>> coloredlogs.install()
>>> logging.info("It works!")
2014-07-30 21:21:26 peter-macbook root[7471] INFO It works!
```
The default log format shown in the above example contains the date, time, hostname, the name of the logger, the PID, the log level and the log message. This is what it looks like in practice:
 |
How can I color Python logging output? | 384,076 | 174 | 2008-12-21T03:57:45Z | 23,964,880 | 16 | 2014-05-30T23:39:51Z | [
"python",
"logging",
"colors"
] | Some time ago, I saw a Mono application with colored output, presumably because of its log system (because all the messages were standardized).
Now, Python has the `logging` module, which lets you specify a lot of options to customize output. So, I'm imagining something similar would be possible with Python, but I can't find out how to do this anywhere.
Is there any way to make the Python `logging` module output in color?
What I want (for instance) errors in red, debug messages in blue or yellow, and so on.
Of course this would probably require a compatible terminal (most modern terminals are); but I could fallback to the original `logging` output if color isn't supported.
Any ideas how I can get colored output with the logging module? | **Update**: Because this is an itch that I've been meaning to scratch for so long, I went ahead and wrote a library for lazy people like me who just want simple ways to do things: [zenlog](https://github.com/ManufacturaInd/zenlog)
Colorlog is excellent for this. It's [available on PyPI](https://pypi.python.org/pypi/colorlog/2.2.0) (and thus installable through `pip install colorlog`) and is [actively maintained](https://github.com/borntyping/python-colorlog).
Here's a quick copy-and-pasteable snippet to set up logging and print decent-looking log messages:
```
import logging
LOG_LEVEL = logging.DEBUG
LOGFORMAT = " %(log_color)s%(levelname)-8s%(reset)s | %(log_color)s%(message)s%(reset)s"
from colorlog import ColoredFormatter
logging.root.setLevel(LOG_LEVEL)
formatter = ColoredFormatter(LOGFORMAT)
stream = logging.StreamHandler()
stream.setLevel(LOG_LEVEL)
stream.setFormatter(formatter)
log = logging.getLogger('pythonConfig')
log.setLevel(LOG_LEVEL)
log.addHandler(stream)
log.debug("A quirky message only developers care about")
log.info("Curious users might want to know this")
log.warn("Something is wrong and any user should be informed")
log.error("Serious stuff, this is red for a reason")
log.critical("OH NO everything is on fire")
```
Output:
 |
Database does not update automatically with MySQL and Python | 384,228 | 24 | 2008-12-21T07:57:04Z | 384,240 | 42 | 2008-12-21T08:08:32Z | [
"python",
"mysql",
"mysql-python"
] | I'm having some trouble updating a row in a MySQL database. Here is the code I'm trying to run:
```
import MySQLdb
conn=MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor=conn.cursor()
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
cursor.execute("SELECT Co_num FROM compinfo WHERE ID=100")
results = cursor.fetchall()
for row in results:
print row[0]
print "Number of rows updated: %d" % cursor.rowcount
cursor.close()
conn.close()
```
The output I get when I run this program is:
> 4
> Number of rows updated: 1
It seems like it's working but if I query the database from the MySQL command line interface (CLI) I find that it was not updated at all. However, if from the CLI I enter `UPDATE compinfo SET Co_num=4 WHERE ID=100;` the database is updated as expected.
What is my problem? I'm running Python 2.5.2 with MySQL 5.1.30 on a Windows box. | I am not certain, but I am going to guess you are using an InnoDB table and haven't done a commit. I believe MySQLdb enables transactions automatically.
Call `conn.commit()` before calling `close`.
From the FAQ: [Starting with 1.2.0, MySQLdb disables autocommit by default](http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away) |
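The same commit-before-close discipline applies to any DB-API driver. The sketch below uses the standard library's `sqlite3` instead of MySQLdb so it runs without a server; the table and values mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compinfo (ID INTEGER, Co_num INTEGER)")
conn.execute("INSERT INTO compinfo VALUES (100, 1)")

cur = conn.cursor()
cur.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
conn.commit()   # without this, the UPDATE is lost when the connection closes

cur.execute("SELECT Co_num FROM compinfo WHERE ID=100")
row = cur.fetchone()
print(row[0])   # 4
cur.close()
```

Queries issued on the same connection see the uncommitted change (as in the question's output); it is only other connections, such as the MySQL CLI, that cannot until `commit()` runs.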
Database does not update automatically with MySQL and Python | 384,228 | 24 | 2008-12-21T07:57:04Z | 384,311 | 7 | 2008-12-21T10:11:28Z | [
"python",
"mysql",
"mysql-python"
] | I'm having some trouble updating a row in a MySQL database. Here is the code I'm trying to run:
```
import MySQLdb
conn=MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor=conn.cursor()
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
cursor.execute("SELECT Co_num FROM compinfo WHERE ID=100")
results = cursor.fetchall()
for row in results:
print row[0]
print "Number of rows updated: %d" % cursor.rowcount
cursor.close()
conn.close()
```
The output I get when I run this program is:
> 4
> Number of rows updated: 1
It seems like it's working but if I query the database from the MySQL command line interface (CLI) I find that it was not updated at all. However, if from the CLI I enter `UPDATE compinfo SET Co_num=4 WHERE ID=100;` the database is updated as expected.
What is my problem? I'm running Python 2.5.2 with MySQL 5.1.30 on a Windows box. | You need to commit changes manually or turn auto-commit on.
The reason SELECT returns the modified (but not persisted) data is because the connection is still in the same transaction. |
Database does not update automatically with MySQL and Python | 384,228 | 24 | 2008-12-21T07:57:04Z | 384,452 | 22 | 2008-12-21T13:19:16Z | [
"python",
"mysql",
"mysql-python"
] | I'm having some trouble updating a row in a MySQL database. Here is the code I'm trying to run:
```
import MySQLdb
conn=MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor=conn.cursor()
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
cursor.execute("SELECT Co_num FROM compinfo WHERE ID=100")
results = cursor.fetchall()
for row in results:
print row[0]
print "Number of rows updated: %d" % cursor.rowcount
cursor.close()
conn.close()
```
The output I get when I run this program is:
> 4
> Number of rows updated: 1
It seems like it's working but if I query the database from the MySQL command line interface (CLI) I find that it was not updated at all. However, if from the CLI I enter `UPDATE compinfo SET Co_num=4 WHERE ID=100;` the database is updated as expected.
What is my problem? I'm running Python 2.5.2 with MySQL 5.1.30 on a Windows box. | MySQLdb has autocommit off by default, which may be confusing at first. Your connection exists in its own transaction and you will not be able to see the changes you make from other connections until you commit that transaction.
You can either do `conn.commit()` after the update statement as others have pointed out, or disable this functionality altogether by setting `conn.autocommit(True)` right after you create the connection object. |
MySQL-db lib for Python 3.x? | 384,471 | 29 | 2008-12-21T13:37:38Z | 3,611,517 | 7 | 2010-08-31T17:19:17Z | [
"python",
"mysql",
"python-3.x"
] | So, looking for a mysql-db-lib that is compatible with py3k/py3.0/py3000, any ideas? Google turned up nothing. | I was looking for it too, but also found nothing, so I ported MySQL-python-1.2.3 to py3k
you can read it here
<http://sourceforge.net/p/mysql-python/discussion/70460/thread/61e3a3c9/> |
MySQL-db lib for Python 3.x? | 384,471 | 29 | 2008-12-21T13:37:38Z | 5,288,582 | 29 | 2011-03-13T09:42:27Z | [
"python",
"mysql",
"python-3.x"
] | So, looking for a mysql-db-lib that is compatible with py3k/py3.0/py3000, any ideas? Google turned up nothing. | It appears the MySQLdb is pretty much a dead project. However, [PyMySQL](https://github.com/PyMySQL/PyMySQL/) is a dbapi compliant, pure-python implementation of a mysql client, and it has python 3 support.
EDIT: There's also [MySQL Connector/Python](https://launchpad.net/myconnpy). Same idea. |
PIL and numpy | 384,759 | 67 | 2008-12-21T18:21:32Z | 384,926 | 81 | 2008-12-21T20:46:21Z | [
"python",
"image",
"numpy",
"python-imaging-library"
] | Alright, I'm toying around with converting a PIL image object back and forth to a numpy array so I can do some faster pixel by pixel transformations than PIL's PixelAccess object would allow. I've figured out how to place the pixel information in a useful 3D numpy array by way of:
```
pic = Image.open("foo.jpg")
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)
```
But I can't seem to figure out how to load it back into the PIL object after I've done all my awesome transforms. I'm aware of the **putdata()** method, but can't quite seem to get it to behave.
Any thoughts? | You're not saying how exactly `putdata()` is not behaving. I'm assuming you're doing
```
>>> pic.putdata(a)
Traceback (most recent call last):
File "...blablabla.../PIL/Image.py", line 1185, in putdata
self.im.putdata(data, scale, offset)
SystemError: new style getargs format but argument is not a tuple
```
This is because `putdata` expects a sequence of tuples and you're giving it a numpy array. This
```
>>> data = list(tuple(pixel) for pixel in pix)
>>> pic.putdata(data)
```
will work but it is very slow.
As of PIL 1.1.6, the ["proper" way to convert between images and numpy arrays](http://effbot.org/zone/pil-changes-116.htm) is simply
```
>>> pix = numpy.array(pic)
```
although the resulting array is in a different format than yours (3-d array or rows/columns/rgb in this case).
Then, after you make your changes to the array, you should be able to do either `pic.putdata(pix)` or create a new image with `Image.fromarray(pix)`. |
PIL and numpy | 384,759 | 67 | 2008-12-21T18:21:32Z | 1,095,878 | 66 | 2009-07-08T02:33:17Z | [
"python",
"image",
"numpy",
"python-imaging-library"
] | Alright, I'm toying around with converting a PIL image object back and forth to a numpy array so I can do some faster pixel by pixel transformations than PIL's PixelAccess object would allow. I've figured out how to place the pixel information in a useful 3D numpy array by way of:
```
pic = Image.open("foo.jpg")
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)
```
But I can't seem to figure out how to load it back into the PIL object after I've done all my awesome transforms. I'm aware of the **putdata()** method, but can't quite seem to get it to behave.
Any thoughts? | Open I as an array:
```
>>> I = numpy.asarray(PIL.Image.open('test.jpg'))
```
Do some stuff to I, then, convert it back to an image:
```
>>> im = PIL.Image.fromarray(numpy.uint8(I))
```
[Filter numpy images with FFT, Python](http://barnesc.blogspot.com/2007/09/filter-numpy-images-with-fft-python.html)
If you want to do it explicitly for some reason, there are pil2array() and array2pil() functions using getdata() on [this page](http://bradmontgomery.blogspot.com/2007/12/computing-correlation-coefficients-in.html) in correlation.zip. |
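A minimal end-to-end sketch of that round trip, using a synthetic image so it needs no file on disk (requires numpy and Pillow, the maintained fork of PIL):

```python
import numpy
from PIL import Image  # Pillow, the maintained fork of PIL

pic = Image.new("RGB", (4, 3), color=(255, 0, 0))  # 4 wide, 3 high

# The array comes back as (height, width, 3) -- note this is the reverse
# of PIL's (width, height) size tuple.
I = numpy.asarray(pic)
print(I.shape)              # (3, 4, 3)

I = I.copy()                # asarray can return a read-only view
I[..., 0] //= 2             # halve the red channel

im = Image.fromarray(numpy.uint8(I))
print(im.getpixel((0, 0)))  # (127, 0, 0)
```

The `copy()` step matters: arrays produced by `asarray` from a PIL image may be marked read-only, so in-place edits fail without it.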
Checking 2-dimensional array (like eight queens puzzle) | 384,874 | 5 | 2008-12-21T20:00:55Z | 384,887 | 19 | 2008-12-21T20:10:32Z | [
"python",
"arrays",
"puzzle"
] | My problem is very similar to eight queens puzzle.
I've got 2-dimensional array (N x N) that for example, looks like this:
```
0,0,0,0,1 y
0,0,0,0,0 |
0,0,0,0,0 V
0,0,0,1,0
0,0,0,0,0
x->
```
I'm checking horizontally, vertically and diagonally for occurrences of 1
```
\,0,|,0,/
0,\,|,/,0
-,-,1,-,-
0,/,|,\,0
/,0,|,0,\
```
I'm thinking about storing only the (x,y) positions of "1"'s in a list
```
[[4,0],[3,3]]
```
and solving it mathematically: check every position of "1" against every other, (x1,y1)<->(x2,y2).
If `x1 == x2` or `y1 == y2` we have a collision! If not, check:
```
x2 == x1 + z;
y2 == y1 + z;
x2 == x1 - z;
y2 == y1 - z;
```
(???)
where z is +/- that `( x1+z in 0..N ) and ( y1+z in 0..N ) .......`
***My problem is checking for diagonal collision, is there a better way to do it???*** | One possible solution:
```
def collision(x1, y1, x2, y2):
return x1 == x2 or y1 == y2 or abs(x1-x2) == abs(y1-y2)
```
i.e. there is a collision if the two points are on the same horizontal row, same vertical row or same diagonal (vertical distance == horizontal distance). |
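For concreteness, a short runnable sketch applying it to the positions from the question (`[[4,0],[3,3]]`):

```python
from itertools import combinations

def collision(x1, y1, x2, y2):
    # same column, same row, or same diagonal (|dx| == |dy|)
    return x1 == x2 or y1 == y2 or abs(x1 - x2) == abs(y1 - y2)

ones = [(4, 0), (3, 3)]            # positions of the 1s in the example grid
pairs = combinations(ones, 2)      # every unordered pair of positions
print(any(collision(x1, y1, x2, y2) for (x1, y1), (x2, y2) in pairs))  # False

print(collision(0, 0, 4, 4))       # True: (0,0) and (4,4) share a diagonal
```

Checking all pairs this way is O(n^2) in the number of 1s, which is fine for queen-sized boards.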
How do I send a custom header with urllib2 in a HTTP Request? | 385,262 | 42 | 2008-12-22T00:39:53Z | 385,411 | 91 | 2008-12-22T02:29:01Z | [
"python",
"header",
"urllib2"
] | I want to send a custom "Accept" header in my request when using urllib2.urlopen(..). How do I do that? | Not quite. Creating a `Request` object does not actually send the request, and Request objects have no `Read()` method. (Also: `read()` is lowercase.) All you need to do is pass the `Request` as the first argument to `urlopen()` and that will give you your response.
```
import urllib2
request = urllib2.Request("http://www.google.com", headers={"Accept" : "text/html"})
contents = urllib2.urlopen(request).read()
``` |
How do I send a custom header with urllib2 in a HTTP Request? | 385,262 | 42 | 2008-12-22T00:39:53Z | 32,026,467 | 8 | 2015-08-15T15:51:41Z | [
"python",
"header",
"urllib2"
] | I want to send a custom "Accept" header in my request when using urllib2.urlopen(..). How do I do that? | ```
import urllib2
request_headers = {
"Accept-Language": "en-US,en;q=0.5",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Referer": "http://thewebsite.com",
"Connection": "keep-alive"
}
request = urllib2.Request("http://thewebsite.com", headers=request_headers)
contents = urllib2.urlopen(request).read()
print contents
``` |
dropping trailing '.0' from floats | 385,325 | 22 | 2008-12-22T01:33:33Z | 385,392 | 14 | 2008-12-22T02:18:07Z | [
"python"
] | I'm looking for a way to convert numbers to string format, dropping any redundant '.0'
The input data is a mix of floats and strings. Desired output:
0 --> '0'
0.0 --> '0'
0.1 --> '0.1'
1.0 --> '1'
I've come up with the following generator expression, but I wonder if there's a faster way:
```
(str(i).rstrip('.0') if i else '0' for i in lst)
```
The truth check is there to prevent 0 from becoming an empty string.
EDIT: The more or less acceptable solution I have for now is this:
```
('%d'%i if i == int(i) else '%s'%i for i in lst)
```
It just seems strange that there is no elegant way to handle this (fairly straightforward) case in python. | `rstrip` doesn't do what you want it to do, it strips *any* of the characters you give it and not a suffix:
```
>>> '30000.0'.rstrip('.0')
'3'
```
Actually, just `'%g' % i` will do what you want.
EDIT: as Robert pointed out in his comment this won't work for large numbers since it uses the default precision of %g which is 6 significant digits.
Since `str(i)` uses 12 significant digits, I think this will work:
```
>>> numbers = [ 0.0, 1.0, 0.1, 123456.7 ]
>>> ['%.12g' % n for n in numbers]
['0', '1', '0.1', '123456.7']
``` |
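Wrapped as a tiny helper, a sketch of the same idea:

```python
def drop_point_zero(x):
    # %.12g trims a redundant trailing ".0" while keeping 12 significant digits
    return '%.12g' % x

print([drop_point_zero(n) for n in [0, 0.0, 0.1, 1.0, 123456.7]])
# ['0', '0', '0.1', '1', '123456.7']
```

It accepts ints and floats alike, which suits the question's mixed input.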
dropping trailing '.0' from floats | 385,325 | 22 | 2008-12-22T01:33:33Z | 12,080,042 | 28 | 2012-08-22T19:23:16Z | [
"python"
] | I'm looking for a way to convert numbers to string format, dropping any redundant '.0'
The input data is a mix of floats and strings. Desired output:
0 --> '0'
0.0 --> '0'
0.1 --> '0.1'
1.0 --> '1'
I've come up with the following generator expression, but I wonder if there's a faster way:
```
(str(i).rstrip('.0') if i else '0' for i in lst)
```
The truth check is there to prevent 0 from becoming an empty string.
EDIT: The more or less acceptable solution I have for now is this:
```
('%d'%i if i == int(i) else '%s'%i for i in lst)
```
It just seems strange that there is no elegant way to handle this (fairly straightforward) case in python. | See [PEP 3101](http://www.python.org/dev/peps/pep-3101/):
```
'g' - General format. This prints the number as a fixed-point
number, unless the number is too large, in which case
it switches to 'e' exponent notation.
```
Old style:
```
>>> "%g" % float(10)
'10'
```
New style (recommended):
```
>>> '{0:g}'.format(float(21))
'21'
``` |
Prototype based object orientation. The good, the bad and the ugly? | 385,403 | 10 | 2008-12-22T02:25:02Z | 385,467 | 7 | 2008-12-22T03:15:34Z | [
"javascript",
"python",
"language-agnostic",
"lua",
"oop"
] | I come from classes object orientation languages and recently I have been learning those fancy dynamic languages (JavaScript, Python and Lua) and I want some tips about how to use OO in those languages. It would be useful to know the pitfalls and the shortcomings of such approach and the advantages compared to traditional OO.
The general notion that I got is that prototype based OO is basically programming with objects but no standard on how to use them whereas in normal OO there is a fixed predefined way to make and use objects.
In summary, what are the good, the bad and the ugly parts of such an approach? | To conserve bandwidth, here is the link to [my answer on "How can I emulate 'classes' in JavaScript? (with or without a third-party library)"](http://stackoverflow.com/questions/355848/what-is-the-best-way-to-emulate-classes-in-javascript-with-or-without-a-framewo#356343). It contains further references as well as examples.
The short answer: the heart of JavaScript's prototypal [OO](http://en.wikipedia.org/wiki/Object-orientation) is delegation. In this style of OOP different objects of the same "class" can delegate the handling of methods and properties to the same prototype (usually some third object):
```
var foo = {
property: 42,
inc: function(){
++this.counter;
},
dec: function(){
--this.counter;
}
};
// Note: foo does not define `counter`.
```
Let's create a constructor for objects with foo as a prototype. Effectively, everything unhandled will be delegated to foo.
```
var Bar = function(){
this.counter = 0;
};
Bar.prototype = foo; // This is how we set up the delegation.
// Some people refer to Bar (a constructor function) as "class".
var bar = new Bar();
console.log(bar.counter); // 0 --- Comes from bar itself
console.log(bar.property); // 42 --- Not defined in bar, comes from foo
bar.inc(); // Not defined in bar => delegated to foo
bar.inc();
bar.dec(); // Not defined in bar => delegated to foo
// Note: foo.inc() and foo.dec() are called but this === bar
// that is why bar is modified, not foo.
console.log(bar.counter); // 1 --- Comes from bar itself
```
Let's define `inc()` directly on bar:
```
bar.inc = function(){
this.counter = 42;
};
bar.inc(); // Defined in bar => calling it directly.
// foo.inc() is not even called.
console.log(bar.counter); // 42 --- Comes from bar
```
Setting up the single inheritance chain:
```
var Baz = function(){
this.counter = 99;
};
Baz.prototype = new Bar();
var baz = new Baz();
console.log(baz.counter); // 99
baz.inc();
console.log(baz.counter); // 100
console.log(baz instanceof Baz); // true
console.log(baz instanceof Bar); // true
console.log(baz instanceof Object); // true
```
Neat, eh? |
Prototype based object orientation. The good, the bad and the ugly? | 385,403 | 10 | 2008-12-22T02:25:02Z | 385,571 | 13 | 2008-12-22T04:34:49Z | [
"javascript",
"python",
"language-agnostic",
"lua",
"oop"
] | I come from classes object orientation languages and recently I have been learning those fancy dynamic languages (JavaScript, Python and Lua) and I want some tips about how to use OO in those languages. It would be useful to know the pitfalls and the shortcomings of such approach and the advantages compared to traditional OO.
The general notion that I got is that prototype based OO is basically programming with objects but no standard on how to use them whereas in normal OO there is a fixed predefined way to make and use objects.
In summary, what are the good, the bad and the ugly parts of such an approach? | Prototype-based OO lends itself poorly to static type checking, which some might consider a bad or ugly thing. Prototype-based OO *does* have a standard way of creating new objects: you **clone and modify existing objects**. You can also build factories, etc.
I think what people like most (the "good") is that prototype-based OO is very **lightweight and flexible**, offering a **very high power-to-weight ratio**.
For **tips on how to use prototype-based OO**, a great place to start is the original Self paper on [The Power of Simplicity](http://research.sun.com/self/papers/self-power.html). |
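To get a feel for the clone-and-modify style in Python terms, here is a loose sketch (`SimpleNamespace` standing in for a prototype object; this is an analogy, not how Python's own class-based OO works):

```python
import copy
from types import SimpleNamespace

# A "prototype" is just an existing object; new objects are clones of it.
account_proto = SimpleNamespace(balance=0, currency="USD")

checking = copy.copy(account_proto)   # clone ...
checking.balance = 100                # ... then modify

savings = copy.copy(account_proto)
savings.currency = "EUR"

print(checking.balance, checking.currency)  # 100 USD
print(savings.balance, savings.currency)    # 0 EUR
```

Note there is no class and no constructor here, only objects derived from other objects, which is the essence of the prototype style.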
Extract float/double value | 385,558 | 17 | 2008-12-22T04:28:01Z | 385,597 | 38 | 2008-12-22T04:56:22Z | [
"python",
"regex"
] | How do I extract a double value from a string using regex.
```
import re
pattr = re.compile(???)
x = pattr.match("4.5")
``` | A regexp from the [`perldoc perlretut`](http://perldoc.perl.org/perlretut.html#Building-a-regexp):
```
import re
re_float = re.compile("""(?x)
^
[+-]?\ * # first, match an optional sign *and space*
( # then match integers or f.p. mantissas:
\d+ # start out with a ...
(
\.\d* # mantissa of the form a.b or a.
)? # ? takes care of integers of the form a
|\.\d+ # mantissa of the form .b
)
([eE][+-]?\d+)? # finally, optionally match an exponent
$""")
m = re_float.match("4.5")
print m.group(0)
# -> 4.5
```
To extract numbers from a bigger string:
```
s = """4.5 abc -4.5 abc - 4.5 abc + .1e10 abc . abc 1.01e-2 abc
1.01e-.2 abc 123 abc .123"""
print re.findall(r"[+-]? *(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?", s)
# -> ['4.5', '-4.5', '- 4.5', '+ .1e10', ' 1.01e-2',
# ' 1.01', '-.2', ' 123', ' .123']
``` |
Extract float/double value | 385,558 | 17 | 2008-12-22T04:28:01Z | 386,177 | 16 | 2008-12-22T12:41:31Z | [
"python",
"regex"
] | How do I extract a double value from a string using regex.
```
import re
pattr = re.compile(???)
x = pattr.match("4.5")
``` | Here's the easy way. Don't use regex's for built-in types.
```
try:
x = float( someString )
except ValueError, e:
# someString was NOT floating-point, what now?
``` |
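Combining the two built-ins gives the usual "int if possible, else float" parser, sketched here in modern syntax:

```python
def parse_number(s):
    """Return an int for integer strings, a float otherwise."""
    try:
        return int(s)
    except ValueError:
        return float(s)   # still raises ValueError if s is not numeric at all

print(parse_number("31"))        # 31
print(parse_number("545.2222"))  # 545.2222
```

The order matters: `int("4.5")` raises, so float parsing is only tried as the fallback.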
Typecasting in Python | 385,572 | 12 | 2008-12-22T04:35:18Z | 385,583 | 26 | 2008-12-22T04:44:48Z | [
"python",
"string",
"int",
"bit",
"casting"
] | I need to convert strings in Python to other types such as unsigned and signed 8, 16, 32, and 64 bit ints, doubles, floats, and strings.
How can I do this? | You can convert a string to a 32-bit signed integer with the `int` function:
```
str = "1234"
i = int(str)  # i is a 32-bit integer
```
If the string does not represent an integer, you'll get a `ValueError` exception. Note, however, that if the string does represent an integer, but that integer does not fit into a 32-bit signed int, then you'll actually get an object of type `long` instead.
You can then convert it to other widths and signednesses with some simple math:
```
s8 = (i + 2**7) % 2**8 - 2**7      # convert to signed 8-bit
u8 = i % 2**8                      # convert to unsigned 8-bit
s16 = (i + 2**15) % 2**16 - 2**15  # convert to signed 16-bit
u16 = i % 2**16                    # convert to unsigned 16-bit
s32 = (i + 2**31) % 2**32 - 2**31  # convert to signed 32-bit
u32 = i % 2**32                    # convert to unsigned 32-bit
s64 = (i + 2**63) % 2**64 - 2**63  # convert to signed 64-bit
u64 = i % 2**64                    # convert to unsigned 64-bit
```
You can convert strings to floating point with the `float` function:
```
f = float("3.14159")
```
Python floats are what other languages refer to as `double`, i.e. they are 64-bits. There are no 32-bit floats in Python. |
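One way to sanity-check the modular arithmetic above is to compare it against the `struct` module's byte-level reinterpretation; a sketch for the 8-bit case:

```python
import struct

def to_int8(i):
    return (i + 2**7) % 2**8 - 2**7   # signed 8-bit, as in the formulas above

def to_uint8(i):
    return i % 2**8                   # unsigned 8-bit

# struct packs the low byte unsigned ('B'), then unpacks it as signed ('b')
for i in (0, 1, 127, 128, 200, 255, -1, 1000):
    via_struct = struct.unpack('<b', struct.pack('<B', i % 2**8))[0]
    assert to_int8(i) == via_struct

print(to_int8(200), to_uint8(-1))     # -56 255
```

The same cross-check works for the 16/32/64-bit formulas with format codes `h`/`H`, `i`/`I`, and `q`/`Q`.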
Python Performance - have you ever had to rewrite in something else? | 386,655 | 42 | 2008-12-22T16:23:13Z | 386,674 | 33 | 2008-12-22T16:28:08Z | [
"python",
"performance",
"optimization",
"rewrite"
] | Has anyone ever had code in Python, that turned out not to perform fast enough?
I mean, you were forced to *choose another language* because of it?
We are investigating using Python for a couple of larger projects, and my feeling is that in most cases, Python is plenty fast enough for most scenarios (compared to say, Java) because it relies on optimized C routines.
I wanted to see if people had instances where they started out in Python, **but** ended up having to go with something else because of performance.
Thanks. | Yes, I have. I wrote a row-count program for a binary (length-prefixed rather than delimited) bcp output file once and ended up having to redo it in C because the python one was too slow. This program was quite small (it only took a couple of days to re-write it in C), so I didn't bother to try and build a hybrid application (python glue with central routines written in C) but this would also have been a viable route.
A larger application with performance critical bits can be written in a combination of C and a higher level language. You can write the performance-critical parts in C with an interface to Python for the rest of the system. [SWIG](http://www.swig.org/), [Pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/) or [Boost.Python](http://www.boost.org/doc/libs/1_37_0/libs/python/doc/index.html) (if you're using C++) all provide good mechanisms to do the plumbing for your Python interface. The [C API for python](http://docs.python.org/c-api/) is more complex than that for [Tcl](http://www.tcl.tk) or [Lua](http://www.lua.org/pil/24.html), but isn't infeasible to build by hand. For an example of a hand-built Python/C API, check out [cx\_Oracle](http://cx-oracle.sourceforge.net/).
This approach has been used on quite a number of successful applications going back as far as the 1970s (that I am aware of). [Mozilla](http://en.wikipedia.org/wiki/SeaMonkey) was substantially written in Javascript around a core engine written in C. [Several](http://en.wikipedia.org/wiki/IntelliCAD_Technology_Consortium) [CAD packages](http://www.fourmilab.ch/autofile/www/chapter2_35.html), [Interleaf](http://en.wikipedia.org/wiki/Interleaf) (a technical document publishing system) and of course [EMACS](http://www.gnu.org/software/emacs/) are substantially written in LISP with a central C, assembly language or other core. Quite a few commercial and open-source applications (e.g. [Chandler](http://en.wikipedia.org/wiki/Chandler_(PIM)) or [Sungard Front Arena](http://www.sungard.com/FrontArena)) use embedded Python interpreters and implement substantial parts of the application in Python.
**EDIT:** In rsponse to Dutch Masters' comment, keeping someone with C or C++ programming skills on the team for a Python project gives you the option of writing some of the application for speed. The areas where you can expect to get a significant performance gain are where the application does something highly iterative over a large data structure or large volume of data. In the case of the row-counter above it had to inhale a series of files totalling several gigabytes and go through a process where it read a varying length prefix and used that to determine the length of the data field. Most of the fields were short (just a few bytes long). This was somewhat bit-twiddly and very low level and iterative, which made it a natural fit for C.
Many of the python libraries such as [numpy](http://numpy.scipy.org/), [cElementTree](http://effbot.org/zone/celementtree.htm) or [cStringIO](http://effbot.org/librarybook/cstringio.htm) make use of an optimised C core with a python API that facilitates working with data in aggregate. For example, numpy has matrix data structures and operations written in C which do all the hard work and a Python API that provides services at the aggregate level. |
Python Performance - have you ever had to rewrite in something else? | 386,655 | 42 | 2008-12-22T16:23:13Z | 386,702 | 18 | 2008-12-22T16:40:55Z | [
"python",
"performance",
"optimization",
"rewrite"
] | Has anyone ever had code in Python, that turned out not to perform fast enough?
I mean, you were forced to *choose another language* because of it?
We are investigating using Python for a couple of larger projects, and my feeling is that in most cases, Python is plenty fast enough for most scenarios (compared to say, Java) because it relies on optimized C routines.
I wanted to see if people had instances where they started out in Python, **but** ended up having to go with something else because of performance.
Thanks. | This is a much more difficult question to answer than people are willing to admit.
For example, it may be that I am able to write a program that performs better in Python than it does in C. The fallacious conclusion from that statement is "Python is therefore faster than C". In reality, it may be because I have much more recent experience in Python and its best practices and standard libraries.
In fact no one can really answer your question unless they are certain that they can create an optimal solution in both languages, which is unlikely. In other words "My C solution was faster than my Python solution" is not the same as "C is faster than Python"
I'm willing to bet that Guido Van Rossum could have written Python solutions for adam and Dustin's problems that performed quite well.
My rule of thumb is that unless you are writing the sort of application that requires you to count clock cycles, you can probably achieve acceptable performance in Python. |
Python Performance - have you ever had to rewrite in something else? | 386,655 | 42 | 2008-12-22T16:23:13Z | 386,770 | 7 | 2008-12-22T17:12:50Z | [
"python",
"performance",
"optimization",
"rewrite"
] | Has anyone ever had code in Python, that turned out not to perform fast enough?
I mean, you were forced to *choose another language* because of it?
We are investigating using Python for a couple of larger projects, and my feeling is that in most cases, Python is plenty fast enough for most scenarios (compared to say, Java) because it relies on optimized C routines.
I wanted to see if people had instances where they started out in Python, **but** ended up having to go with something else because of performance.
Thanks. | While at uni we were writing a computer vision system for analysing human behaviour based on video clips. We used python because of the excellent PIL, to speed up development and let us get easy access to the image frames we'd extracted from the video for converting to arrays etc.
For 90% of what we wanted it was fine and since the images were reasonably low resolution the speed wasn't bad. However, a few of the processes required some complex pixel-by-pixel computations as well as convolutions which are notoriously slow. For these particular areas we re-wrote the innermost parts of the loops in C and just updated the old Python functions to call the C functions.
This gave us the best of both worlds. We had the ease of data access that Python provides, which enabled us to develop quickly, and then the straight-line speed of C for the most intensive computations. |
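The hybrid pattern described above, a Python loop delegating per-element work to C, can be sketched with the standard `ctypes` module; here the system's C math library stands in for a hand-written convolution routine (the library lookup and function choice are illustrative, not the poster's code):

```python
import ctypes
import ctypes.util

# Load the C math library as a stand-in for a custom C extension.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

def pixel_magnitudes(pixels):
    """Python-level loop that delegates the per-pixel arithmetic to C."""
    return [libm.sqrt(float(p)) for p in pixels]

print(pixel_magnitudes([0, 1, 4, 9]))  # [0.0, 1.0, 2.0, 3.0]
```

In the real project the C side would be a purpose-built shared library; the Python call site looks the same either way.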
Python Performance - have you ever had to rewrite in something else? | 386,655 | 42 | 2008-12-22T16:23:13Z | 386,999 | 7 | 2008-12-22T18:41:19Z | [
"python",
"performance",
"optimization",
"rewrite"
] | Has anyone ever had code in Python, that turned out not to perform fast enough?
I mean, you were forced to *choose another language* because of it?
We are investigating using Python for a couple of larger projects, and my feeling is that in most cases, Python is plenty fast enough for most scenarios (compared to say, Java) because it relies on optimized C routines.
I wanted to see if people had instances where they started out in Python, **but** ended up having to go with something else because of performance.
Thanks. | Not so far. I work for a company that has a molecular simulation engine and a bunch of programs written in python for processing the large multi-gigabyte datasets. All of our analysis software is now being written in Python because of the huge advantages in development flexibility and time.
If something is not fast enough we profile it with cProfile and find the bottlenecks. Usually there are one or two functions which take up 80 or 90% of the runtime. We then take those functions and rewrite them in C, something which Python makes dead easy with its C API. In many cases this results in an order of magnitude or more speedup. Problem gone. We then go on our merry way continuing to write everything else in Python. Rinse and repeat...
For entire modules or classes we tend to use Boost.Python; it can be a bit of a bear but ultimately works well. If it's just a function or two, we sometimes inline it with scipy.weave if the project is already using scipy. |
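The profile-first workflow described above can be sketched with the standard `cProfile` and `pstats` modules (the workload function is a made-up stand-in):

```python
import cProfile
import io
import pstats

def hot_spot(n):
    # Stand-in for the function that eats 80-90% of the runtime.
    return sum(i * i for i in range(n))

def analysis():
    return [hot_spot(2000) for _ in range(100)]

profiler = cProfile.Profile()
profiler.enable()
analysis()
profiler.disable()

# Sort by cumulative time to surface the one or two dominant functions.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print("hot_spot" in report)  # True
```

Once the report names a hot spot, that function is the candidate for a C rewrite; everything else stays in Python.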
Python Performance - have you ever had to rewrite in something else? | 386,655 | 42 | 2008-12-22T16:23:13Z | 478,872 | 14 | 2009-01-26T04:54:13Z | [
"python",
"performance",
"optimization",
"rewrite"
] | Has anyone ever had code in Python, that turned out not to perform fast enough?
I mean, you were forced to *choose another language* because of it?
We are investigating using Python for a couple of larger projects, and my feeling is that in most cases, Python is plenty fast enough for most scenarios (compared to say, Java) because it relies on optimized C routines.
I wanted to see if people had instances where they started out in Python, **but** ended up having to go with something else because of performance.
Thanks. | Adding my $0.02 for the record.
My work involves developing numeric models that run over 100's of gigabytes of data. The hard problems are in coming up with a revenue-generating solution quickly (i.e. time-to-market). To be commercially successful the solution also has to *execute* quickly (compute the solution in minimal amounts of time).
For us Python has proven to be an excellent choice to develop solutions for the reasons commonly cited: fast development time, language expressiveness, rich libraries, etc. But to meet the execution speed needs we've adopted the 'Hybrid' approach that several responses have already mentioned.
1. Using numpy for computationally intense parts. We get within 1.1x to 2.5x the speed of a 'native' C++ solution with numpy with less code, fewer bugs, and shorter development times.
2. Pickling (Python's object serialization) intermediate results to minimize re-computation. The nature of our system requires multiple steps over the same data, so we 'memoize' the results and re-use them where possible.
3. Profiling and choosing better algorithms. It's been said in other responses, but I'll repeat it: we whip out cProfile and try to replace hot-spots with a better algorithm. Not applicable in all cases.
4. Going to C++. If the above fails then we call a C++ library. We use [PyBindGen](http://code.google.com/p/pybindgen/) to write our Python/C++ wrappers. We found it far superior to SWIG, SIP, and Boost.Python as it produces direct Python C API code without an intermediate layer.
Reading this list you might think "What a lot of re-work! I'll just do it in [C/C++/Java/assembler] the first time around and be done with it."
Let me put it into perspective. Using Python we were able to produce a working revenue-generating application in 5 weeks that, in other languages, had previously required 3 months for projects of similar scope. This includes the time needed to optimize the Python parts we found to be slow. |
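Point 2 above, pickling intermediate results to avoid re-computation, might look something like this small on-disk memoizing decorator (the helper names and hashing scheme are illustrative, not the poster's actual code):

```python
import hashlib
import os
import pickle
import tempfile

def memoize_to_disk(func):
    """Cache a function's return value on disk, keyed by its arguments."""
    def wrapper(*args):
        digest = hashlib.sha1(repr((func.__name__, args)).encode()).hexdigest()
        path = os.path.join(tempfile.gettempdir(), "memo_" + digest + ".pkl")
        if os.path.exists(path):
            with open(path, "rb") as handle:
                return pickle.load(handle)  # reuse the earlier result
        result = func(*args)
        with open(path, "wb") as handle:
            pickle.dump(result, handle)
        return result
    return wrapper

@memoize_to_disk
def expensive_step(n):
    return sum(i * i for i in range(n))

first = expensive_step(1000)
second = expensive_step(1000)  # loaded from the pickle, not recomputed
```

With multi-gigabyte inputs the cache would live on purpose-chosen storage rather than the temp directory, but the shape of the idea is the same.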
How do I convert part of a python tuple (byte array) into an integer | 386,753 | 4 | 2008-12-22T17:06:00Z | 386,763 | 9 | 2008-12-22T17:10:02Z | [
"python",
"tuples"
] | I am trying to talk to a device using python. I have been handed a tuple of bytes which contains the storage information. How can I convert the data into the correct values:
response = (0, 0, 117, 143, 6)
The first 4 values are a 32-bit int telling me how many bytes have been used and the last value is the percentage used.
I can access the tuple as response[0] but cannot see how I can get the first 4 values into the int I require. | See [Convert Bytes to Floating Point Numbers in Python](http://stackoverflow.com/questions/5415/)
You probably want to use the struct module, e.g.
```
import struct
response = (0, 0, 117, 143, 6)
struct.unpack(">I", ''.join([chr(x) for x in response[:-1]]))
```
Assuming an unsigned int. There may be a better way to do the conversion to unpack, a list comprehension with join was just the first thing that I came up with.
**EDIT**: See also ΤΖΩΤΖΙΟΥ's comment on this answer regarding endianness as well.
**EDIT #2**: If you don't mind using the array module as well, here is an alternate method that obviates the need for a list comprehension. Thanks to @[JimB](http://stackoverflow.com/questions/386753/how-do-i-convert-part-of-a-python-tuple-byte-array-into-an-integer#386998) for pointing out that unpack can operate on arrays as well.
```
import struct
from array import array
response = (0, 0, 117, 143, 6)
bytes = array('B', response[:-1])
struct.unpack('>I', bytes)
``` |
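For readers on Python 3, which postdates this answer, the conversion is simpler because `bytes()` accepts an iterable of ints directly:

```python
import struct

response = (0, 0, 117, 143, 6)

# bytes() turns the int tuple into a byte string; '>I' reads it big-endian.
(used,) = struct.unpack(">I", bytes(response[:-1]))
percent = response[-1]
print("%d %d" % (used, percent))  # 30095 6
```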
How do I convert part of a python tuple (byte array) into an integer | 386,753 | 4 | 2008-12-22T17:06:00Z | 386,830 | 13 | 2008-12-22T17:36:02Z | [
"python",
"tuples"
] | I am trying to talk to a device using python. I have been handed a tuple of bytes which contains the storage information. How can I convert the data into the correct values:
response = (0, 0, 117, 143, 6)
The first 4 values are a 32-bit int telling me how many bytes have been used and the last value is the percentage used.
I can access the tuple as response[0] but cannot see how I can get the first 4 values into the int I require. | Would,
```
num = (response[0] << 24) + (response[1] << 16) + (response[2] << 8) + response[3]
```
meet your needs?
aid |
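The shift-and-add expression decodes the same big-endian 32-bit value as the `struct` approach in the other answer; a quick sanity check (`int.from_bytes` is Python 3 only and shown just for comparison):

```python
import struct

response = (0, 0, 117, 143, 6)

num = (response[0] << 24) + (response[1] << 16) + (response[2] << 8) + response[3]

# Both alternatives decode the same 32-bit big-endian integer.
via_struct = struct.unpack(">I", bytes(response[:-1]))[0]
via_int = int.from_bytes(bytes(response[:-1]), "big")
print(num)  # 30095
```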
Evaluate environment variables into a string | 386,934 | 24 | 2008-12-22T18:18:05Z | 386,978 | 41 | 2008-12-22T18:35:16Z | [
"python",
"filesystems",
"environment-variables"
] | I have a string representing a path. Because this application is used on Windows, OSX and Linux, we've defined environment variables to properly map volumes from the different file systems. The result is:
```
"$C/test/testing"
```
What I want to do is evaluate the environment variables in the string so that they're replaced by their respective volume names. Is there a specific command I'm missing, or do I have to take os.environ.keys() and manually replace the strings? | Use [os.path.expandvars](http://docs.python.org/library/os.path.html#os.path.expandvars) to expand the environment variables in the string, for example:
```
>>> os.path.expandvars('$C/test/testing')
'/stackoverflow/test/testing'
``` |
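A self-contained version of the same call, defining a hypothetical `$C` mapping first (the volume path is made up):

```python
import os

# Hypothetical volume mapping, as the question's cross-platform setup defines.
os.environ["C"] = "/Volumes/projects"

path = os.path.expandvars("$C/test/testing")
print(path)  # /Volumes/projects/test/testing
```

Variables that aren't set are left in place untouched, which makes a missing mapping easy to spot in the resulting path.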
What are the steps to make a ModelForm work with a ManyToMany relationship with an intermediary model in Django? | 387,686 | 30 | 2008-12-22T22:58:24Z | 932,234 | 18 | 2009-05-31T15:27:30Z | [
"python",
"django",
"django-models",
"django-templates",
"django-forms"
] | * I have a *Client* and *Groupe* Model.
* A *Client* can be part of multiple *groups*.
* *Clients* that are part of a group can use its group's free rental rate at anytime but only once. That is where the intermediary model (*ClientGroupe*) comes in with that extra data.
For now, when I try to save the m2m data, it just dies and says I should use the ClientGroupe Manager...so what's missing?
## Here are my models:
```
class Groupe(models.Model):
nom = models.CharField(max_length=1500, blank=True)
class Client(models.Model):
nom = models.CharField(max_length=450, blank=True)
prenom = models.CharField(max_length=450, blank=True)
groupes = models.ManyToManyField(Groupe, null = True, blank = True, through='ClientGroupe')
class ClientGroupe(models.Model):
client = models.ForeignKey(Client)
groupe = models.ForeignKey(Groupe)
dt = models.DateField(null=True, blank=True) # the date the client is using its group's free rental rate
class Meta:
db_table = u'clients_groupes'
```
## and here's my view:
```
def modifier(request, id):
client = Client.objects.get(id=id)
form = ClientForm(instance = client)
dict = {
"form": form
, "instance" : client
}
if request.method == "POST":
form = ClientForm(request.POST, instance = client)
if form.is_valid():
client_mod = form.save()
id = client_mod.id
return HttpResponseRedirect(
"/client/%(id)s/?err=success" % {"id" : id}
)
else:
return HttpResponseRedirect(
"/client/%(id)s/?err=warning" % {"id" : id}
)
return render_to_response(
"client/modifier.html"
, dict
, context_instance=RequestContext(request)
)
```
**EDIT**:
and here's the ClientForm code:
```
class ClientForm(ModelForm):
class Meta:
model = Client
```
**EDIT #2**:
here's the error message:
```
AttributeError at /client/445/
Cannot set values on a ManyToManyField which specifies an intermediary model. Use ClientGroupe's Manager instead.
Request Method: POST
Request URL: http://localhost/client/445/
Exception Type: AttributeError
Exception Value: Cannot set values on a ManyToManyField which specifies an intermediary model. Use ClientGroupe's Manager instead.
Exception Location: C:\Python25\lib\site-packages\django\db\models\fields\related.py in __set__, line 574
Python Executable: C:\xampp\apache\bin\apache.exe
Python Version: 2.5.2
``` | If you use the save method right now, Django will try to save using the manager (which Django doesn't allow). Unfortunately, the behavior you want is a little bit trickier than what `ModelForm` does by default. What you need to do is create a *formset*.
First of all, you will need to change the options of your `ClientForm` so that it doesn't display the `groupes` attribute.
```
class ClientForm(ModelForm):
class Meta:
model = Client
exclude = ('groupes',)
```
Next, you must change the view to display the formset:
```
from django.forms.models import inlineformset_factory
def modifier(request, id):
client = Client.objects.get(id=id)
form = ClientForm(instance = client)
# Create the formset class
    GroupeFormset = inlineformset_factory(Client, ClientGroupe)  # the through model holds the FK to Client
# Create the formset
formset = GroupeFormset(instance = client)
dict = {
"form": form
, "formset" : formset
, "instance" : client
}
if request.method == "POST":
form = ClientForm(request.POST, instance = client)
formset = GroupeFormset(request.POST, instance = client)
if form.is_valid() and formset.is_valid():
client_mod = form.save()
formset.save()
id = client_mod.id
return HttpResponseRedirect(
"/client/%(id)s/?err=success" % {"id" : id}
)
else:
return HttpResponseRedirect(
"/client/%(id)s/?err=warning" % {"id" : id}
)
return render_to_response(
"client/modifier.html"
, dict
, context_instance=RequestContext(request)
)
```
And obviously, you must also tweak your template to render the formset.
If you need any other advice on formsets, see these articles:
[Model formsets](http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#inline-formsets)
[Formsets](http://docs.djangoproject.com/en/dev/topics/forms/formsets/#topics-forms-formsets) |
What are the steps to make a ModelForm work with a ManyToMany relationship with an intermediary model in Django? | 387,686 | 30 | 2008-12-22T22:58:24Z | 3,882,808 | 10 | 2010-10-07T14:41:20Z | [
"python",
"django",
"django-models",
"django-templates",
"django-forms"
] | * I have a *Client* and *Groupe* Model.
* A *Client* can be part of multiple *groups*.
* *Clients* that are part of a group can use its group's free rental rate at anytime but only once. That is where the intermediary model (*ClientGroupe*) comes in with that extra data.
For now, when I try to save the m2m data, it just dies and says I should use the ClientGroupe Manager...so what's missing?
## Here are my models:
```
class Groupe(models.Model):
nom = models.CharField(max_length=1500, blank=True)
class Client(models.Model):
nom = models.CharField(max_length=450, blank=True)
prenom = models.CharField(max_length=450, blank=True)
groupes = models.ManyToManyField(Groupe, null = True, blank = True, through='ClientGroupe')
class ClientGroupe(models.Model):
client = models.ForeignKey(Client)
groupe = models.ForeignKey(Groupe)
dt = models.DateField(null=True, blank=True) # the date the client is using its group's free rental rate
class Meta:
db_table = u'clients_groupes'
```
## and here's my view:
```
def modifier(request, id):
client = Client.objects.get(id=id)
form = ClientForm(instance = client)
dict = {
"form": form
, "instance" : client
}
if request.method == "POST":
form = ClientForm(request.POST, instance = client)
if form.is_valid():
client_mod = form.save()
id = client_mod.id
return HttpResponseRedirect(
"/client/%(id)s/?err=success" % {"id" : id}
)
else:
return HttpResponseRedirect(
"/client/%(id)s/?err=warning" % {"id" : id}
)
return render_to_response(
"client/modifier.html"
, dict
, context_instance=RequestContext(request)
)
```
**EDIT**:
and here's the ClientForm code:
```
class ClientForm(ModelForm):
class Meta:
model = Client
```
**EDIT #2**:
here's the error message:
```
AttributeError at /client/445/
Cannot set values on a ManyToManyField which specifies an intermediary model. Use ClientGroupe's Manager instead.
Request Method: POST
Request URL: http://localhost/client/445/
Exception Type: AttributeError
Exception Value: Cannot set values on a ManyToManyField which specifies an intermediary model. Use ClientGroupe's Manager instead.
Exception Location: C:\Python25\lib\site-packages\django\db\models\fields\related.py in __set__, line 574
Python Executable: C:\xampp\apache\bin\apache.exe
Python Version: 2.5.2
``` | ```
…
if form.is_valid():
client_mod = form.save(commit=False)
client_mod.save()
for groupe in form.cleaned_data.get('groupes'):
clientgroupe = ClientGroupe(client=client_mod, groupe=groupe)
clientgroupe.save()
…
``` |
How to gracefully deal with failed future feature (__future__) imports due to old interpreter version? | 388,069 | 68 | 2008-12-23T03:10:51Z | 388,083 | 57 | 2008-12-23T03:20:03Z | [
"python",
"python-import"
] | How do you gracefully handle failed future feature imports? If a user is running using Python 2.5 and the first statement in my module is:
```
from __future__ import print_function
```
Compiling this module for Python 2.5 will fail with a:
```
File "__init__.py", line 1
from __future__ import print_function
SyntaxError: future feature print_function is not defined
```
I'd like to inform the user that they need to rerun the program with Python >= 2.6 and maybe provide some instructions on how to do so. However, to quote [PEP 236](http://www.python.org/dev/peps/pep-0236/):
> The only lines that can appear before
> a future\_statement are:
>
> * The module docstring (if any).
> * Comments.
> * Blank lines.
> * Other future\_statements.
So I can't do something like:
```
import __future__
if hasattr(__future__, 'print_function'):
from __future__ import print_function
else:
raise ImportError('Python >= 2.6 is required')
```
Because it yields:
```
File "__init__.py", line 4
from __future__ import print_function
SyntaxError: from __future__ imports must occur at the beginning of the file
```
This snippet from the PEP seems to give hope of doing it inline:
> Q: I want to wrap future\_statements
> in try/except blocks, so I can use
> different code depending on which
> version of Python I'm running. Why
> can't I?
>
> A: Sorry! try/except is a runtime
> feature; future\_statements are
> primarily compile-time gimmicks, and
> your try/except happens long after the
> compiler is done. That is, by the
> time you do try/except, the semantics
> in effect for the module are already a
> done deal. Since the try/except
> wouldn't accomplish what it *looks*
> like it should accomplish, it's simply
> not allowed. We also want to keep
> these special statements very easy to
> find and to recognize.
>
> Note that you *can* import \_\_future\_\_
> directly, and use the information in
> it, along with sys.version\_info, to
> figure out where the release you're
> running under stands in relation to a
> given feature's status.
Ideas? | "I'd like to inform the user that they need to rerun the program with Python >= 2.6 and maybe provide some instructions on how to do so."
Isn't that what a README file is for?
Here's your alternative. A "wrapper": a little blob of Python that checks the environment before running your target app.
File: appwrapper.py
```
import sys
major, minor, micro, releaselevel, serial = sys.version_info
if (major,minor) <= (2,5):
# provide advice on getting version 2.6 or higher.
sys.exit(2)
import app
app.main()
```
---
What "direct import" means. You can examine the contents of `__future__`. You're still bound by the fact the a `from __future__ import print_function` is information to the compiler, but you can poke around before importing the module that does the real work.
```
import __future__, sys
if hasattr(__future__, 'print_function'):
    # Could also check sys.version_info >= __future__.print_function.optional
import app
app.main()
else:
print "instructions for upgrading"
``` |
How to gracefully deal with failed future feature (__future__) imports due to old interpreter version? | 388,069 | 68 | 2008-12-23T03:10:51Z | 3,037,220 | 46 | 2010-06-14T12:36:33Z | [
"python",
"python-import"
] | How do you gracefully handle failed future feature imports? If a user is running using Python 2.5 and the first statement in my module is:
```
from __future__ import print_function
```
Compiling this module for Python 2.5 will fail with a:
```
File "__init__.py", line 1
from __future__ import print_function
SyntaxError: future feature print_function is not defined
```
I'd like to inform the user that they need to rerun the program with Python >= 2.6 and maybe provide some instructions on how to do so. However, to quote [PEP 236](http://www.python.org/dev/peps/pep-0236/):
> The only lines that can appear before
> a future\_statement are:
>
> * The module docstring (if any).
> * Comments.
> * Blank lines.
> * Other future\_statements.
So I can't do something like:
```
import __future__
if hasattr(__future__, 'print_function'):
from __future__ import print_function
else:
raise ImportError('Python >= 2.6 is required')
```
Because it yields:
```
File "__init__.py", line 4
from __future__ import print_function
SyntaxError: from __future__ imports must occur at the beginning of the file
```
This snippet from the PEP seems to give hope of doing it inline:
> Q: I want to wrap future\_statements
> in try/except blocks, so I can use
> different code depending on which
> version of Python I'm running. Why
> can't I?
>
> A: Sorry! try/except is a runtime
> feature; future\_statements are
> primarily compile-time gimmicks, and
> your try/except happens long after the
> compiler is done. That is, by the
> time you do try/except, the semantics
> in effect for the module are already a
> done deal. Since the try/except
> wouldn't accomplish what it *looks*
> like it should accomplish, it's simply
> not allowed. We also want to keep
> these special statements very easy to
> find and to recognize.
>
> Note that you *can* import \_\_future\_\_
> directly, and use the information in
> it, along with sys.version\_info, to
> figure out where the release you're
> running under stands in relation to a
> given feature's status.
Ideas? | A rather hacky but simple method I've used before is to exploit the fact that byte literals were introduced in Python 2.6 and use something like this near the start of the file:
```
b'This module needs Python 2.6 or later. Please do xxx.'
```
This is harmless in Python 2.6 or later, but a `SyntaxError` in any earlier versions. Anyone trying to compile your file will still get an error, but they also get whatever message you want to give.
You might think that, since this line has to come after your `from __future__ import print_function`, it would be the import that generates the `SyntaxError` and you would never see the useful error message, but strangely enough the later error takes precedence. I suspect that because the error from the import isn't really a syntax error in itself, it isn't raised on the first compilation pass, so real syntax errors get raised first (but I'm guessing).
This might not meet your criteria for being 'graceful', and it is very Python 2.6 specific, but it is quick and easy to do. |
How to gracefully deal with failed future feature (__future__) imports due to old interpreter version? | 388,069 | 68 | 2008-12-23T03:10:51Z | 7,252,783 | 38 | 2011-08-31T05:10:18Z | [
"python",
"python-import"
] | How do you gracefully handle failed future feature imports? If a user is running using Python 2.5 and the first statement in my module is:
```
from __future__ import print_function
```
Compiling this module for Python 2.5 will fail with a:
```
File "__init__.py", line 1
from __future__ import print_function
SyntaxError: future feature print_function is not defined
```
I'd like to inform the user that they need to rerun the program with Python >= 2.6 and maybe provide some instructions on how to do so. However, to quote [PEP 236](http://www.python.org/dev/peps/pep-0236/):
> The only lines that can appear before
> a future\_statement are:
>
> * The module docstring (if any).
> * Comments.
> * Blank lines.
> * Other future\_statements.
So I can't do something like:
```
import __future__
if hasattr(__future__, 'print_function'):
from __future__ import print_function
else:
raise ImportError('Python >= 2.6 is required')
```
Because it yields:
```
File "__init__.py", line 4
from __future__ import print_function
SyntaxError: from __future__ imports must occur at the beginning of the file
```
This snippet from the PEP seems to give hope of doing it inline:
> Q: I want to wrap future\_statements
> in try/except blocks, so I can use
> different code depending on which
> version of Python I'm running. Why
> can't I?
>
> A: Sorry! try/except is a runtime
> feature; future\_statements are
> primarily compile-time gimmicks, and
> your try/except happens long after the
> compiler is done. That is, by the
> time you do try/except, the semantics
> in effect for the module are already a
> done deal. Since the try/except
> wouldn't accomplish what it *looks*
> like it should accomplish, it's simply
> not allowed. We also want to keep
> these special statements very easy to
> find and to recognize.
>
> Note that you *can* import \_\_future\_\_
> directly, and use the information in
> it, along with sys.version\_info, to
> figure out where the release you're
> running under stands in relation to a
> given feature's status.
Ideas? | Just put a comment on the same line with the `"from __future__ import ..."`, like this:
```
from __future__ import print_function, division # We require Python 2.6 or later
```
Since Python displays the line containing the error, if you try to run the module with Python 2.5 you'll get a nice, descriptive error:
```
from __future__ import print_function, division # We require Python 2.6 or later
SyntaxError: future feature print_function is not defined
``` |
How do I use the built in password reset/change views with my own templates | 388,800 | 67 | 2008-12-23T12:47:32Z | 388,858 | 74 | 2008-12-23T13:21:11Z | [
"python",
"django",
"passwords"
] | For example I can point the `url '^/accounts/password/reset/$'` to `django.contrib.auth.views.password_reset` with my template filename in the context but I think need to send more context details.
I need to know exactly what context to add for each of the password reset and change views. | If you take a look at the sources for [django.contrib.auth.views.password\_reset](http://code.djangoproject.com/browser/django/trunk/django/contrib/auth/views.py) you'll see that it uses [`RequestContext`](http://code.djangoproject.com/browser/django/trunk/django/template/__init__.py). The upshot is, you can use Context Processors to modify the context which may allow you to inject the information that you need.
The b-list has a good [introduction to context processors](http://www.b-list.org/weblog/2006/jun/14/django-tips-template-context-processors/).
Edit (I seem to have been confused about what the actual question was):
You'll notice that `password_reset` takes a named parameter called `template_name`:
```
def password_reset(request, is_admin_site=False,
template_name='registration/password_reset_form.html',
email_template_name='registration/password_reset_email.html',
password_reset_form=PasswordResetForm,
token_generator=default_token_generator,
post_reset_redirect=None):
```
Check [password\_reset](https://docs.djangoproject.com/en/1.9/topics/auth/default/#django.contrib.auth.views.password_reset) for more information.
... thus, with a urls.py like:
```
from django.conf.urls.defaults import *
from django.contrib.auth.views import password_reset
urlpatterns = patterns('',
(r'^/accounts/password/reset/$', password_reset, {'template_name': 'my_templates/password_reset.html'}),
...
)
```
`django.contrib.auth.views.password_reset` will be called for URLs matching `'/accounts/password/reset'` with the keyword argument `template_name = 'my_templates/password_reset.html'`.
Otherwise, you don't need to provide any context as the `password_reset` view takes care of itself. If you want to see what context you have available, you can trigger a `TemplateSyntax` error and look through the stack trace find the frame with a local variable named `context`. If you want to modify the context then what I said above about context processors is probably the way to go.
In summary: what do you need to do to use your own template? Provide a `template_name` keyword argument to the view when it is called. You can supply keyword arguments to views by including a dictionary as the third member of a URL pattern tuple. |
How do I use the built in password reset/change views with my own templates | 388,800 | 67 | 2008-12-23T12:47:32Z | 389,679 | 9 | 2008-12-23T18:27:06Z | [
"python",
"django",
"passwords"
] | For example I can point the `url '^/accounts/password/reset/$'` to `django.contrib.auth.views.password_reset` with my template filename in the context but I think need to send more context details.
I need to know exactly what context to add for each of the password reset and change views. | You just need to wrap the existing functions and pass in the template you want. For example:
```
from django.contrib.auth.views import password_reset
def my_password_reset(request, template_name='path/to/my/template'):
    return password_reset(request, template_name=template_name)
```
To see this, just have a look at the function declaration of the built-in views:
<http://code.djangoproject.com/browser/django/trunk/django/contrib/auth/views.py#L74> |
How do I use the built in password reset/change views with my own templates | 388,800 | 67 | 2008-12-23T12:47:32Z | 14,868,595 | 22 | 2013-02-14T05:52:07Z | [
"python",
"django",
"passwords"
] | For example I can point the `url '^/accounts/password/reset/$'` to `django.contrib.auth.views.password_reset` with my template filename in the context but I think need to send more context details.
I need to know exactly what context to add for each of the password reset and change views. | Strongly recommend this article.
I just plugged it in and it worked
<http://garmoncheg.blogspot.com.au/2012/07/django-resetting-passwords-with.html> |
Producing documentation for Python classes | 389,688 | 11 | 2008-12-23T18:30:53Z | 389,704 | 12 | 2008-12-23T18:37:06Z | [
"python",
"data-structures",
"documentation"
] | I'm about to start a project where I will be the only one doing actual code and two less experienced programmers (scary to think of myself as experienced!) will be watching and making suggestions on the program in general.
Is there a good (free) system that I can use to provide documentation for classes and functions based on the code I've written? It'd likely help them a lot in getting to grips with the structure of the data. | I have used [epydoc](http://epydoc.sourceforge.net/) to generate documentation for Python modules from embedded docstrings. It's pretty easy to use and generates nice looking output in multiple formats. |
Producing documentation for Python classes | 389,688 | 11 | 2008-12-23T18:30:53Z | 389,706 | 11 | 2008-12-23T18:37:36Z | [
"python",
"data-structures",
"documentation"
] | I'm about to start a project where I will be the only one doing actual code and two less experienced programmers (scary to think of myself as experienced!) will be watching and making suggestions on the program in general.
Is there a good (free) system that I can use to provide documentation for classes and functions based on the code I've written? It'd likely help them a lot in getting to grips with the structure of the data. | python.org is now using [sphinx](http://sphinx.pocoo.org/) for it's documentation.
I personally like the output of sphinx over epydoc. I also feel the restructured text is easier to read in the docstrings than the epydoc markup. |
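Both tools build their pages from ordinary docstrings; a sketch in the reStructuredText field style that Sphinx renders (the function itself is illustrative only):

```python
def mean(values):
    """Return the arithmetic mean of *values*.

    :param values: a non-empty sequence of numbers
    :returns: the mean, as a float
    :raises ValueError: if *values* is empty
    """
    if not values:
        raise ValueError("values must be non-empty")
    return sum(values) / float(len(values))

print(mean([2, 4, 9]))  # 5.0
```

epydoc's own markup (`@param`, `@return`) differs, but it also accepts plain docstrings like this.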
How can I read Perl data structures from Python? | 389,945 | 9 | 2008-12-23T20:11:08Z | 389,970 | 17 | 2008-12-23T20:19:38Z | [
"python",
"perl",
"configuration",
"data-structures"
] | I've often seen people use Perl data structures in lieu of configuration files; i.e. a lone file containing only:
```
%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);
```
What's the best way to convert the contents of these files into Python-equivalent data structures, using pure Python? For the time being we can assume that there are no real expressions to evaluate, only structured data. | Is using pure Python a requirement? If not, you can load it in Perl and convert it to YAML or JSON. Then use PyYAML or something similar to load them in Python. |
How can I read Perl data structures from Python? | 389,945 | 9 | 2008-12-23T20:11:08Z | 390,062 | 8 | 2008-12-23T20:56:23Z | [
"python",
"perl",
"configuration",
"data-structures"
] | I've often seen people use Perl data structures in lieu of configuration files; i.e. a lone file containing only:
```
%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);
```
What's the best way to convert the contents of these files into Python-equivalent data structures, using pure Python? For the time being we can assume that there are no real expressions to evaluate, only structured data. | Not sure what the use case is. Here's my assumption: you're going to do a one-time conversion from Perl to Python.
Perl has this
```
%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);
```
In Python, it would be
```
config = {
'color' : 'red',
'numbers' : [5, 8],
re.compile( "^spam" ) : 'eggs'
}
```
So, I'm guessing it's a bunch of RE's to replace
* `%variable = (` with `variable = {`
* `);` with `}`
* `variable => value` with `variable : value`
* `qr/.../ =>` with `re.compile( r"..." ) : value`
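The substitutions above can be sketched in a few lines of Python; this is a rough text rewriter for the simple case, not a real Perl parser:

```python
import re

def perl_config_to_python(text):
    # Apply the four substitutions listed above, in order.
    text = re.sub(r'%(\w+)\s*=\s*\(', r'\1 = {', text)       # %variable = (  ->  variable = {
    text = re.sub(r'\);', '}', text)                          # );  ->  }
    text = text.replace('=>', ':')                            # key => value  ->  key : value
    text = re.sub(r'qr/(.*?)/', r're.compile(r"\1")', text)   # qr/.../  ->  re.compile(r"...")
    return text

perl_src = """%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);"""
converted = perl_config_to_python(perl_src)
```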
However, Python's built-in `dict` doesn't do anything unusual with a regex as a hash key. For that, you'd have to write your own subclass of `dict`, and override `__getitem__` to check REGEX keys separately.
```
import re

class PerlLikeDict( dict ):
    pattern_type = type(re.compile(""))

    def __getitem__( self, key ):
        if key in self:
            return super( PerlLikeDict, self ).__getitem__( key )
        for k in self:
            if type(k) == self.pattern_type:
                if k.match(key):
                    return self[k]
        raise KeyError( "key %r not found" % ( key, ) )
```
Here's the example of using a Perl-like dict.
```
>>> pat= re.compile( "hi" )
>>> a = { pat : 'eggs' } # native dict, no features.
>>> x=PerlLikeDict( a )
>>> x['b']= 'c'
>>> x
{<_sre.SRE_Pattern object at 0x75250>: 'eggs', 'b': 'c'}
>>> x['b']
'c'
>>> x['ji']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 10, in __getitem__
KeyError: "key 'ji' not found"
>>> x['hi']
'eggs'
``` |
How can I read Perl data structures from Python? | 389,945 | 9 | 2008-12-23T20:11:08Z | 390,589 | 11 | 2008-12-24T01:41:06Z | [
"python",
"perl",
"configuration",
"data-structures"
] | I've often seen people use Perl data structures in lieu of configuration files; i.e. a lone file containing only:
```
%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);
```
What's the best way to convert the contents of these files into Python-equivalent data structures, using pure Python? For the time being we can assume that there are no real expressions to evaluate, only structured data. | I'd just turn the Perl data structure into something else. Not seeing the actual file, there might be some extra work that my solution doesn't do.
If the only thing that's in the file is the one variable declaration (so, no `1;` at the end, and so on), it can be really simple to turn your `%config` into YAML:
```
perl -MYAML -le 'print YAML::Dump( { do shift } )' filename
```
The `do` returns the last thing it evaluated, so in this little code it returns the list of hash key-value pairs. Things such as YAML::Dump like to work with references, since that gives them a hint about the top-level structure, so I turn that list into a hash reference by surrounding the `do` with curly braces. For your example, I'd get this YAML output:
```
---
(?-xism:^spam): eggs
color: red
numbers:
- 5
- 8
```
I don't know how Python will like that stringified regex, though. Do you really have a key that is a regex? I'd be curious to know how that's being used as part of the configuration.
---
If there's extra stuff in the file, life is a bit more tough. There's probably a really clever way to get around that, but I used the same idea, but just hard-coded the variable name that I wanted.
I tried this on the Perl data structure that the CPAN.pm module uses, and it looks like it came out fine. The only ugliness is the fore-knowledge of the variable name that it supplies. Now that you've seen the error of configuration in Perl code, avoid making the same mistake with Python code. :)
YAML:
```
perl -MYAML -le 'do shift; print YAML::Dump( $CPAN::Config )' MyConfig.pm
```
JSON:
```
perl -MJSON::Any -le 'do shift; my $j = JSON::Any->new; print $j->objToJson( $CPAN::Config )' MyConfig.pm
```
or
```
# suggested by JF Sebastian
perl -MJSON -le 'do shift; print to_json( $CPAN::Config )' MyConfig.pm
```
XML::Simple doesn't work out so well because it treated everything like an attribute, but maybe someone can improve on this:
```
perl -MXML::Simple -le 'do shift; print XMLout( $CPAN::Config )' MyConfig.pm
``` |
Elegant ways to support equivalence ("equality") in Python classes | 390,250 | 222 | 2008-12-23T22:30:52Z | 390,280 | 8 | 2008-12-23T22:44:26Z | [
"python",
"equality",
"equivalence"
] | When writing custom classes it is often important to allow equivalence by means of the `==` and `!=` operators. In Python, this is made possible by implementing the `__eq__` and `__ne__` special methods, respectively. The easiest way I've found to do this is the following method:
```
class Foo:
    def __init__(self, item):
        self.item = item

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)
```
Do you know of more elegant means of doing this? Do you know of any particular disadvantages to using the above method of comparing `__dict__`s?
**Note**: A bit of clarification--when `__eq__` and `__ne__` are undefined, you'll find this behavior:
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
False
```
That is, `a == b` evaluates to `False` because it really runs `a is b`, a test of identity (i.e., "Is `a` the same object as `b`?").
When `__eq__` and `__ne__` are defined, you'll find this behavior (which is the one we're after):
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
True
You don't have to override both `__eq__` and `__ne__`; you can override only `__cmp__`, but this will affect the results of `==`, `!=`, `<`, `>`, and so on.
`is` tests for object identity. This means `a is b` will be `True` when `a` and `b` both hold a reference to the same object. In Python a variable always holds a reference to an object, not the object itself, so essentially for `a is b` to be true both variables must refer to the same object in memory. How, and most importantly why, would you go about overriding this behaviour?
Edit: I didn't know `__cmp__` was removed from Python 3, so avoid it.
Elegant ways to support equivalence ("equality") in Python classes | 390,250 | 222 | 2008-12-23T22:30:52Z | 390,511 | 143 | 2008-12-24T00:44:13Z | [
"python",
"equality",
"equivalence"
] | When writing custom classes it is often important to allow equivalence by means of the `==` and `!=` operators. In Python, this is made possible by implementing the `__eq__` and `__ne__` special methods, respectively. The easiest way I've found to do this is the following method:
```
class Foo:
    def __init__(self, item):
        self.item = item

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)
```
Do you know of more elegant means of doing this? Do you know of any particular disadvantages to using the above method of comparing `__dict__`s?
**Note**: A bit of clarification--when `__eq__` and `__ne__` are undefined, you'll find this behavior:
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
False
```
That is, `a == b` evaluates to `False` because it really runs `a is b`, a test of identity (i.e., "Is `a` the same object as `b`?").
When `__eq__` and `__ne__` are defined, you'll find this behavior (which is the one we're after):
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
True
``` | The way you describe is the way I've always done it. Since it's totally generic, you can always break that functionality out into a mixin class and inherit it in classes where you want that functionality.
```
class CommonEqualityMixin(object):
    def __eq__(self, other):
        return (isinstance(other, self.__class__)
                and self.__dict__ == other.__dict__)

    def __ne__(self, other):
        return not self.__eq__(other)

class Foo(CommonEqualityMixin):
    def __init__(self, item):
        self.item = item
``` |
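A quick self-contained check of the mixin (the classes are re-declared here so the snippet runs on its own):

```python
class CommonEqualityMixin(object):
    def __eq__(self, other):
        return (isinstance(other, self.__class__)
                and self.__dict__ == other.__dict__)

    def __ne__(self, other):
        return not self.__eq__(other)

class Foo(CommonEqualityMixin):
    def __init__(self, item):
        self.item = item

same = Foo(1) == Foo(1)          # equal attributes -> True
different = Foo(1) == Foo(2)     # different attributes -> False
other_type = Foo(1) == object()  # different type -> False
```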
Elegant ways to support equivalence ("equality") in Python classes | 390,250 | 222 | 2008-12-23T22:30:52Z | 390,640 | 166 | 2008-12-24T02:30:13Z | [
"python",
"equality",
"equivalence"
] | When writing custom classes it is often important to allow equivalence by means of the `==` and `!=` operators. In Python, this is made possible by implementing the `__eq__` and `__ne__` special methods, respectively. The easiest way I've found to do this is the following method:
```
class Foo:
    def __init__(self, item):
        self.item = item

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)
```
Do you know of more elegant means of doing this? Do you know of any particular disadvantages to using the above method of comparing `__dict__`s?
**Note**: A bit of clarification--when `__eq__` and `__ne__` are undefined, you'll find this behavior:
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
False
```
That is, `a == b` evaluates to `False` because it really runs `a is b`, a test of identity (i.e., "Is `a` the same object as `b`?").
When `__eq__` and `__ne__` are defined, you'll find this behavior (which is the one we're after):
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
True
``` | You need to be careful with inheritance:
```
>>> class Foo:
...     def __eq__(self, other):
...         if isinstance(other, self.__class__):
...             return self.__dict__ == other.__dict__
...         else:
...             return False
>>> class Bar(Foo):pass
>>> b = Bar()
>>> f = Foo()
>>> f == b
True
>>> b == f
False
```
Check types more strictly, like this:
```
def __eq__(self, other):
    if type(other) is type(self):
        return self.__dict__ == other.__dict__
    return False
```
Besides that, your approach will work fine, that's what special methods are there for. |
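Putting the stricter check side by side with the subclass from the example shows the asymmetry is gone (a small sketch, re-declaring the classes so it runs standalone):

```python
class Foo(object):
    def __eq__(self, other):
        if type(other) is type(self):
            return self.__dict__ == other.__dict__
        return False

class Bar(Foo):
    pass

f, b = Foo(), Bar()
forward = (f == b)    # False: Bar is not exactly Foo
backward = (b == f)   # False as well -- both directions now agree
```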
Elegant ways to support equivalence ("equality") in Python classes | 390,250 | 222 | 2008-12-23T22:30:52Z | 12,494,556 | 11 | 2012-09-19T12:19:09Z | [
"python",
"equality",
"equivalence"
] | When writing custom classes it is often important to allow equivalence by means of the `==` and `!=` operators. In Python, this is made possible by implementing the `__eq__` and `__ne__` special methods, respectively. The easiest way I've found to do this is the following method:
```
class Foo:
    def __init__(self, item):
        self.item = item

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)
```
Do you know of more elegant means of doing this? Do you know of any particular disadvantages to using the above method of comparing `__dict__`s?
**Note**: A bit of clarification--when `__eq__` and `__ne__` are undefined, you'll find this behavior:
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
False
```
That is, `a == b` evaluates to `False` because it really runs `a is b`, a test of identity (i.e., "Is `a` the same object as `b`?").
When `__eq__` and `__ne__` are defined, you'll find this behavior (which is the one we're after):
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
True
``` | Not a direct answer but seemed relevant enough to be tacked on as it saves a bit of verbose tedium on occasion. Cut straight from the docs...
---
[functools.total\_ordering(cls)](http://docs.python.org/library/functools.html#functools.total_ordering)
**Given a class defining one or more rich comparison ordering methods, this class decorator supplies the rest.** This simplifies the effort involved in specifying all of the possible rich comparison operations:
The class must define one of `__lt__()`, `__le__()`, `__gt__()`, or `__ge__()`. In addition, the class should supply an `__eq__()` method.
New in version 2.7
```
@total_ordering
class Student:
    def __eq__(self, other):
        return ((self.lastname.lower(), self.firstname.lower()) ==
                (other.lastname.lower(), other.firstname.lower()))

    def __lt__(self, other):
        return ((self.lastname.lower(), self.firstname.lower()) <
                (other.lastname.lower(), other.firstname.lower()))
``` |
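A small usage sketch of the decorator (the `__init__` and the sample data are my additions; the comparison methods are the ones from the docs excerpt above):

```python
from functools import total_ordering

@total_ordering
class Student:
    def __init__(self, firstname, lastname):
        self.firstname = firstname
        self.lastname = lastname

    def __eq__(self, other):
        return ((self.lastname.lower(), self.firstname.lower()) ==
                (other.lastname.lower(), other.firstname.lower()))

    def __lt__(self, other):
        return ((self.lastname.lower(), self.firstname.lower()) <
                (other.lastname.lower(), other.firstname.lower()))

ada = Student("Ada", "Lovelace")
alan = Student("Alan", "Turing")
# Only __eq__ and __lt__ were written; <=, >, >= are supplied by the decorator.
ordered = ada < alan and ada <= alan and alan > ada and alan >= ada
```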
Elegant ways to support equivalence ("equality") in Python classes | 390,250 | 222 | 2008-12-23T22:30:52Z | 25,176,504 | 75 | 2014-08-07T07:24:42Z | [
"python",
"equality",
"equivalence"
] | When writing custom classes it is often important to allow equivalence by means of the `==` and `!=` operators. In Python, this is made possible by implementing the `__eq__` and `__ne__` special methods, respectively. The easiest way I've found to do this is the following method:
```
class Foo:
    def __init__(self, item):
        self.item = item

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)
```
Do you know of more elegant means of doing this? Do you know of any particular disadvantages to using the above method of comparing `__dict__`s?
**Note**: A bit of clarification--when `__eq__` and `__ne__` are undefined, you'll find this behavior:
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
False
```
That is, `a == b` evaluates to `False` because it really runs `a is b`, a test of identity (i.e., "Is `a` the same object as `b`?").
When `__eq__` and `__ne__` are defined, you'll find this behavior (which is the one we're after):
```
>>> a = Foo(1)
>>> b = Foo(1)
>>> a is b
False
>>> a == b
True
Sorry, but none of the answers fully "work".
Consider the simple problem:
```
class Number:
    """Very basic"""
    def __init__(self, some_number):
        self.some_number = some_number
n1 = Number(1)
n2 = Number(1)
print n1 == n2 # False -- oops
```
So, Python by default uses the id of objects for comparison.
```
print id(n1) # 140400634555856
print id(n2) # 140400634555920
```
Overriding the `__eq__` function seems to solve the problem:
```
def __eq__(self, other):
    """Override the default Equals behavior"""
    if isinstance(other, self.__class__):
        return self.__dict__ == other.__dict__
    return False
print n1 == n2 # True
print n1 != n2 # True -- oops
```
Always remember to add the `__ne__` function override:
```
def __ne__(self, other):
    """Define a non-equality test"""
    return not self.__eq__(other)
print n1 == n2 # True
print n1 != n2 # False
```
But that doesn't solve all our problems.
Let's add a subclass:
```
class NumberPlus(Number):
    pass
n3 = NumberPlus(1)
print n1 == n3 # True
print n3 == n1 # False -- oops
```
Note: new-style classes behave [a bit differently](http://stackoverflow.com/a/12984987/78234), but I will provide a generic solution.
To fix we need to return the singleton `NotImplemented` when the object types do not match, delegating the result to `superclass.__eq__(subclass)`.
The result looks like this:
```
def __eq__(self, other):
    """Override the default Equals behavior"""
    if isinstance(other, self.__class__):
        return self.__dict__ == other.__dict__
    return NotImplemented

def __ne__(self, other):
    """Define a non-equality test"""
    if isinstance(other, self.__class__):
        return not self.__eq__(other)
    return NotImplemented
```
Are we there yet? Not quite.
How many unique numbers do we have?
```
print len(set([n1, n2, n3])) # 3 -- oops
```
Hmmm. Sets use the hashes of objects, and by default Python returns the id() of the object as a hash. Let's try to override:
```
def __hash__(self):
    """Override the default hash behavior (that returns the id of the object)"""
    return hash(tuple(sorted(self.__dict__.items())))
print len(set([n1, n2, n3])) # 1
```
The end result looks like this (I added some assertions at the end for validation):
```
class Number(object):
    """Very basic"""
    def __init__(self, some_number):
        self.some_number = some_number

    def __eq__(self, other):
        """Override the default Equals behavior"""
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        return NotImplemented

    def __ne__(self, other):
        """Define a non-equality test"""
        if isinstance(other, self.__class__):
            return not self.__eq__(other)
        return NotImplemented

    def __hash__(self):
        """Override the default hash behavior (that returns the id of the object)"""
        return hash(tuple(sorted(self.__dict__.items())))
n1 = Number(1)
n2 = Number(1)
class NumberPlus(Number):
    pass
n3 = NumberPlus(1)
n4 = NumberPlus(4)
assert n1 == n2
assert n2 == n1
assert not n1 != n2
assert not n2 != n1
assert n1 == n3
assert n3 == n1
assert not n1 != n3
assert not n3 != n1
assert not n1 == n4
assert not n4 == n1
assert n1 != n4
assert n4 != n1
assert len(set([n1, n2, n3, ])) == 1
assert len(set([n1, n2, n3, n4])) == 2
``` |
How do you debug Mako templates? | 390,409 | 31 | 2008-12-23T23:43:30Z | 536,087 | 38 | 2009-02-11T09:31:33Z | [
"python",
"debugging",
"templates",
"jinja2",
"mako"
] | So far I've found it impossible to produce usable tracebacks when Mako templates aren't coded correctly.
Is there any way to debug templates besides iterating for every line of code? | Mako actually provides a [VERY nice way to track down errors in a template](http://docs.makotemplates.org/en/latest/usage.html#handling-exceptions):
```
from mako import exceptions
try:
    template = lookup.get_template(uri)
    print template.render()
except:
    print exceptions.html_error_template().render()
``` |
Is there any built-in way to get the length of an iterable in python? | 390,852 | 23 | 2008-12-24T05:55:37Z | 390,861 | 15 | 2008-12-24T06:03:06Z | [
"python",
"iterator"
] | For example, files, in Python, are iterable - they iterate over the lines in the file. I want to count the number of lines.
One quick way is to do this:
```
lines = len(list(open(fname)))
```
However, this loads the whole file into memory (at once). This rather defeats the purpose of an iterator (which only needs to keep the current line in memory).
This doesn't work:
```
lines = len(line for line in open(fname))
```
as generators don't have a length.
Is there any way to do this short of defining a count function?
```
def count(i):
    c = 0
    for el in i: c += 1
    return c
```
EDIT: To clarify, I understand that the whole file will have to be read! I just don't want it in memory all at once =). | If you need a count of lines you can do this; I don't know of any better way to do it:
```
line_count = sum(1 for line in open("yourfile.txt"))
``` |
Is there any built-in way to get the length of an iterable in python? | 390,852 | 23 | 2008-12-24T05:55:37Z | 390,885 | 43 | 2008-12-24T06:23:35Z | [
"python",
"iterator"
] | For example, files, in Python, are iterable - they iterate over the lines in the file. I want to count the number of lines.
One quick way is to do this:
```
lines = len(list(open(fname)))
```
However, this loads the whole file into memory (at once). This rather defeats the purpose of an iterator (which only needs to keep the current line in memory).
This doesn't work:
```
lines = len(line for line in open(fname))
```
as generators don't have a length.
Is there any way to do this short of defining a count function?
```
def count(i):
    c = 0
    for el in i: c += 1
    return c
```
EDIT: To clarify, I understand that the whole file will have to be read! I just don't want it in memory all at once =). | Short of iterating through the iterable and counting the number of iterations, no. That's what makes it an iterable and not a list. This isn't really even a Python-specific problem. Look at the classic linked-list data structure. Finding the length is an O(n) operation that involves iterating the whole list to find the number of elements.
As mcrute mentioned above, you can probably reduce your function to:
```
def count_iterable(i):
    return sum(1 for e in i)
```
Of course, if you're defining your own iterable object you can always implement `__len__` yourself and keep an element count somewhere. |
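One way to "keep an element count somewhere" without storing the items is a counting wrapper around the iterator (a Python 3 sketch; the class name is made up):

```python
class CountingIterator:
    """Wrap any iterable and count items as they are yielded."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self.count = 0

    def __iter__(self):
        return self

    def __next__(self):
        value = next(self._it)  # StopIteration propagates once exhausted
        self.count += 1
        return value

squares = CountingIterator(x * x for x in range(5))
total = sum(squares)  # consumes the iterator in a single O(n) pass
```

After consumption, `squares.count` holds the number of items seen, while only one element was ever in memory at a time.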
Is there any built-in way to get the length of an iterable in python? | 390,852 | 23 | 2008-12-24T05:55:37Z | 390,928 | 8 | 2008-12-24T06:54:08Z | [
"python",
"iterator"
] | For example, files, in Python, are iterable - they iterate over the lines in the file. I want to count the number of lines.
One quick way is to do this:
```
lines = len(list(open(fname)))
```
However, this loads the whole file into memory (at once). This rather defeats the purpose of an iterator (which only needs to keep the current line in memory).
This doesn't work:
```
lines = len(line for line in open(fname))
```
as generators don't have a length.
Is there any way to do this short of defining a count function?
```
def count(i):
    c = 0
    for el in i: c += 1
    return c
```
EDIT: To clarify, I understand that the whole file will have to be read! I just don't want it in memory all at once =). | Absolutely not, for the simple reason that iterables are not guaranteed to be finite.
Consider this perfectly legal generator function:
```
def forever():
    while True:
        yield "I will run forever"
```
Attempting to calculate the length of this function with `len([x for x in forever()])` will clearly not work.
As you noted, much of the purpose of iterators/generators is to be able to work on a large dataset without loading it all into memory. The fact that you can't get an immediate length should be considered a tradeoff. |
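If you only ever need a bounded prefix of such an iterable, `itertools.islice` lets you cap it safely (a small sketch using the generator above):

```python
import itertools

def forever():
    while True:
        yield "I will run forever"

# Take at most three items; the generator itself never terminates.
first_three = list(itertools.islice(forever(), 3))
```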
Is there any built-in way to get the length of an iterable in python? | 390,852 | 23 | 2008-12-24T05:55:37Z | 390,989 | 7 | 2008-12-24T07:49:02Z | [
"python",
"iterator"
] | For example, files, in Python, are iterable - they iterate over the lines in the file. I want to count the number of lines.
One quick way is to do this:
```
lines = len(list(open(fname)))
```
However, this loads the whole file into memory (at once). This rather defeats the purpose of an iterator (which only needs to keep the current line in memory).
This doesn't work:
```
lines = len(line for line in open(fname))
```
as generators don't have a length.
Is there any way to do this short of defining a count function?
```
def count(i):
    c = 0
    for el in i: c += 1
    return c
```
EDIT: To clarify, I understand that the whole file will have to be read! I just don't want it in memory all at once =). | I've used this redefinition for some time now:
```
def len(thingy):
    try:
        return thingy.__len__()
    except AttributeError:
        return sum(1 for item in iter(thingy))
``` |
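A variant that avoids shadowing the built-in `len` does the same dance under a different name (hypothetical helper):

```python
def count(iterable):
    """Use len() when the object supports it, otherwise count by iterating."""
    try:
        return len(iterable)
    except TypeError:  # no __len__, e.g. a generator
        return sum(1 for _ in iterable)

list_count = count([1, 2, 3])
gen_count = count(x for x in range(4))
```

Keeping the original name `len` free also means other code in the module keeps its usual expectations about the built-in.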
Organising my Python project | 391,879 | 45 | 2008-12-24T17:23:14Z | 391,899 | 30 | 2008-12-24T17:31:40Z | [
"python",
"project-organization"
] | I'm starting a Python project and expect to have 20 or more classes in it. As is good practice I want to put them in a separate file each. However, the project directory quickly becomes swamped with files (or will when I do this).
If I put a file to import in a folder I can no longer import it. How do I import a file from another folder and will I need to reference to the class it contains differently now that it's in a folder?
Thanks in advance | Create an `__init__.py` file in your project's folder, and it will be treated as a package by Python.
Classes in your package directory can then be imported via their modules, using syntax like:
```
from package.module import SomeClass
import package.module
```
Within `__init__.py`, you may create an `__all__` array that defines `from package import *` behavior:
```
# name1 and name2 will be available in calling module's namespace
# when using "from package import *" syntax
__all__ = ['name1', 'name2']
```
And here is [way more information than you even want to know about packages in Python](http://www.python.org/doc/essays/packages.html)
Generally speaking, a good way to learn about how to organize a lot of code is to pick a popular Python package and see how they did it. I'd check out [Django](http://www.djangoproject.com/) and [Twisted](http://twistedmatrix.com/trac/), for starters. |
Organising my Python project | 391,879 | 45 | 2008-12-24T17:23:14Z | 391,902 | 13 | 2008-12-24T17:33:00Z | [
"python",
"project-organization"
] | I'm starting a Python project and expect to have 20 or more classes in it. As is good practice I want to put them in a separate file each. However, the project directory quickly becomes swamped with files (or will when I do this).
If I put a file to import in a folder I can no longer import it. How do I import a file from another folder and will I need to reference to the class it contains differently now that it's in a folder?
Thanks in advance | Python doesn't force you into Java's nasty one-class-per-file style. In fact, it's not even considered good style to put each class in a separate file unless they are huge. (If they are huge, you probably have to do refactoring anyway.) Instead, you should group similar classes and functions in modules. For example, if you are writing a GUI calculator, your package layout might look like this:
```
/amazingcalc
    /__init__.py  # This makes it a Python package and importable.
    /evaluate.py  # Contains the code to actually do calculations.
    /main.py      # Starts the application
    /ui.py        # Contains the code to make a pretty interface
``` |
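To see the layout in action without creating files by hand, the snippet below builds a throwaway version of it in a temporary directory and imports from it (the `add` function is invented purely for the demonstration):

```python
import os
import sys
import tempfile
import textwrap

root = tempfile.mkdtemp()
pkg = os.path.join(root, 'amazingcalc')
os.mkdir(pkg)

# An empty __init__.py is what makes the folder an importable package.
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'evaluate.py'), 'w') as f:
    f.write(textwrap.dedent('''\
        def add(a, b):
            return a + b
    '''))

sys.path.insert(0, root)
from amazingcalc.evaluate import add
result = add(2, 3)
```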
Organising my Python project | 391,879 | 45 | 2008-12-24T17:23:14Z | 391,904 | 7 | 2008-12-24T17:33:09Z | [
"python",
"project-organization"
] | I'm starting a Python project and expect to have 20 or more classes in it. As is good practice I want to put them in a separate file each. However, the project directory quickly becomes swamped with files (or will when I do this).
If I put a file to import in a folder I can no longer import it. How do I import a file from another folder and will I need to reference to the class it contains differently now that it's in a folder?
Thanks in advance | The simple answer is to create an empty file called `__init__.py` in the new folder you made. Then, in your top-level .py file, import it with something like:
```
import mynewsubfolder.mynewclass
``` |
Organising my Python project | 391,879 | 45 | 2008-12-24T17:23:14Z | 391,916 | 22 | 2008-12-24T17:42:54Z | [
"python",
"project-organization"
] | I'm starting a Python project and expect to have 20 or more classes in it. As is good practice I want to put them in a separate file each. However, the project directory quickly becomes swamped with files (or will when I do this).
If I put a file to import in a folder I can no longer import it. How do I import a file from another folder and will I need to reference to the class it contains differently now that it's in a folder?
Thanks in advance | "As is good practice I want to put them in a separate file each. "
This is not actually a very good practice. You should design modules that contain closely-related classes.
As a practical matter, no class actually stands completely alone. Generally classes come in clusters or groups that are logically related. |
Python Optparse list | 392,041 | 33 | 2008-12-24T18:54:52Z | 392,061 | 35 | 2008-12-24T19:03:51Z | [
"python",
"optparse"
] | I'm using the python optparse module in my program, and I'm having trouble finding an easy way to parse an option that contains a list of values.
For example:
```
--groups one,two,three.
```
I'd like to be able to access these values in a list format as `options.groups[]`. Is there an optparse option to convert comma separated values into a list? Or do I have to do this manually? | Look at [option callbacks](http://docs.python.org/2/library/optparse#option-callbacks). Your callback function can parse the value into a list using a basic `value.split(',')` on the `value` argument optparse passes in.
Python Optparse list | 392,041 | 33 | 2008-12-24T18:54:52Z | 392,258 | 73 | 2008-12-24T21:50:34Z | [
"python",
"optparse"
] | I'm using the python optparse module in my program, and I'm having trouble finding an easy way to parse an option that contains a list of values.
For example:
```
--groups one,two,three.
```
I'd like to be able to access these values in a list format as `options.groups[]`. Is there an optparse option to convert comma separated values into a list? Or do I have to do this manually? | S.Lott's answer has already been accepted, but here's a code sample for the archives:
```
from optparse import OptionParser

def foo_callback(option, opt, value, parser):
    setattr(parser.values, option.dest, value.split(','))

parser = OptionParser()
parser.add_option('-f', '--foo',
                  type='string',
                  action='callback',
                  callback=foo_callback)
``` |
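And a quick demonstration of the callback in use, parsing an explicit argument list instead of `sys.argv` (the option name is taken from the question):

```python
from optparse import OptionParser

def groups_callback(option, opt, value, parser):
    setattr(parser.values, option.dest, value.split(','))

parser = OptionParser()
parser.add_option('--groups',
                  type='string',
                  action='callback',
                  callback=groups_callback,
                  dest='groups')

options, args = parser.parse_args(['--groups', 'one,two,three'])
```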
Python Optparse list | 392,041 | 33 | 2008-12-24T18:54:52Z | 29,301,200 | 7 | 2015-03-27T12:53:43Z | [
"python",
"optparse"
] | I'm using the python optparse module in my program, and I'm having trouble finding an easy way to parse an option that contains a list of values.
For example:
```
--groups one,two,three.
```
I'd like to be able to access these values in a list format as `options.groups[]`. Is there an optparse option to convert comma separated values into a list? Or do I have to do this manually? | Again, just for the sake of archive completeness, expanding the example above:
* You can still use "dest" to specify the option name for later access
* Default values cannot be used in such cases (see explanation in [Triggering callback on default value in optparse](http://stackoverflow.com/questions/14568141/triggering-callback-on-default-value-in-optparse))
* If you'd like to validate the input, OptionValueError should be thrown from foo\_callback
The code (with tiny changes) would then be:
```
from optparse import OptionParser

def get_comma_separated_args(option, opt, value, parser):
    setattr(parser.values, option.dest, value.split(','))

parser = OptionParser()
parser.add_option('-f', '--foo',
                  type='string',
                  action='callback',
                  callback=get_comma_separated_args,
                  dest='foo_args_list')
``` |
What are your (concrete) use-cases for metaclasses in Python? | 392,160 | 51 | 2008-12-24T20:13:06Z | 392,255 | 27 | 2008-12-24T21:43:05Z | [
"python",
"metaclass"
] | I have a friend who likes to use metaclasses, and regularly offers them as a solution.
I am of the mind that you almost never need to use metaclasses. Why? because I figure if you are doing something like that to a class, you should probably be doing it to an object. And a small redesign/refactor is in order.
Being able to use metaclasses has caused a lot of people in a lot of places to use classes as some kind of second rate object, which just seems disastrous to me. Is programming to be replaced by meta-programming? The addition of class decorators has unfortunately made it even more acceptable.
So please, I am desperate to know your valid (concrete) use-cases for metaclasses in Python. Or to be enlightened as to why mutating classes is better than mutating objects, sometimes.
I will start:
> Sometimes when using a third-party
> library it is useful to be able to
> mutate the class in a certain way.
(this is the only case I can think of, and it's not concrete) | The purpose of metaclasses isn't to replace the class/object distinction with metaclass/class - it's to change the behaviour of class definitions (and thus their instances) in some way. Effectively it's to alter the behaviour of the class statement in ways that may be more useful for your particular domain than the default. The things I have used them for are:
* Tracking subclasses, usually to register handlers. This is handy when using a plugin-style setup, where you wish to register a handler for a particular thing simply by subclassing and setting up a few class attributes. e.g. suppose you write a handler for various music formats, where each class implements appropriate methods (play / get tags etc) for its type. Adding a handler for a new type becomes:
```
class Mp3File(MusicFile):
    extensions = ['.mp3']  # Register this type as a handler for mp3 files
    ...
    # Implementation of mp3 methods go here
```
The metaclass then maintains a dictionary of `{'.mp3' : Mp3File, ... }` etc., and constructs an object of the appropriate type when you request a handler through a factory function.
* Changing behaviour. You may want to attach a special meaning to certain attributes, resulting in altered behaviour when they are present. For example, you may want to look for methods with the name `_get_foo` and `_set_foo` and transparently convert them to properties. As a real-world example, [here's](http://code.activestate.com/recipes/498149/) a recipe I wrote to give more C-like struct definitions. The metaclass is used to convert the declared items into a struct format string, handling inheritance etc, and produce a class capable of dealing with it.
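The registry idea in the first bullet can be sketched roughly like this — a minimal Python 3 version where the metaclass itself maintains the extension dictionary (the class and function names here are illustrative, not from any real library):

```python
import os

# Hypothetical sketch: the metaclass keeps a registry of handler classes,
# keyed by file extension, and populates it as subclasses are defined.
class HandlerMeta(type):
    registry = {}  # maps file extension -> handler class

    def __init__(cls, name, bases, d):
        super().__init__(name, bases, d)
        for ext in d.get('extensions', ()):
            HandlerMeta.registry[ext] = cls

class MusicFile(metaclass=HandlerMeta):
    extensions = ()  # the base class registers nothing

class Mp3File(MusicFile):
    extensions = ('.mp3',)  # subclassing alone registers the handler

def open_music(path):
    """Factory: look up the handler class by file extension."""
    ext = os.path.splitext(path)[1]
    return HandlerMeta.registry[ext]()
```

Defining `Mp3File` is all it takes: `open_music('song.mp3')` then returns an `Mp3File` instance without any explicit registration call.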
For other real-world examples, take a look at various ORMs, like [SQLAlchemy's](http://www.sqlalchemy.org/) ORM or [SQLObject](http://www.sqlobject.org/). Again, the purpose is to interpret definitions (here SQL column definitions) with a particular meaning. |
What are your (concrete) use-cases for metaclasses in Python? | 392,160 | 51 | 2008-12-24T20:13:06Z | 392,442 | 12 | 2008-12-25T02:23:47Z | [
"python",
"metaclass"
] | I have a friend who likes to use metaclasses, and regularly offers them as a solution.
I am of the mind that you almost never need to use metaclasses. Why? Because I figure if you are doing something like that to a class, you should probably be doing it to an object. And a small redesign/refactor is in order.
Being able to use metaclasses has caused a lot of people in a lot of places to use classes as some kind of second rate object, which just seems disastrous to me. Is programming to be replaced by meta-programming? The addition of class decorators has unfortunately made it even more acceptable.
So please, I am desperate to know your valid (concrete) use-cases for metaclasses in Python. Or to be enlightened as to why mutating classes is better than mutating objects, sometimes.
I will start:
> Sometimes when using a third-party
> library it is useful to be able to
> mutate the class in a certain way.
(this is the only case I can think of, and it's not concrete) | Let's start with Tim Peters' classic quote:
> Metaclasses are deeper magic than 99%
> of users should ever worry about. If
> you wonder whether you need them, you
> don't (the people who actually need
> them know with certainty that they
> need them, and don't need an
> explanation about why). Tim Peters
> (c.l.p post 2002-12-22)
Having said that, I have (periodically) run across true uses of metaclasses. The one that comes to mind is in Django, where all of your models inherit from `models.Model`. `models.Model`, in turn, does some serious magic to wrap your DB models with Django's ORM goodness. That magic happens by way of metaclasses. It creates all manner of exception classes, manager classes, etc. etc.
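A drastically simplified sketch of that pattern — not Django's actual code, just an illustration of a metaclass augmenting every model class as it is defined:

```python
# Toy version of the ModelBase idea: the metaclass runs at class-definition
# time and attaches extra machinery (here, a per-model exception class).
class ModelMeta(type):
    def __new__(meta, name, bases, attrs):
        cls = super().__new__(meta, name, bases, attrs)
        cls.DoesNotExist = type('DoesNotExist', (Exception,), {})
        return cls

class Model(metaclass=ModelMeta):
    pass

class Book(Model):
    pass
```

Each subclass gets its own distinct `DoesNotExist`, created automatically, which is roughly what Django's real metaclass does (among much else).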
See `django/db/models/base.py`, class `ModelBase`, for the beginning of the story. |
What are your (concrete) use-cases for metaclasses in Python? | 392,160 | 51 | 2008-12-24T20:13:06Z | 393,368 | 10 | 2008-12-26T01:35:56Z | [
"python",
"metaclass"
] | I have a friend who likes to use metaclasses, and regularly offers them as a solution.
I am of the mind that you almost never need to use metaclasses. Why? Because I figure if you are doing something like that to a class, you should probably be doing it to an object. And a small redesign/refactor is in order.
Being able to use metaclasses has caused a lot of people in a lot of places to use classes as some kind of second rate object, which just seems disastrous to me. Is programming to be replaced by meta-programming? The addition of class decorators has unfortunately made it even more acceptable.
So please, I am desperate to know your valid (concrete) use-cases for metaclasses in Python. Or to be enlightened as to why mutating classes is better than mutating objects, sometimes.
I will start:
> Sometimes when using a third-party
> library it is useful to be able to
> mutate the class in a certain way.
(this is the only case I can think of, and it's not concrete) | I have a class that handles non-interactive plotting, as a frontend to Matplotlib. However, on occasion one wants to do interactive plotting. With only a couple functions I found that I was able to increment the figure count, call draw manually, etc, but I needed to do these before and after every plotting call. So to create both an interactive plotting wrapper and an offscreen plotting wrapper, I found it was more efficient to do this via metaclasses, wrapping the appropriate methods, than to do something like:
```
class PlottingInteractive:
add_slice = wrap_pylab_newplot(add_slice)
```
This method doesn't keep up with API changes and so on, but one that iterates over the class attributes in `__init__` before re-setting the class attributes is more efficient and keeps things up to date:
```
import types  # needed for the types.MethodType check below

class _Interactify(type):
def __init__(cls, name, bases, d):
super(_Interactify, cls).__init__(name, bases, d)
for base in bases:
for attrname in dir(base):
if attrname in d: continue # If overridden, don't reset
attr = getattr(cls, attrname)
if type(attr) == types.MethodType:
if attrname.startswith("add_"):
setattr(cls, attrname, wrap_pylab_newplot(attr))
elif attrname.startswith("set_"):
setattr(cls, attrname, wrap_pylab_show(attr))
```
Of course, there might be better ways to do this, but I've found this to be effective. Of course, this could also be done in `__new__` or `__init__`, but this was the solution I found the most straightforward. |
What are your (concrete) use-cases for metaclasses in Python? | 392,160 | 51 | 2008-12-24T20:13:06Z | 31,061,875 | 13 | 2015-06-25T22:26:32Z | [
"python",
"metaclass"
] | I have a friend who likes to use metaclasses, and regularly offers them as a solution.
I am of the mind that you almost never need to use metaclasses. Why? Because I figure if you are doing something like that to a class, you should probably be doing it to an object. And a small redesign/refactor is in order.
Being able to use metaclasses has caused a lot of people in a lot of places to use classes as some kind of second rate object, which just seems disastrous to me. Is programming to be replaced by meta-programming? The addition of class decorators has unfortunately made it even more acceptable.
So please, I am desperate to know your valid (concrete) use-cases for metaclasses in Python. Or to be enlightened as to why mutating classes is better than mutating objects, sometimes.
I will start:
> Sometimes when using a third-party
> library it is useful to be able to
> mutate the class in a certain way.
(this is the only case I can think of, and it's not concrete) | I was asked the same question recently, and came up with several answers. I hope it's OK to revive this thread, as I wanted to elaborate on a few of the use cases mentioned, and add a few new ones.
Most metaclasses I've seen do one of two things:
1. Registration (adding a class to a data structure):
```
models = {}
class ModelMetaclass(type):
def __new__(meta, name, bases, attrs):
models[name] = cls = type.__new__(meta, name, bases, attrs)
return cls
class Model(object):
__metaclass__ = ModelMetaclass
```
Whenever you subclass `Model`, your class is registered in the `models` dictionary:
```
>>> class A(Model):
... pass
...
>>> class B(A):
... pass
...
>>> models
{'A': <__main__.A class at 0x...>,
'B': <__main__.B class at 0x...>}
```
This can also be done with class decorators:
```
models = {}
def model(cls):
models[cls.__name__] = cls
return cls
@model
class A(object):
pass
```
Or with an explicit registration function:
```
models = {}
def register_model(cls):
models[cls.__name__] = cls
class A(object):
pass
register_model(A)
```
Actually, this is pretty much the same: you mention class decorators unfavorably, but it's really nothing more than syntactic sugar for a function invocation on a class, so there's no magic about it.
Anyway, the advantage of metaclasses in this case is inheritance, as they work for any subclasses, whereas the other solutions only work for subclasses explicitly decorated or registered.
```
>>> class B(A):
... pass
...
>>> models
{'A': <__main__.A class at 0x...>}  # No B :(
```
2. Refactoring (modifying class attributes or adding new ones):
```
class ModelMetaclass(type):
def __new__(meta, name, bases, attrs):
fields = {}
for key, value in attrs.items():
if isinstance(value, Field):
value.name = '%s.%s' % (name, key)
fields[key] = value
for base in bases:
if hasattr(base, '_fields'):
fields.update(base._fields)
attrs['_fields'] = fields
return type.__new__(meta, name, bases, attrs)
class Model(object):
__metaclass__ = ModelMetaclass
```
Whenever you subclass `Model` and define some `Field` attributes, they are injected with their names (for more informative error messages, for example), and grouped into a `_fields` dictionary (for easy iteration, without having to look through all the class attributes and all its base classes' attributes every time):
```
>>> class A(Model):
... foo = Integer()
...
>>> class B(A):
... bar = String()
...
>>> B._fields
{'foo': Integer('A.foo'), 'bar': String('B.bar')}
```
Again, this can be done (without inheritance) with a class decorator:
```
def model(cls):
fields = {}
for key, value in vars(cls).items():
if isinstance(value, Field):
value.name = '%s.%s' % (cls.__name__, key)
fields[key] = value
for base in cls.__bases__:
if hasattr(base, '_fields'):
fields.update(base._fields)
cls._fields = fields
return cls
@model
class A(object):
foo = Integer()
class B(A):
bar = String()
# B.bar has no name :(
# B._fields is {'foo': Integer('A.foo')} :(
```
Or explicitly:
```
class A(object):
foo = Integer('A.foo')
_fields = {'foo': foo} # Don't forget all the base classes' fields, too!
```
Although, contrary to your advocacy of readable and maintainable non-meta programming, this is much more cumbersome, redundant and error-prone:
```
class B(A):
bar = String()
# vs.
class B(A):
bar = String('bar')
_fields = {'B.bar': bar, 'A.foo': A.foo}
```
Having considered the most common and concrete use cases, the only cases where you absolutely HAVE to use metaclasses are when you want to modify the class name or list of base classes, because once defined, these parameters are baked into the class, and no decorator or function can unbake them.
```
class Metaclass(type):
def __new__(meta, name, bases, attrs):
return type.__new__(meta, 'foo', (int,), attrs)
class Baseclass(object):
__metaclass__ = Metaclass
class A(Baseclass):
pass
class B(A):
pass
print A.__name__ # foo
print B.__name__ # foo
print issubclass(B, A) # False
print issubclass(B, int) # True
```
This may be useful in frameworks for issuing warnings whenever classes with similar names or incomplete inheritance trees are defined, but I can't think of a reason besides trolling to actually change these values. Maybe David Beazley can.
Anyway, in Python 3, metaclasses also have the `__prepare__` method, which lets you evaluate the class body into a mapping other than a `dict`, thus supporting ordered attributes, overloaded attributes, and other wicked cool stuff:
```
import collections
class Metaclass(type):
@classmethod
def __prepare__(meta, name, bases, **kwds):
return collections.OrderedDict()
def __new__(meta, name, bases, attrs, **kwds):
print(list(attrs))
# Do more stuff...
class A(metaclass=Metaclass):
x = 1
y = 2
# prints ['x', 'y'] rather than ['y', 'x']
```
And overloaded attributes:
```
class ListDict(dict):
def __setitem__(self, key, value):
self.setdefault(key, []).append(value)
class Metaclass(type):
@classmethod
def __prepare__(meta, name, bases, **kwds):
return ListDict()
def __new__(meta, name, bases, attrs, **kwds):
print(attrs['foo'])
# Do more stuff...
class A(metaclass=Metaclass):
def foo(self):
pass
def foo(self, x):
pass
# prints [<function foo at 0x...>, <function foo at 0x...>] rather than <function foo at 0x...>
```
You might argue ordered attributes can be achieved with creation counters, and overloading can be simulated with default arguments:
```
import itertools
class Attribute(object):
_counter = itertools.count()
def __init__(self):
self._count = Attribute._counter.next()
class A(object):
x = Attribute()
y = Attribute()
A._order = sorted([(k, v) for k, v in vars(A).items() if isinstance(v, Attribute)],
key = lambda (k, v): v._count)
```
And overloading simulated with default arguments:
```
class A(object):
def _foo0(self):
pass
def _foo1(self, x):
pass
def foo(self, x=None):
if x is None:
return self._foo0()
else:
return self._foo1(x)
```
Besides being much uglier, it's also less flexible: what if you want ordered literal attributes, like integers and strings? What if `None` is a valid value for `x`?
Here's a creative way to solve the first problem:
```
import sys
class Builder(object):
def __call__(self, cls):
cls._order = self.frame.f_code.co_names
return cls
def ordered():
builder = Builder()
def trace(frame, event, arg):
builder.frame = frame
sys.settrace(None)
sys.settrace(trace)
return builder
@ordered()
class A(object):
x = 1
y = 'foo'
print A._order # ['x', 'y']
```
And here's a creative way to solve the second one:
```
_undefined = object()
class A(object):
def _foo0(self):
pass
def _foo1(self, x):
pass
def foo(self, x=_undefined):
if x is _undefined:
return self._foo0()
else:
return self._foo1(x)
```
But this is much, MUCH voodoo-er than a simple metaclass (especially the first one, which really melts your brain). My point is, you look at metaclasses as unfamiliar and counter-intuitive, but you can also look at them as the next step of evolution in programming languages: you just have to adjust your mindset. After all, you could probably do everything in C, including defining a struct with function pointers and passing it as the first argument to its functions. A person seeing C++ for the first time might say, "what is this magic? Why is the compiler implicitly passing `this` to methods, but not to regular and static functions? It's better to be explicit and verbose about your arguments". But then, object-oriented programming is much more powerful once you get it; and so is this, uh... quasi-aspect-oriented programming, I guess. And once you understand metaclasses, they're actually very simple, so why not use them when convenient?
And finally, metaclasses are rad, and programming should be fun. Using standard programming constructs and design patterns all the time is boring and uninspiring, and hinders your imagination. Live a little! Here's a metametaclass, just for you.
```
class MetaMetaclass(type):
def __new__(meta, name, bases, attrs):
def __new__(meta, name, bases, attrs):
cls = type.__new__(meta, name, bases, attrs)
cls._label = 'Made in %s' % meta.__name__
return cls
attrs['__new__'] = __new__
return type.__new__(meta, name, bases, attrs)
class China(type):
__metaclass__ = MetaMetaclass
class Taiwan(type):
__metaclass__ = MetaMetaclass
class A(object):
__metaclass__ = China
class B(object):
__metaclass__ = Taiwan
print A._label # Made in China
print B._label # Made in Taiwan
``` |
Modify bound variables of a closure in Python | 392,349 | 22 | 2008-12-24T23:38:30Z | 392,366 | 17 | 2008-12-24T23:58:23Z | [
"python",
"functional-programming",
"closures"
] | Is there any way to modify the bound value of one of the variables inside a closure? Look at the example to understand it better.
```
def foo():
var_a = 2
var_b = 3
def _closure(x):
return var_a + var_b + x
return _closure
localClosure = foo()
# Local closure is now "return 2 + 3 + x"
a = localClosure(1) # 2 + 3 + 1 == 6
# DO SOME MAGIC HERE TO TURN "var_a" of the closure into 0
# ...but what magic? Is this even possible?
# Local closure is now "return 0 + 3 + x"
b = localClosure(1) # 0 + 3 +1 == 4
``` | I don't think there is any way to do that in Python. When the closure is defined, the current state of variables in the enclosing scope is captured and no longer has a directly referenceable name (from outside the closure). If you were to call `foo()` again, the new closure would have a different set of variables from the enclosing scope.
In your simple example, you might be better off using a class:
```
class foo:
def __init__(self):
self.var_a = 2
self.var_b = 3
def __call__(self, x):
return self.var_a + self.var_b + x
localClosure = foo()
# Local closure is now "return 2 + 3 + x"
a = localClosure(1) # 2 + 3 + 1 == 6
# DO SOME MAGIC HERE TO TURN "var_a" of the closure into 0
# ...but what magic? Is this even possible?
localClosure.var_a = 0
# Local closure is now "return 0 + 3 + x"
b = localClosure(1) # 0 + 3 +1 == 4
```
If you do use this technique I would no longer use the name `localClosure` because it is no longer actually a closure. However, it works the same as one. |
Modify bound variables of a closure in Python | 392,349 | 22 | 2008-12-24T23:38:30Z | 392,372 | 8 | 2008-12-25T00:08:52Z | [
"python",
"functional-programming",
"closures"
] | Is there any way to modify the bound value of one of the variables inside a closure? Look at the example to understand it better.
```
def foo():
var_a = 2
var_b = 3
def _closure(x):
return var_a + var_b + x
return _closure
localClosure = foo()
# Local closure is now "return 2 + 3 + x"
a = localClosure(1) # 2 + 3 + 1 == 6
# DO SOME MAGIC HERE TO TURN "var_a" of the closure into 0
# ...but what magic? Is this even possible?
# Local closure is now "return 0 + 3 + x"
b = localClosure(1) # 0 + 3 +1 == 4
``` | I've found an alternate answer to Greg's, slightly less verbose because it uses Python 2.1's custom function attributes (which conveniently enough can be accessed from inside their own function).
```
def foo():
var_b = 3
def _closure(x):
return _closure.var_a + var_b + x
_closure.func_dict['var_a'] = 2
return _closure
localClosure = foo()
# Local closure is now "return 2 + 3 + x"
a = localClosure(1) # 2 + 3 + 1 == 6
# DO SOME MAGIC HERE TO TURN "var_a" of the closure into 0
# ...but what magic? Is this even possible?
# apparently, it is
localClosure.var_a = 0
# Local closure is now "return 0 + 3 + x"
b = localClosure(1) # 0 + 3 +1 == 4
```
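For what it's worth, the same trick carries over to Python 3, where `func_dict` is spelled `__dict__` — plain attribute assignment on the function object does the same thing:

```python
def foo():
    var_b = 3
    def _closure(x):
        # Read the mutable value off the function object itself
        return _closure.var_a + var_b + x
    _closure.var_a = 2  # Python 3 spelling of _closure.func_dict['var_a'] = 2
    return _closure

localClosure = foo()
print(localClosure(1))   # 6
localClosure.var_a = 0
print(localClosure(1))   # 4
```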
Thought I'd post it for completeness. Cheers anyways. |
Modify bound variables of a closure in Python | 392,349 | 22 | 2008-12-24T23:38:30Z | 392,468 | 19 | 2008-12-25T03:13:56Z | [
"python",
"functional-programming",
"closures"
] | Is there any way to modify the bound value of one of the variables inside a closure? Look at the example to understand it better.
```
def foo():
var_a = 2
var_b = 3
def _closure(x):
return var_a + var_b + x
return _closure
localClosure = foo()
# Local closure is now "return 2 + 3 + x"
a = localClosure(1) # 2 + 3 + 1 == 6
# DO SOME MAGIC HERE TO TURN "var_a" of the closure into 0
# ...but what magic? Is this even possible?
# Local closure is now "return 0 + 3 + x"
b = localClosure(1) # 0 + 3 +1 == 4
``` | It is quite possible in Python 3 thanks to the magic of [nonlocal](http://jeremyhylton.blogspot.com/2007/02/nonlocal-implemented.html).
```
def foo():
var_a = 2
var_b = 3
def _closure(x, magic = None):
nonlocal var_a
if magic is not None:
var_a = magic
return var_a + var_b + x
return _closure
localClosure = foo()
# Local closure is now "return 2 + 3 + x"
a = localClosure(1) # 2 + 3 + 1 == 6
print(a)
# DO SOME MAGIC HERE TO TURN "var_a" of the closure into 0
localClosure(0, 0)
# Local closure is now "return 0 + 3 + x"
b = localClosure(1) # 0 + 3 +1 == 4
print(b)
``` |