Columns: title (string, 12–150 chars) · question_id (int64, 469 – 40.1M) · question_score (int64, 2 – 5.52k) · question_date (2008-08-02 15:11:16 – 2016-10-18 06:16:31) · answer_id (int64, 536 – 40.1M) · answer_score (int64, 7 – 8.38k) · answer_date (2008-08-02 18:49:07 – 2016-10-18 06:19:33) · tags (list, 1–5 items) · question_body_md (string, 15 – 30.2k chars) · answer_body_md (string, 11 – 27.8k chars)
Parallel processing from a command queue on Linux (bash, python, ruby... whatever)
463,963
42
2009-01-21T02:54:45Z
464,038
38
2009-01-21T03:58:50Z
[ "python", "ruby", "bash", "shell", "parallel-processing" ]
I have a list/queue of 200 commands that I need to run in a shell on a Linux server. I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer. When a process finishes I want the next command to be "popped" from the queue and executed. Does anyone have code to solve this problem? **Further elaboration:** There are 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done. The actual problem I'm trying to solve is using `imapsync` to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly.
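The scheme the question describes (a fixed pool of workers popping from a shared queue until it is empty) can be sketched in a few lines of Python. This is only an illustration: the callables stand in for the real shell commands, which in practice would be `subprocess` calls to `imapsync`.

```python
import queue
import threading

def run_pool(commands, max_workers=10):
    """Run commands from a queue with at most max_workers going at once."""
    work = queue.Queue()
    for cmd in commands:
        work.put(cmd)

    done = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                cmd = work.get_nowait()  # ask the queue for the next piece of work
            except queue.Empty:
                return                   # no more work: this thread dies
            result = cmd()               # stand-in for running the shell command
            with lock:
                done.append(result)

    threads = [threading.Thread(target=worker) for _ in range(max_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                         # all threads dead == all work done
    return done
```

Short jobs and long jobs mix freely: a worker that finishes a quick mailbox simply pops the next one, so all ten slots stay busy until the queue drains.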
I would imagine you could do this using `make` and the `make -j xx` command. Perhaps a makefile like this:

```
all: usera userb userc ...

usera:
	imapsync usera

userb:
	imapsync userb

...
```

Then run it with `make -j 10 -f makefile`.
628,543
12
2009-03-10T00:31:35Z
PPSS, the Parallel Processing Shell Script, was written for exactly this kind of job. Google for the name and you will find it; I won't linkspam.
2,148,784
24
2010-01-27T17:00:56Z
[Parallel](https://savannah.nongnu.org/projects/parallel/) is made exactly for this purpose.

```
cat userlist | parallel imapsync
```

One of the beauties of [Parallel](https://savannah.nongnu.org/projects/parallel/) compared to other solutions is that it makes sure output is not mixed. Doing `traceroute` in [Parallel](https://savannah.nongnu.org/projects/parallel/) works fine, for example:

```
(echo foss.org.my; echo www.debian.org; echo www.freenetproject.org) | parallel traceroute
```
How are POST and GET variables handled in Python?
464,040
103
2009-01-21T03:59:03Z
464,087
25
2009-01-21T04:31:44Z
[ "python", "post", "get" ]
In PHP you can just use `$_POST` for POST and `$_GET` for GET (Query string) variables. What's the equivalent in Python?
They are stored in the `cgi.FieldStorage` object.

```
import cgi
form = cgi.FieldStorage()
print "The user entered %s" % form.getvalue("uservalue")
```
464,977
187
2009-01-21T11:53:01Z
Suppose you're posting an HTML form with this:

```
<input type="text" name="username">
```

If using [raw cgi](http://docs.python.org/library/cgi.html):

```
import cgi
form = cgi.FieldStorage()
print form["username"]
```

If using [Django](http://docs.djangoproject.com/), [Pylons](http://pylonshq.com/), [Flask](http://flask.pocoo.org/) or [Pyramid](http://www.pylonsproject.org/projects/pyramid/about):

```
print request.GET['username']   # for GET form method
print request.POST['username']  # for POST form method
```

Using [Turbogears](http://turbogears.org/), [Cherrypy](http://www.cherrypy.org/):

```
from cherrypy import request
print request.params['username']
```

[Web.py](http://webpy.org):

```
form = web.input()
print form.username
```

[Werkzeug](http://werkzeug.pocoo.org/):

```
print request.form['username']
```

If using Cherrypy or Turbogears, you can also define your handler function taking a parameter directly:

```
def index(self, username):
    print username
```

[Google App Engine](https://developers.google.com/appengine/docs/python/gettingstartedpython27/introduction):

```
class SomeHandler(webapp2.RequestHandler):
    def post(self):
        name = self.request.get('username')  # this will get the value from the field named username
        self.response.write(name)            # this will write on the document
```

So you really will have to choose one of those frameworks.
11,353,284
21
2012-07-05T22:15:06Z
I've found nosklo's answer very extensive and useful! For those who, like myself, might also find accessing the raw request data useful, I would like to add the way to do that:

```
import os, sys

# the query string, which contains the raw GET data
# (for example, for http://example.com/myscript.py?a=b&c=d&e
# this is "a=b&c=d&e")
os.getenv("QUERY_STRING")

# the raw POST data
sys.stdin.read()
```
27,893,309
13
2015-01-11T23:26:51Z
I know this is an old question. Yet it's surprising that no good answer was given.

First of all, the question is completely valid without mentioning the framework. The CONTEXT is PHP-language equivalence. Although there are many ways to get the query-string parameters in Python, the framework variables are just conveniently populated. In PHP, $\_GET and $\_POST are also convenience variables. They are parsed from QUERY\_STRING and php://input respectively. In Python, the equivalents would be os.getenv('QUERY\_STRING') and sys.stdin.read(). Remember to import the os and sys modules.

We have to be careful with the word "CGI" here, especially when talking about two languages and their commonalities when interfacing with a web server:

1. CGI, as a protocol, defines the data-transport mechanism in the HTTP protocol.
2. Python can be configured to run as a CGI script in Apache.
3. The cgi module in Python offers some convenience functions.

Since the HTTP protocol is language-independent, and Apache's CGI extension is also language-independent, getting the GET and POST parameters should bear only syntax differences across languages. Here's the Python routine to populate a GET dictionary:

```
GET = {}
args = os.getenv("QUERY_STRING").split('&')
for arg in args:
    t = arg.split('=')
    if len(t) > 1:
        k, v = t
        GET[k] = v
```

and for POST:

```
POST = {}
args = sys.stdin.read().split('&')
for arg in args:
    t = arg.split('=')
    if len(t) > 1:
        k, v = t
        POST[k] = v
```

You can now access the fields as follows:

```
print GET.get('user_id')
print POST.get('user_name')
```

I must also point out that the cgi module doesn't work well. Consider this HTTP request:

```
POST /test.py?user_id=6 HTTP/1.1

user_name=Bob&age=30
```

Using `cgi.FieldStorage().getvalue('user_id')` will cause a null pointer exception because the module blindly checks the POST data, ignoring the fact that a POST request can carry GET parameters too.
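Rather than splitting on `&` and `=` by hand as above, the standard library ships a query-string parser that handles the same job (shown here with Python 3's `urllib.parse`; in Python 2 the same function lives in the `urlparse` module):

```python
from urllib.parse import parse_qs  # Python 2: from urlparse import parse_qs

raw = "user_id=6&user_name=Bob&age=30"
params = parse_qs(raw)

# values are lists, because a key may legally repeat in a query string
print(params)  # {'user_id': ['6'], 'user_name': ['Bob'], 'age': ['30']}
```

It also takes care of percent-decoding, which the hand-rolled loop above does not.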
Combining two sorted lists in Python
464,342
46
2009-01-21T07:33:47Z
464,350
24
2009-01-21T07:36:52Z
[ "python", "list", "sorting" ]
I have two lists of objects. Each list is already sorted by a property of the object that is of the datetime type. I would like to combine the two lists into one sorted list. Is the best way just to do a sort or is there a smarter way to do this in Python?
This is simply merging. Treat each list as if it were a stack, and continuously pop the smaller of the two stack heads, adding the item to the result list, until one of the stacks is empty. Then add all remaining items to the resulting list.
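That description transcribes directly into a short function. This is only a sketch: it uses indices rather than destructive pops, which is cheaper on Python lists.

```python
def merge_sorted(a, b):
    """Merge two already-sorted lists by repeatedly taking the smaller head."""
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    # one list is exhausted; append whatever remains of the other
    result.extend(a[i:])
    result.extend(b[j:])
    return result

print(merge_sorted([1, 3, 4, 7], [0, 2, 5, 6, 8, 9]))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

For objects sorted by a `datetime` attribute, the comparison would be on that attribute instead of the items themselves.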
464,538
81
2009-01-21T09:14:08Z
People seem to be overcomplicating this. Just combine the two lists, then sort them:

```
>>> l1 = [1, 3, 4, 7]
>>> l2 = [0, 2, 5, 6, 8, 9]
>>> l1.extend(l2)
>>> sorted(l1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

...or shorter (and without modifying `l1`):

```
>>> sorted(l1 + l2)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

...easy! Plus, it's using only two built-in functions, so assuming the lists are of a reasonable size, it should be quicker than implementing the sorting/merging in a loop. More importantly, the above is much less code, and very readable.

If your lists are large (over a few hundred thousand, I would guess), it may be quicker to use an alternative/custom sorting method, but there are likely other optimisations to be made first (e.g. not storing millions of `datetime` objects).

Using `timeit.Timer().repeat()` (which repeats the functions 1,000,000 times), I loosely benchmarked it against [ghoseb's](http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python#464454) solution, and `sorted(l1+l2)` is substantially quicker. `merge_sorted_lists` took:

```
[9.7439379692077637, 9.8844599723815918, 9.552299976348877]
```

`sorted(l1+l2)` took:

```
[2.860386848449707, 2.7589840888977051, 2.7682540416717529]
```
464,767
10
2009-01-21T10:36:56Z
There is a slight flaw in [ghoseb's](http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python#464454) solution, making it O(n\*\*2) rather than O(n). The problem is that it is performing:

```
item = l1.pop(0)
```

With linked lists or deques this would be an O(1) operation, so it wouldn't affect complexity, but since Python lists are implemented as vectors, this copies the rest of the elements of l1 one space left, an O(n) operation. Since this is done on each pass through the list, it turns an O(n) algorithm into an O(n\*\*2) one. This can be corrected by using a method that doesn't alter the source lists, but just keeps track of the current position.

I've tried benchmarking a corrected algorithm vs a simple `sorted(l1+l2)` as suggested by [dbr](http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python#464538):

```
def merge(l1, l2):
    if not l1:
        return list(l2)
    if not l2:
        return list(l1)
    # l2 will contain last element.
    if l1[-1] > l2[-1]:
        l1, l2 = l2, l1
    it = iter(l2)
    y = it.next()
    result = []
    for x in l1:
        while y < x:
            result.append(y)
            y = it.next()
        result.append(x)
    result.append(y)
    result.extend(it)
    return result
```

I've tested these with lists generated with:

```
l1 = sorted([random.random() for i in range(NITEMS)])
l2 = sorted([random.random() for i in range(NITEMS)])
```

For various sizes of list, I get the following timings (repeating 100 times):

```
# items:   1000   10000  100000  1000000
merge  :  0.079   0.798   9.763  109.044
sort   :  0.020   0.217   5.948  106.882
```

So in fact, it looks like dbr is right: just using sorted() is preferable unless you're expecting very large lists, though it does have worse algorithmic complexity. The break-even point is at around a million items in each source list (2 million total). One advantage of the merge approach though is that it is trivial to rewrite as a generator, which will use substantially less memory (no need for an intermediate list).

**[Edit]** I've retried this with a situation closer to the question: using a list of objects containing a field "`date`" which is a datetime object. The above algorithm was changed to compare against `.date` instead, and the sort method was changed to:

```
return sorted(l1 + l2, key=operator.attrgetter('date'))
```

This does change things a bit. The comparison being more expensive means that the number of comparisons we perform becomes more important, relative to the constant-time speed of the implementation. This means merge makes up lost ground, surpassing the sort() method at 100,000 items instead. Comparing based on an even more complex object (large strings or lists for instance) would likely shift this balance even more.

```
# items:   1000   10000  100000  1000000[1]
merge  :  0.161   2.034  23.370   253.68
sort   :  0.111   1.523  25.223   313.20
```

[1]: Note: I actually only did 10 repeats for 1,000,000 items and scaled up accordingly, as it was pretty slow.
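For what it's worth, the generator rewrite mentioned at the end of that answer can look like this (a sketch in Python 3 syntax, using the built-in `next()` with a sentinel instead of the old `it.next()`):

```python
def imerge(a, b):
    """Lazily merge two sorted iterables without building an intermediate list."""
    ia, ib = iter(a), iter(b)
    sentinel = object()          # marks an exhausted iterator
    x, y = next(ia, sentinel), next(ib, sentinel)
    while x is not sentinel and y is not sentinel:
        if x <= y:
            yield x
            x = next(ia, sentinel)
        else:
            yield y
            y = next(ib, sentinel)
    while x is not sentinel:     # drain whichever side remains
        yield x
        x = next(ia, sentinel)
    while y is not sentinel:
        yield y
        y = next(ib, sentinel)
```

Because it yields one item at a time, it can merge two large streams while holding only two elements in memory.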
465,043
68
2009-01-21T12:16:08Z
> is there a smarter way to do this in Python

This hasn't been mentioned, so I'll go ahead: there is a [merge stdlib function](http://svn.python.org/view/python/trunk/Lib/heapq.py?view=markup) in the `heapq` module of Python 2.6+. If all you're looking to do is get things done, this might be a better idea. Of course, if you want to implement your own, the merge of merge-sort is the way to go.

```
>>> list1 = [1, 5, 8, 10, 50]
>>> list2 = [3, 4, 29, 41, 45, 49]
>>> from heapq import merge
>>> list(merge(list1, list2))
[1, 3, 4, 5, 8, 10, 29, 41, 45, 49, 50]
```
482,848
44
2009-01-27T10:09:03Z
Long story short, unless `len(l1 + l2) ~ 1000000`, use:

```
L = l1 + l2
L.sort()
```

![merge vs. sort comparison](http://i.stack.imgur.com/XsRdj.png)

Description of the figure and source code can be found [here](http://stackoverflow.com/questions/464960/code-golf-combining-multiple-sorted-lists-into-a-single-sorted-list#464967). The figure was generated by the following command:

```
$ python make-figures.py --nsublists 2 --maxn=0x100000 -s merge_funcs.merge_26 -s merge_funcs.sort_builtin
```
Python regular expressions - how to capture multiple groups from a wildcard expression?
464,736
29
2009-01-21T10:29:31Z
464,755
34
2009-01-21T10:33:24Z
[ "python", "regex", "lexical-analysis" ]
I have a Python regular expression that contains a group which can occur zero or many times, but when I retrieve the list of groups afterwards, only the last one is present. Example:

```
re.search("(\w)*", "abcdefg").groups()
```

This returns the tuple `('g',)`. I need it to return `('a','b','c','d','e','f','g')`. Is that possible? How can I do it?
``` re.findall(r"\w","abcdefg") ```
464,879
23
2009-01-21T11:19:08Z
In addition to [Douglas Leeder's solution](http://stackoverflow.com/questions/464736/python-regular-expressions-how-to-capture-multiple-groups-from-a-wildcard-expre#464755), here is the explanation: in regular expressions, the group count is fixed. Placing a quantifier behind a group does not increase the group count (imagine all other group indexes incrementing because an earlier group matched more than once). Groups with quantifiers are a way of making a complex sub-expression atomic when it needs to match more than once. The regex engine has no other option than to save only the last match to the group. In short: there is no way to achieve what you want with a single "unarmed" regular expression, and you have to find another way.
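Both behaviours can be checked in a couple of lines:

```python
import re

# a quantified group keeps only its final repetition
m = re.search(r"(\w)*", "abcdefg")
print(m.groups())  # ('g',)

# findall applied to the unquantified pattern returns every match instead
print(re.findall(r"\w", "abcdefg"))  # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
```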
Python code to pick out all possible combinations from a list?
464,864
123
2009-01-21T11:13:40Z
464,882
146
2009-01-21T11:20:04Z
[ "python", "combinations" ]
I have a list with 15 numbers in, and I need to write some code that produces all 32,768 combinations of those numbers. I've found [some code](http://desk.stinkpot.org:8080/tricks/index.php/2008/04/get-all-possible-combinations-of-a-lists-elements-in-python/) (by googling) that apparently does what I'm looking for, but I found the code fairly opaque and am wary of using it. Plus I have a feeling there must be a more elegant solution. The only thing that occurs to me would be to just loop through the decimal integers 1-32768 and convert those to binary, and use the binary representation as a filter to pick out the appropriate numbers. Does anyone know of a better way? Using map(), maybe?
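For what it's worth, the binary-representation idea sketched in the question works directly as a generator. This is only an illustration of that approach, not necessarily the most elegant answer:

```python
def masked_combinations(items):
    """Yield every subset of items, one per bitmask 0 .. 2**n - 1."""
    n = len(items)
    for mask in range(2 ** n):
        # keep item i whenever bit i of the mask is set
        yield [x for i, x in enumerate(items) if mask >> i & 1]

# 15 items give 2**15 == 32768 subsets, matching the count in the question
subsets = list(masked_combinations(range(15)))
print(len(subsets))  # 32768
```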
Have a look at [itertools.combinations](http://docs.python.org/library/itertools.html#itertools.combinations):

> ```
> itertools.combinations(iterable, r)
> ```
>
> Return r length subsequences of elements from the input iterable.
>
> Combinations are emitted in lexicographic sort order. So, if the input iterable is sorted, the combination tuples will be produced in sorted order.

Since 2.6, batteries are included!
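A quick, runnable illustration of the call described in that quote:

```python
from itertools import combinations

# all length-2 subsequences of a sorted input come out in sorted order
pairs = list(combinations([1, 2, 3, 4], 2))
print(pairs)  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```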
5,898,031
327
2011-05-05T12:56:24Z
[This answer](http://stackoverflow.com/questions/464864/python-code-to-pick-out-all-possible-combinations-from-a-list/464882#464882) missed one aspect: the OP asked for ALL combinations... not just combinations of length "r". So you'd either have to loop through all lengths "L":

```
import itertools

stuff = [1, 2, 3]
for L in range(0, len(stuff) + 1):
    for subset in itertools.combinations(stuff, L):
        print(subset)
```

Or -- if you want to get snazzy (or bend the brain of whoever reads your code after you) -- you can generate the chain of "combinations()" generators, and iterate through that:

```
from itertools import chain, combinations

def all_subsets(ss):
    return chain(*map(lambda x: combinations(ss, x), range(0, len(ss) + 1)))

for subset in all_subsets(stuff):
    print(subset)
```
6,542,458
23
2011-07-01T00:21:10Z
Here's a lazy one-liner, also using itertools:

```
from itertools import compress, product

def combinations(items):
    return ( set(compress(items, mask)) for mask in product(*[[0, 1]] * len(items)) )
    # alternative: ... for mask in product([0, 1], repeat=len(items)) )
```

Main idea behind this answer: there are 2^N combinations -- same as the number of binary strings of length N. For each binary string, you pick all elements corresponding to a "1".

```
items=abc * mask=###
 |
 V
000 ->
001 ->   c
010 ->  b
011 ->  bc
100 -> a
101 -> a c
110 -> ab
111 -> abc
```

Things to consider:

* This requires that you can call `len(...)` on `items` (workaround: if `items` is something like an iterable like a generator, turn it into a list first with `items=list(_itemsArg)`)
* This requires that the order of iteration on `items` is not random (workaround: don't be insane)
* This requires that the items are unique, or else `{2,2,1}` and `{2,1,1}` will both collapse to `{2,1}` (workaround: use `collections.Counter` as a drop-in replacement for `set`; it's basically a multiset... though you may need to later use `tuple(sorted(Counter(...).elements()))` if you need it to be hashable)

---

**Demo**

```
>>> list(combinations(range(4)))
[set(), {3}, {2}, {2, 3}, {1}, {1, 3}, {1, 2}, {1, 2, 3}, {0}, {0, 3}, {0, 2}, {0, 2, 3}, {0, 1}, {0, 1, 3}, {0, 1, 2}, {0, 1, 2, 3}]

>>> list(combinations('abcd'))
[set(), {'d'}, {'c'}, {'c', 'd'}, {'b'}, {'b', 'd'}, {'c', 'b'}, {'c', 'b', 'd'}, {'a'}, {'a', 'd'}, {'a', 'c'}, {'a', 'c', 'd'}, {'a', 'b'}, {'a', 'b', 'd'}, {'a', 'c', 'b'}, {'a', 'c', 'b', 'd'}]
```
7,173,919
11
2011-08-24T10:24:55Z
I agree with Dan H that Ben indeed asked for **all** combinations. `itertools.combinations()` does not give all combinations. Another issue is, if the input iterable is big, it is perhaps better to return a generator instead of everything in a list:

```
import itertools

def all_combinations(iterable):
    # yield must live inside a function, so the loops are wrapped in a generator
    for s in xrange(len(iterable) + 1):
        for comb in itertools.combinations(iterable, s):
            yield comb
```
23,743,696
8
2014-05-19T17:25:51Z
Here is one using recursion:

```
>>> import copy
>>> def combinations(target, data):
...     for i in range(len(data)):
...         new_target = copy.copy(target)
...         new_target.append(data[i])
...         new_data = data[i+1:]
...         print new_target
...         combinations(new_target, new_data)
...
>>> target = []
>>> data = ['a', 'b', 'c', 'd']
>>> combinations(target, data)
['a']
['a', 'b']
['a', 'b', 'c']
['a', 'b', 'c', 'd']
['a', 'b', 'd']
['a', 'c']
['a', 'c', 'd']
['a', 'd']
['b']
['b', 'c']
['b', 'c', 'd']
['b', 'd']
['c']
['c', 'd']
['d']
```
24,402,296
12
2014-06-25T07:08:01Z
This one-liner gives you all the combinations (between `0` and `n` items if the original list/set contains `n` distinct elements) and uses the native method [`itertools.combinations`](https://docs.python.org/2/library/itertools.html#itertools.combinations):

```
from itertools import combinations

input = ['a', 'b', 'c', 'd']
output = sum([map(list, combinations(input, i)) for i in range(len(input) + 1)], [])
```

---

The output will be:

```
[[], ['a'], ['b'], ['c'], ['d'], ['a', 'b'], ['a', 'c'], ['a', 'd'], ['b', 'c'], ['b', 'd'], ['c', 'd'], ['a', 'b', 'c'], ['a', 'b', 'd'], ['a', 'c', 'd'], ['b', 'c', 'd'], ['a', 'b', 'c', 'd']]
```

---

Try it online: <http://ideone.com/COghfX>
29,092,176
8
2015-03-17T05:52:28Z
**You can generate all combinations of a list in Python using this simple code:**

```
import itertools

a = [1, 2, 3, 4]
for i in xrange(0, len(a) + 1):
    print list(itertools.combinations(a, i))
```

**The result would be:**

```
[()]
[(1,), (2,), (3,), (4,)]
[(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
[(1, 2, 3, 4)]
```
Writing unit tests in Django / Python
465,065
28
2009-01-21T12:22:24Z
465,242
9
2009-01-21T13:12:53Z
[ "python", "django", "unit-testing" ]
I've not used unit tests before, other than a quick introduction in a uni course. I'm currently writing an application, though, and would like to teach myself TDD in the process. The problem is, I've no idea what to test or really how. I'm writing a Django application, and so far have only created the models (and customised the admin application). This is how I've written the skeletons of my tests so far:

```
class ModelTests(TestCase):
    fixtures = ['initial_data.json',]

    def setUp(self):
        pass

    def testSSA(self):
        ssa = SSA.objects.create(name="sdfsdf", cost_center=1111, street_num=8,
                                 street_name="dfsdfsf Street", suburb="sdfsdfsdf",
                                 post_code=3333)

    def testResident(self):
        pass

    def testSSA_Client(self):
        pass
```

I planned to write a function to test each model within the ModelTests class. Is this a good way of writing tests? Also, what exactly should I be testing for? That creating a model with all of the fields completed works? That a half-complete model fails? That any special cases are tested (like a null and is\_required=False)? I trust the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods, should I?

What do I need to test for a web application written in Django/Python? Some examples would be nice.
I'm not exactly sure of the specifics of what you're trying to test here (I'd need more code snippets for that), but I can give you some general advice.

First, read the unit-testing chapter of "Dive into Python" (it's free online: <http://diveintopython3.ep.io/unit-testing.html>); it's a great explanation of unit testing in general, what you need to do, and why.

Second, with regard to TDD: it is a valuable practice, but be careful about growing too dependent on it, as I've found it can lead to over-specifying software, and further on to having software that cannot be re-developed and adapted to new tasks. This is just my experience, mind; provided you don't use it dogmatically, TDD is valuable.

Third, it strikes me that the best piece of advice for your specific situation is to *strive to test your logic, but not the logic of frameworks that you depend on*. That means that testing, say, that half-complete models fail may often not be appropriate, since that is not your logic but Django's, and so should already be tested. More valuable would be to test a few expected cases: instantiations that you expect, exceptions that you expect, etc., to make sure your model specification is sound, and then move on to the more substantial logic of your application.
Writing unit tests in Django / Python
465,065
28
2009-01-21T12:22:24Z
465,684
36
2009-01-21T15:08:38Z
[ "python", "django", "unit-testing" ]
I've not used Unit Tests before other than a quick introduction in a Uni course. I'm currently writing an application though and would like to teach myself TDD in the process. The problem is, I've no idea what to test or really how. I'm writing a Django application, and so far have only created the models (and customised the admin application). This is how I've written the skeletons of my tests so far:

```
class ModelTests(TestCase):
    fixtures = ['initial_data.json',]

    def setUp(self):
        pass

    def testSSA(self):
        ssa = SSA.objects.create(name="sdfsdf", cost_center=1111,
                                 street_num=8, street_name="dfsdfsf Street",
                                 suburb="sdfsdfsdf", post_code=3333)

    def testResident(self):
        pass

    def testSSA_Client(self):
        pass
```

I planned to write a function to test each model within the ModelTests class. Is this a good way of writing tests? Also, what exactly should I be testing for? That creating a model with all of the fields completed works? That a half complete model fails? That any special cases are tested (like a null and is\_required=False)? I trust the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods should I? What do I need to test for a web application written in Django/Python? Some examples would be nice.
**Is a function to test each model within the ModelTests class a good way of writing tests?**

No.

**What exactly should I be testing for?**

* That creating a model with all of the fields completed works?
* That a half complete model fails?
* That any special cases are tested (like a null and is\_required=False)?
* I've trust in the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods should I?

Not much of that. You might test validation rules, but that isn't meaningful until you've defined some Form objects. Then you have something to test -- does the form enforce all the rules. You'll need at least one TestCase class for each form. A function will be a scenario -- different combinations of inputs that are allowed or not allowed.

For each Model class, you'll need at least one TestCase class definition. TestCases are cheap, define lots of them.

Your model embodies your "business entity" definitions. Your models will have methods that implement business rules. Your methods will do things like summarize, filter, calculate, aggregate, reduce, all kinds of things. You'll have functions for each of these features of a model class.

You're not testing Django. You're testing how your business rules actually work in Django.

Later, when you have more stuff in your application (forms, views, urls, etc.) you'll want to use the Django unittest client to exercise each method for each url. Again, one TestCase per url.
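A sketch of "one TestCase per model class, one function per scenario" in plain `unittest` — the `Mailbox` model and its `total_size` method are hypothetical stand-ins, since the answer names no concrete model:

```python
import unittest

class Mailbox(object):
    """Hypothetical 'business entity' -- a stand-in model class."""
    def __init__(self, message_sizes):
        self.message_sizes = message_sizes

    def total_size(self):
        # A business rule we own, and therefore test.
        return sum(self.message_sizes)

class MailboxTests(unittest.TestCase):
    # One TestCase for the model; each function is one scenario.
    def test_empty_mailbox(self):
        self.assertEqual(Mailbox([]).total_size(), 0)

    def test_typical_mailbox(self):
        self.assertEqual(Mailbox([10, 20, 5]).total_size(), 35)
```

In a real Django project `Mailbox` would subclass `models.Model` and the TestCase would subclass `django.test.TestCase`, but the shape — a class of scenario functions per model — is the same.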
How can I print over the current line in a command line application?
465,348
10
2009-01-21T13:43:17Z
465,475
16
2009-01-21T14:15:10Z
[ "python", "windows", "command-line" ]
On Unix, I can either use \r (carriage return) or \b (backspace) to print over text already visible in the shell (i.e. overwrite the current line again). Can I achieve the same effect in a Windows command line from a Python script? I tried the curses module but it doesn't seem to be available on Windows.
yes:

```
import sys
import time

def restart_line():
    sys.stdout.write('\r')
    sys.stdout.flush()

sys.stdout.write('some data')
sys.stdout.flush()
time.sleep(2)  # wait 2 seconds...
restart_line()
sys.stdout.write('other different data')
sys.stdout.flush()
```
How can I print over the current line in a command line application?
465,348
10
2009-01-21T13:43:17Z
21,672,188
7
2014-02-10T08:28:27Z
[ "python", "windows", "command-line" ]
On Unix, I can either use \r (carriage return) or \b (backspace) to print over text already visible in the shell (i.e. overwrite the current line again). Can I achieve the same effect in a Windows command line from a Python script? I tried the curses module but it doesn't seem to be available on Windows.
```
import sys
import time

for i in range(10):
    print '\r',  # print is Ok, and comma is needed.
    time.sleep(0.3)
    print i,
    sys.stdout.flush()  # flush is needed.
```

And if on the IPython-notebook, just like this:

```
import time
from IPython.display import clear_output

for i in range(10):
    time.sleep(0.25)
    print(i)
    clear_output(wait=True)
```

<http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/notebooks/Animations%20Using%20clear_output.ipynb>
How do I take the output of one program and use it as the input of another?
465,421
4
2009-01-21T14:01:11Z
465,466
10
2009-01-21T14:12:23Z
[ "python", "ruby", "io" ]
I've looked at [this](http://stackoverflow.com/questions/316007/what-is-the-best-way-to-make-the-output-of-one-stream-the-input-to-another) and it wasn't much help. I have a Ruby program that puts a question to the cmd line and I would like to write a Python program that can return an answer. Does anyone know of any links or in general how I might go about doing this? Thanks for your help.

**EDIT**

Thanks to the guys that mentioned *piping*. I haven't used it much, and was glad it was brought up since it forced me to look into it more.
```
import subprocess

p = subprocess.Popen(['ruby', 'ruby_program.rb'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
ruby_question = p.stdout.readline()
answer = calculate_answer(ruby_question)
p.stdin.write(answer)
print p.communicate()[0]  # prints further info ruby may show.
```

The last 2 lines could be made into one:

```
print p.communicate(answer)[0]
```
What is a simple way to generate keywords from a text?
465,795
17
2009-01-21T15:43:36Z
465,909
16
2009-01-21T16:14:29Z
[ "python", "perl", "metadata" ]
I suppose I could take a text and remove high frequency English words from it. By keywords, I mean that I want to extract the words that are most characterizing of the content of the text (tags). It doesn't have to be perfect; a good approximation is perfect for my needs. Has anyone done anything like that? Do you know of a Perl or Python library that does that? Lingua::EN::Tagger is exactly what I asked for; however, I need a library that can work for French text too.
The name for the "high frequency English words" is [stop words](http://en.wikipedia.org/wiki/Stop_words) and there are many lists available. I'm not aware of any python or perl libraries, but you could encode your stop word list in a binary tree or hash (or you could use python's frozenset), then as you read each word from the input text, check if it is in your 'stop list' and filter it out. Note that after you remove the stop words, you'll need to do some [stemming](http://en.wikipedia.org/wiki/Stemming) to normalize the resulting text (remove plurals, -ings, -eds), then remove all the duplicate "keywords".
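A rough sketch of the pipeline described above. The stop list here is tiny and the "stemming" deliberately crude — a real implementation would use a full stop-word list and a proper stemmer (e.g. Porter's algorithm):

```python
# Minimal keyword extraction: filter stop words, crudely stem, deduplicate.
STOP_WORDS = frozenset(["the", "a", "of", "and", "to", "is", "are"])

def crude_stem(word):
    # Very naive stemming: strip a few common English suffixes.
    for suffix in ("ings", "ing", "eds", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def keywords(text):
    seen, result = set(), []
    for raw in text.lower().split():
        word = raw.strip(".,!?;:")
        if word and word not in STOP_WORDS:
            stem = crude_stem(word)
            if stem not in seen:
                seen.add(stem)
                result.append(stem)
    return result

print(keywords("The tagging of texts and the tagged texts are related."))
```

Note how the crude stemmer already collapses "tagging" and "tagged" into one keyword, which is exactly the normalization step mentioned above.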
What is a simple way to generate keywords from a text?
465,795
17
2009-01-21T15:43:36Z
466,037
9
2009-01-21T16:44:49Z
[ "python", "perl", "metadata" ]
I suppose I could take a text and remove high frequency English words from it. By keywords, I mean that I want to extract the words that are most characterizing of the content of the text (tags). It doesn't have to be perfect; a good approximation is perfect for my needs. Has anyone done anything like that? Do you know of a Perl or Python library that does that? Lingua::EN::Tagger is exactly what I asked for; however, I need a library that can work for French text too.
You could try using the perl module [Lingua::EN::Tagger](http://search.cpan.org/~acoburn/Lingua-EN-Tagger-0.15/Tagger.pm) for a quick and easy solution. A more complicated module [Lingua::EN::Semtags::Engine](http://code.google.com/p/lingua-en-semtags-engine/) uses Lingua::EN::Tagger with a WordNet database to get a more structured output. Both are pretty easy to use, just check out the documentation on CPAN or use perldoc after you install the module.
How to make a model instance read-only after saving it once?
466,135
3
2009-01-21T17:11:14Z
466,641
7
2009-01-21T19:29:25Z
[ "python", "django", "django-models", "django-admin", "django-signals" ]
One of the functionalities in a Django project I am writing is sending a newsletter. I have a model, `Newsletter`, and a function, `send_newsletter`, which I have registered to listen to `Newsletter`'s `post_save` signal. When the newsletter object is saved via the admin interface, `send_newsletter` checks if `created` is True, and if yes it actually sends the mail.

However, it doesn't make much sense to edit a newsletter that has already been sent, for the obvious reasons. Is there a way of making the `Newsletter` object read-only once it has been saved?

## Edit:

I know I can override the `save` method of the object to raise an error or do nothing if the object existed. However, I don't see the point of doing that. As for the former, I don't know where to catch that error and how to communicate the user the fact that the object wasn't saved. As for the latter, giving the user false feedback (the admin interface saying that the save succeded) doesn't seem like a Good Thing.

What I really want is allow the user to use the Admin interface to write the newsletter and send it, and then browse the newsletters that have already been sent. I would like the admin interface to show the data for sent newsletters in an non-editable input box, without the "Save" button. Alternatively I would like the "Save" button to be inactive.
You can check if it is creation or update in the model's `save` method:

```
def save(self, *args, **kwargs):
    if self.pk:
        raise StandardError('Can\'t modify bla bla bla.')
    super(Newsletter, self).save(*args, **kwargs)
```

Code above will raise an exception if you try to save an existing object. **Objects not previously persisted don't have their primary keys set.**
How can I hide the console window in a PyQt app running on Windows?
466,203
12
2009-01-21T17:25:38Z
466,222
17
2009-01-21T17:28:34Z
[ "python", "windows", "command-line", "pyqt" ]
Surely this is possible? I have been hunting through PyQt tutorials and documentation but cannot find the answer to it. Probably I just need to phrase my search query differently. [Edit] Thanks PEZ for the answer - more details including use of the `.pyw` extension in [Python Programming on Win32 chapter 20](http://www.onlamp.com/pub/a/python/excerpts/chpt20/index.html?page=2)
I think you should be able to run your app with pythonw.exe.
How can I hide the console window in a PyQt app running on Windows?
466,203
12
2009-01-21T17:25:38Z
466,279
14
2009-01-21T17:39:19Z
[ "python", "windows", "command-line", "pyqt" ]
Surely this is possible? I have been hunting through PyQt tutorials and documentation but cannot find the answer to it. Probably I just need to phrase my search query differently. [Edit] Thanks PEZ for the answer - more details including use of the `.pyw` extension in [Python Programming on Win32 chapter 20](http://www.onlamp.com/pub/a/python/excerpts/chpt20/index.html?page=2)
An easy way to do this is to give your script a .pyw extension instead of the usual .py. This has the same effect as PEZ's answer (runs the script using pythonw.exe).
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
466,366
433
2009-01-21T18:07:17Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
Check out [strptime](http://docs.python.org/2/library/time.html#time.strptime) in the [time](http://docs.python.org/2/library/time.html) module. It is the inverse of [strftime](http://docs.python.org/2/library/time.html#time.strftime).
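For instance, a quick sketch of the round trip for the question's format (`strptime` to parse, `strftime` to format back out):

```python
import time

# Parse one of the question's strings into a struct_time.
t = time.strptime('Jun 1 2005 1:33PM', '%b %d %Y %I:%M%p')
print(t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min)

# strftime is the inverse: format the struct_time back into a string.
print(time.strftime('%b %d %Y %I:%M%p', t))
```

Note that `time.strptime` yields a `struct_time`, not a `datetime` object; for database fields you would typically reach for `datetime.strptime` instead.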
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
466,376
1,383
2009-01-21T18:08:52Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
```
from datetime import datetime

date_object = datetime.strptime('Jun 1 2005 1:33PM', '%b %d %Y %I:%M%p')
```

[Link to the Python documentation for strptime](https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime)

[and a link for the strftime format mask](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior)

[This is also a really nice reference for strftime](http://strftime.org/)
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
470,303
399
2009-01-22T18:27:18Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
Use the third party [dateutil](http://labix.org/python-dateutil) library:

```
from dateutil import parser

dt = parser.parse("Aug 28 1999 12:00AM")
```

It can handle most date formats, including the one you need to parse. It's more convenient than strptime as it can guess the correct format most of the time.

It's very useful for writing tests, where readability is more important than performance.

You can install it with:

```
pip install python-dateutil
```
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
7,761,860
17
2011-10-14T00:13:28Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
Something that isn't mentioned here and is useful: adding a suffix to the day. I decoupled the suffix logic so you can use it for any number you like, not just dates.

```
import time

def num_suffix(n):
    ''' Returns the suffix for any given int '''
    suf = ('th', 'st', 'nd', 'rd')
    n = abs(n)  # wise guy
    tens = int(str(n)[-2:])
    units = n % 10
    if tens > 10 and tens < 20:
        return suf[0]  # teens with 'th'
    elif units <= 3:
        return suf[units]
    else:
        return suf[0]  # 'th'

def day_suffix(t):
    ''' Returns the suffix of the given struct_time day '''
    return num_suffix(t.tm_mday)

# Examples
print num_suffix(123)
print num_suffix(3431)
print num_suffix(1234)
print ''
print day_suffix(time.strptime("1 Dec 00", "%d %b %y"))
print day_suffix(time.strptime("2 Nov 01", "%d %b %y"))
print day_suffix(time.strptime("3 Oct 02", "%d %b %y"))
print day_suffix(time.strptime("4 Sep 03", "%d %b %y"))
print day_suffix(time.strptime("13 Nov 90", "%d %b %y"))
print day_suffix(time.strptime("14 Oct 10", "%d %b %y"))
```
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
22,128,786
50
2014-03-02T14:22:44Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
I have put together a project that can convert some really neat expressions. Check out **[timestring](http://github.com/stevepeak/timestring)**.

## Here are some examples below:

[`pip install timestring`](http://github.com/stevepeak/timestring)

```
>>> import timestring
>>> timestring.Range('next week')
<timestring.Range From 03/03/14 00:00:00 to 03/10/14 00:00:00 4496004880>
>>> timestring.Date('monday, aug 15th 2015 at 8:40 pm')
<timestring.Date 2015-08-15 20:40:00 4491909392>
```
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
22,223,725
24
2014-03-06T11:53:05Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
Many timestamps have an implied timezone. To ensure that your code will work in every timezone, you should use UTC internally and attach a timezone each time a foreign object enters the system.

Python 3.2+:

```
>>> datetime.datetime.strptime(
...     "March 5, 2014, 20:13:50", "%B %d, %Y, %H:%M:%S"
... ).replace(tzinfo=datetime.timezone(datetime.timedelta(hours=-3)))
```
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
27,401,685
14
2014-12-10T13:00:49Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
Your string representation of the datetime is `Jun 1 2005 1:33PM`, which is equal to `%b %d %Y %I:%M%p`:

> %b Month as locale’s abbreviated name (Jun)
>
> %d Day of the month as a zero-padded decimal number (1)
>
> %Y Year with century as a decimal number (2005)
>
> %I Hour (12-hour clock) as a zero-padded decimal number (01)
>
> %M Minute as a zero-padded decimal number (33)
>
> %p Locale’s equivalent of either AM or PM (PM)

```
>>> dates = []
>>> dates.append('Jun 1 2005 1:33PM')
>>> dates.append('Aug 28 1999 12:00AM')
>>> from datetime import datetime
>>> for d in dates:
...     date = datetime.strptime(d, '%b %d %Y %I:%M%p')
...     print type(date)
...     print date
...
```

Output

```
<type 'datetime.datetime'>
2005-06-01 13:33:00
<type 'datetime.datetime'>
1999-08-28 00:00:00
```
Converting string into datetime
466,345
826
2009-01-21T18:00:29Z
34,377,575
7
2015-12-20T03:03:25Z
[ "python", "datetime" ]
Short and simple. I've got a huge list of date-times like this as strings:

```
Jun 1 2005 1:33PM
Aug 28 1999 12:00AM
```

I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated.

Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
Here is a solution using Pandas to convert dates formatted as strings into datetime.date objects.

```
import pandas as pd

dates = ['2015-12-25', '2015-12-26']
>>> [d.date() for d in pd.to_datetime(dates)]
[datetime.date(2015, 12, 25), datetime.date(2015, 12, 26)]
```

And here is how to convert the OP's original date-time examples:

```
datetimes = ['Jun 1 2005 1:33PM', 'Aug 28 1999 12:00AM']
>>> pd.to_datetime(datetimes).to_pydatetime().tolist()
[datetime.datetime(2005, 6, 1, 13, 33), datetime.datetime(1999, 8, 28, 0, 0)]
```

There are many options for converting from the strings to Pandas Timestamps using `to_datetime`, so check the [docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) if you need anything special.

Likewise, Timestamps have many [properties and methods](http://pandas.pydata.org/pandas-docs/stable/api.html#datetimelike-properties) that can be accessed in addition to `.date`
Why do I have to specify my own class when using super(), and is there a way to get around it?
466,611
12
2009-01-21T19:20:58Z
466,638
7
2009-01-21T19:28:33Z
[ "python", "multiple-inheritance" ]
When using Python's `super()` to do method chaining, you have to explicitly specify your own class, for example:

```
class MyDecorator(Decorator):
    def decorate(self):
        super(MyDecorator, self).decorate()
```

I have to specify the name of my class `MyDecorator` as an argument to `super()`. This is not DRY. When I rename my class now I will have to rename it twice. Why is this implemented this way? And is there a way to weasel out of having to write the name of the class twice (or more)?
The BDFL agrees. See [Pep 367 - New Super](http://www.python.org/dev/peps/pep-0367/) for 2.6 and [PEP 3135 - New Super](http://www.python.org/dev/peps/pep-3135/) for 3.0.
Why do I have to specify my own class when using super(), and is there a way to get around it?
466,611
12
2009-01-21T19:20:58Z
466,647
10
2009-01-21T19:31:04Z
[ "python", "multiple-inheritance" ]
When using Python's `super()` to do method chaining, you have to explicitly specify your own class, for example:

```
class MyDecorator(Decorator):
    def decorate(self):
        super(MyDecorator, self).decorate()
```

I have to specify the name of my class `MyDecorator` as an argument to `super()`. This is not DRY. When I rename my class now I will have to rename it twice. Why is this implemented this way? And is there a way to weasel out of having to write the name of the class twice (or more)?
Your wishes come true: Just use python 3.0. In it you just use `super()` and it does `super(ThisClass, self)`.

Documentation [here](http://docs.python.org/3.0/library/functions.html#super). Code sample from the documentation:

```
class C(B):
    def method(self, arg):
        super().method(arg)
        # This does the same thing as: super(C, self).method(arg)
```
How can I return system information in Python?
466,684
23
2009-01-21T19:40:34Z
467,291
16
2009-01-21T22:25:02Z
[ "python", "operating-system" ]
Using Python, how can information such as CPU usage, memory usage (free, used, etc), process count, etc be returned in a generic manner so that the same code can be run on Linux, Windows, BSD, etc? Alternatively, how could this information be returned on all the above systems with the code specific to that OS being run only if that OS is indeed the operating environment?
Regarding cross-platform: your best bet is probably to write platform-specific code, and then import it conditionally. e.g.

```
import sys

if sys.platform == 'win32':
    import win32_sysinfo as sysinfo
elif sys.platform == 'darwin':
    import mac_sysinfo as sysinfo
elif 'linux' in sys.platform:
    import linux_sysinfo as sysinfo
#etc

print 'Memory available:', sysinfo.memory_available()
```

For specific resources, as Anthony points out you can access `/proc` under linux. For Windows, you could have a poke around at the [Microsoft Script Repository](http://www.microsoft.com/technet/scriptcenter/scripts/python/default.mspx?mfr=true). I'm not sure where to get that kind of information on Macs, but I can think of a great website where you could ask :-)
Python piping on Windows: Why does this not work?
466,801
10
2009-01-21T20:20:05Z
466,849
23
2009-01-21T20:30:27Z
[ "python", "windows", "piping" ]
I'm trying something like this

Output.py

```
print "Hello"
```

Input.py

```
greeting = raw_input("Give me the greeting. ")
print "The greeting is:", greeting
```

At the cmd line

```
Output.py | Input.py
```

But it returns an *EOFError*. Can someone tell me what I am doing wrong? Thanks for your help.

**EDIT**

Patrick Harrington's [solution](http://stackoverflow.com/questions/466801/python-piping-on-windows-why-does-this-not-work#466851) works but I don't know why...
I tested this on my Windows machine and it works if you specify the Python exe:

```
C:\>C:\Python25\python.exe output.py | C:\Python25\python.exe input.py
Give me the greeting. The greeting is: hello
```

But I get an EOFError also if running the commands directly as:

```
output.py | input.py
```

I'm not sure exactly why that is, I'm still looking into this one but at least this should provide you with a workaround for now. It may have something to do with the way the file handler is invoked for .py files.

**UPDATE**: well, what do you know. Looks like this is actually a bug in Windows where stdin/stdout redirection may not work properly when started from a file association. So the workaround is as noted by myself and Patrick, you need to specify "python" will be running input.py, otherwise it will not redirect stdout from output.py to the stdin for input.py correctly.

*Reference*:

**<http://mail.python.org/pipermail/python-bugs-list/2004-August/024923.html>**

**<http://support.microsoft.com/default.aspx?kbid=321788>**

**UPDATE 2**: To change this behavior and make Windows pipes work as expected for stdin/stdout redirection, you can add this value to the registry (tested on my box and verified this works as desired).

> 1. Start Registry Editor.
> 2. Locate and then click the following key in the registry:
>
>    **HKEY\_LOCAL\_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer**
> 3. On the Edit menu, click Add Value, and then add the following registry value:
>
>    Value name: *InheritConsoleHandles*
>    Data type: *REG\_DWORD*
>    Radix: Decimal
>    Value data: 1
> 4. Quit Registry Editor.
python: list comprehension tactics
467,094
8
2009-01-21T21:27:47Z
467,102
19
2009-01-21T21:30:19Z
[ "python", "list-comprehension" ]
I'm looking to take a string and create a list of strings that build up the original string. e.g.:

```
"asdf" => ["a", "as", "asd", "asdf"]
```

I'm sure there's a "pythonic" way to do it; I think I'm just losing my mind. What's the best way to get this done?
One possibility:

```
>>> st = 'asdf'
>>> [st[:n+1] for n in range(len(st))]
['a', 'as', 'asd', 'asdf']
```
python: list comprehension tactics
467,094
8
2009-01-21T21:27:47Z
467,161
16
2009-01-21T21:47:34Z
[ "python", "list-comprehension" ]
I'm looking to take a string and create a list of strings that build up the original string. e.g.:

```
"asdf" => ["a", "as", "asd", "asdf"]
```

I'm sure there's a "pythonic" way to do it; I think I'm just losing my mind. What's the best way to get this done?
If you're going to be looping over the elements of your "list", you may be better off using a generator rather than list comprehension:

```
>>> text = "I'm a little teapot."
>>> textgen = (text[:i + 1] for i in xrange(len(text)))
>>> textgen
<generator object <genexpr> at 0x0119BDA0>
>>> for item in textgen:
...     if re.search("t$", item):
...         print item

I'm a lit
I'm a litt
I'm a little t
I'm a little teapot
>>>
```

This code never creates a list object, nor does it ever (delta garbage collection) create more than one extra string (in addition to `text`).
Implementing a "rules engine" in Python
467,738
16
2009-01-22T01:11:24Z
468,737
52
2009-01-22T11:22:15Z
[ "python", "parsing", "rules" ]
I'm writing a log collection / analysis application in Python and I need to write a "rules engine" to match and act on log messages. It needs to feature:

* Regular expression matching for the message itself
* Arithmetic comparisons for message severity/priority
* Boolean operators

I envision an example rule would probably be something like:

```
(message ~ "program\\[\d+\\]: message" and severity >= high) or (severity >= critical)
```

I'm thinking about using [PyParsing](http://pyparsing.wikispaces.com/) or similar to actually parse the rules and construct the parse tree. The current (not yet implemented) design I have in mind is to have classes for each rule type, and construct and chain them together according to the parse tree. Then each rule would have a "matches" method that could take a message object and return whether or not it matches the rule. Very quickly, something like:

```
class RegexRule(Rule):
    def __init__(self, regex):
        self.regex = regex

    def match(self, message):
        return self.regex.match(message.contents)


class SeverityRule(Rule):
    def __init__(self, operator, severity):
        self.operator = operator
        self.severity = severity

    def match(self, message):
        if self.operator == ">=":
            return message.severity >= self.severity
        # more conditions here...


class BooleanAndRule(Rule):
    def __init__(self, rule1, rule2):
        self.rule1 = rule1
        self.rule2 = rule2

    def match(self, message):
        return self.rule1.match(message) and self.rule2.match(message)
```

These rule classes would then be chained together according to the parse tree of the message, and the match() method called on the top rule, which would cascade down until all the rules were evaluated. I'm just wondering if this is a reasonable approach, or if my design and ideas are totally out of whack? Unfortunately I never had the chance to take a compiler design course or anything like that in University so I'm pretty much coming up with this stuff of my own accord.

Could someone with some experience in these kinds of things please chime in and evaluate the idea?

**EDIT:** Some good answers so far, here's a bit of clarification. The aim of the program is to collect log messages from servers on the network and store them in the database. Apart from the collection of log messages, the collector will define a set of rules that will either match or ignore messages depending on the conditions and flag an alert if necessary. I can't see the rules being of more than a moderate complexity, and they will be applied in a chain (list) until either a matching alert or ignore rule is hit. However, this part isn't quite as relevant to the question.

As far as the syntax being close to Python syntax, yes that is true, however I think it would be difficult to filter the Python down to the point where the user couldn't inadvertently do some crazy stuff with the rules that was not intended.
Do not invent yet another rules language. Either use Python or use some other existing, already debugged and working language like BPEL. Just write your rules in Python, import them and execute them. Life is simpler, far easier to debug, and you've actually solved the actual log-reading problem without creating another problem. Imagine this scenario. Your program breaks. It's now either the rule parsing, the rule execution, or the rule itself. You must debug all three. If you wrote the rule in Python, it would be the rule, and that would be that. "I think it would be difficult to filter the Python down to the point where the user couldn't inadvertently do some crazy stuff with the rules that was not intended." This is largely the "I want to write a compiler" argument. 1) You're the primary user. You'll write, debug and maintain the rules. Are there really armies of crazy programmers who will be doing crazy things? Really? If there is any potential crazy user, *talk to them*. Teach Them. Don't fight against them by inventing a new language (which you will then have to maintain and debug forever.) 2) It's just log processing. There's no real cost to the *craziness*. No one is going to subvert the world economic system with faulty log handling. Don't make a small task with a few dozen lines of Python onto a 1000 line interpreter to interpret a few dozen lines of some rule language. Just write the few dozen lines of Python. Just write it in Python as quickly and clearly as you can and move on to the next project.
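A sketch of what "just write the rules in Python" can look like — each rule is a plain function taking a message, and the engine is a loop. All names and the message shape here are illustrative, not taken from the question:

```python
import re

# Each rule is an ordinary Python function: message in, bool out.
# No parser, no mini-language -- debugging a rule is debugging Python.
def program_message_high(message):
    return (re.search(r"program\[\d+\]: message", message["contents"]) is not None
            and message["severity"] >= 2)

def any_critical(message):
    return message["severity"] >= 3

RULES = [program_message_high, any_critical]  # applied in order, first match wins

def first_matching_rule(message):
    for rule in RULES:
        if rule(message):
            return rule.__name__
    return None

msg = {"contents": "program[1234]: message", "severity": 2}
print(first_matching_rule(msg))  # -> program_message_high
```

If the rules live in their own module, "loading the rule set" is just an `import`, and a broken rule fails with an ordinary Python traceback pointing at the rule itself.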
Is there a Perl equivalent of Python's re.findall/re.finditer (iterative regex results)?
467,800
6
2009-01-22T01:52:54Z
467,820
13
2009-01-22T02:04:14Z
[ "python", "regex", "perl", "iterator" ]
In Python compiled regex patterns [have a `findall` method](http://docs.python.org/library/re.html#re.findall) that does the following: > Return all non-overlapping matches of > pattern in string, as a list of > strings. The string is scanned > left-to-right, and matches are > returned in the order found. If one or > more groups are present in the > pattern, return a list of groups; this > will be a list of tuples if the > pattern has more than one group. Empty > matches are included in the result > unless they touch the beginning of > another match. What's the canonical way of doing this in Perl? A naive algorithm I can think of is along the lines of "while a search and replace with the empty string is successful, do [suite]". I'm hoping there's a nicer way. :-) Thanks in advance!
Use the `/g` modifier in your match. From the `perlop` manual: > The "`/g`" modifier specifies global pattern matching--that is, matching as many times as possible within the string. How it behaves depends on the context. In list context, it returns a list of the substrings matched by any capturing parentheses in the regular expression. If there are no parentheses, it returns a list of all the matched strings, as if there were parentheses around the whole pattern. > > In scalar context, each execution of "`m//g`" finds the next match, returning true if it matches, and false if there is no further match. The position after the last match can be read or set using the `pos()` function; see "`pos`" in `perlfunc`. A failed match normally resets the search position to the beginning of the string, but you can avoid that by adding the "`/c`" modifier (e.g. "`m//gc`"). Modifying the target string also resets the search position.
Is there a Perl equivalent of Python's re.findall/re.finditer (iterative regex results)?
467,800
6
2009-01-22T01:52:54Z
467,874
7
2009-01-22T02:35:09Z
[ "python", "regex", "perl", "iterator" ]
In Python compiled regex patterns [have a `findall` method](http://docs.python.org/library/re.html#re.findall) that does the following: > Return all non-overlapping matches of > pattern in string, as a list of > strings. The string is scanned > left-to-right, and matches are > returned in the order found. If one or > more groups are present in the > pattern, return a list of groups; this > will be a list of tuples if the > pattern has more than one group. Empty > matches are included in the result > unless they touch the beginning of > another match. What's the canonical way of doing this in Perl? A naive algorithm I can think of is along the lines of "while a search and replace with the empty string is successful, do [suite]". I'm hoping there's a nicer way. :-) Thanks in advance!
To build on Chris' response, it's probably most relevant to encase the `//g` regex in a `while` loop, like: ``` my @matches; while ( 'foobarbaz' =~ m/([aeiou])/g ) { push @matches, $1; } ``` Pasting some quick Python I/O: ``` >>> import re >>> re.findall(r'([aeiou])([nrs])','I had a sandwich for lunch') [('a', 'n'), ('o', 'r'), ('u', 'n')] ``` To get something comparable in Perl, the construct could be something like: ``` my $matches = []; while ( 'I had a sandwich for lunch' =~ m/([aeiou])([nrs])/g ) { push @$matches, [$1,$2]; } ``` But in general, whatever function you're iterating for, you can probably do within the `while` loop itself.
In what situation should the built-in 'operator' module be used in python?
467,920
21
2009-01-22T03:03:46Z
467,937
16
2009-01-22T03:15:07Z
[ "python", "operators" ]
I'm speaking of this module: <http://docs.python.org/library/operator.html> From the article: > The operator module exports a set of > functions implemented in C > corresponding to the intrinsic > operators of Python. For example, > operator.add(x, y) is equivalent to > the expression x+y. The function names > are those used for special class > methods; variants without leading and > trailing \_\_ are also provided for > convenience. I'm not sure I understand the benefit or purpose of this module.
One example is in the use of the `reduce()` function: ``` >>> import operator >>> a = [2, 3, 4, 5] >>> reduce(lambda x, y: x + y, a) 14 >>> reduce(operator.add, a) 14 ```
In what situation should the built-in 'operator' module be used in python?
467,920
21
2009-01-22T03:03:46Z
467,945
22
2009-01-22T03:19:40Z
[ "python", "operators" ]
I'm speaking of this module: <http://docs.python.org/library/operator.html> From the article: > The operator module exports a set of > functions implemented in C > corresponding to the intrinsic > operators of Python. For example, > operator.add(x, y) is equivalent to > the expression x+y. The function names > are those used for special class > methods; variants without leading and > trailing \_\_ are also provided for > convenience. I'm not sure I understand the benefit or purpose of this module.
Possibly the most popular usage is operator.itemgetter. Given a list `lst` of tuples, you can sort by the ith element by: `lst.sort(key=operator.itemgetter(i))` Certainly, you could do the same thing without operator by defining your own key function, but the operator module makes it slightly neater. As to the rest, python allows a functional style of programming, and so it can come up -- for instance, Greg's reduce example. You might argue: "Why do I need `operator.add` when I can just do: `add = lambda x, y: x+y`?" The answers are: 1. `operator.add` is (I think) slightly faster. 2. It makes the code easier to understand for you, or another person later, looking at it. They don't need to look for the definition of add, because they know what the operator module does.
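A quick, self-contained illustration of the `itemgetter` sort described above (the list is made up):

```python
import operator

# Sort a list of tuples by their second element (index 1).
pairs = [("pear", 3), ("apple", 1), ("fig", 2)]
pairs.sort(key=operator.itemgetter(1))
print(pairs)   # [('apple', 1), ('fig', 2), ('pear', 3)]
```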
Tracking file load progress in Python
468,238
3
2009-01-22T06:44:52Z
468,349
7
2009-01-22T07:46:14Z
[ "python", "file", "load", "progress" ]
A lot of modules I use import entire files into memory or trickle a file's contents in while they process it. I'm wondering if there's any way to track this sort of loading progress? Possibly a wrapper class that takes a callback?
I would go about this by determining the size of the file, and then simply dividing the number of bytes read so far by the total size. Like this:

```
import os

def show_progress(file_name, chunk_size=1024):
    fh = open(file_name, "rb")
    total_size = os.path.getsize(file_name)
    total_read = 0
    while True:
        chunk = fh.read(chunk_size)
        if not chunk:
            fh.close()
            break
        total_read += len(chunk)
        print "Progress: %d percent" % (total_read * 100 / total_size)
        yield chunk

for chunk in show_progress("my_file.txt"):
    # Process the chunk
    pass
```

**Edit:** I know it isn't the best code, but I just wanted to show the concept.
Ruby equivalent of Python's "dir"?
468,421
39
2009-01-22T08:32:05Z
468,433
44
2009-01-22T08:39:41Z
[ "python", "ruby", "inspection" ]
In Python we can "dir" a module, like this: ``` >>> import re >>> dir(re) ``` And it lists all functions in the module. Is there a similar way to do this in Ruby?
As far as I know not exactly but you get somewhere with ``` object.methods.sort ```
Ruby equivalent of Python's "dir"?
468,421
39
2009-01-22T08:32:05Z
704,557
15
2009-04-01T08:20:38Z
[ "python", "ruby", "inspection" ]
In Python we can "dir" a module, like this: ``` >>> import re >>> dir(re) ``` And it lists all functions in the module. Is there a similar way to do this in Ruby?
I like to have this in my .irbrc: ``` class Object def local_methods (methods - Object.instance_methods).sort end end ``` So when I'm in irb: ``` >> Time.now.local_methods => ["+", "-", "<", "<=", "<=>", ">", ">=", "_dump", "asctime", "between?", "ctime", "day", "dst?", "getgm", "getlocal", "getutc", "gmt?", "gmt_offset", "gmtime", "gmtoff", "hour", "isdst", "localtime", "mday", "min", "mon", "month", "sec", "strftime", "succ", "to_f", "to_i", "tv_sec", "tv_usec", "usec", "utc", "utc?", "utc_offset", "wday", "yday", "year", "zone"] ``` Or even cuter - with grep: ``` >> Time.now.local_methods.grep /str/ => ["strftime"] ```
How can I listen for 'usb device inserted' events in Linux, in Python?
469,243
28
2009-01-22T14:09:59Z
469,433
7
2009-01-22T14:59:36Z
[ "python", "linux", "usb" ]
I'd like to write a Python script for Amarok in Linux to automatically copy the stackoverflow podcast to my player. When I plug in the player, it would mount the drive, copy any pending podcasts, and eject the player. How can I listen for the "plugged in" event? I have looked through hald but couldn't find a good example.
I haven't tried writing such a program myself, however I've just looked at the following two links (thanks Google!), which I think will be of help: * [dbus-python tutorial](http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.html) (which talks about how to use Python to access D-Bus) * [HAL 0.5.10 Specification](http://www.marcuscom.com/hal-spec/hal-spec.html#interfaces) (which talks about how HAL publishes events to D-Bus) In particular, read about the `org.freedesktop.Hal.Manager` interface, and its `DeviceAdded` and `DeviceRemoved` events. :-) Hope this helps!
How can I listen for 'usb device inserted' events in Linux, in Python?
469,243
28
2009-01-22T14:09:59Z
471,099
45
2009-01-22T22:27:58Z
[ "python", "linux", "usb" ]
I'd like to write a Python script for Amarok in Linux to automatically copy the stackoverflow podcast to my player. When I plug in the player, it would mount the drive, copy any pending podcasts, and eject the player. How can I listen for the "plugged in" event? I have looked through hald but couldn't find a good example.
**Update**: As said in comments, Hal is not supported in recent distributions; the standard now is udev. Here is a small example that makes use of the glib loop and **udev** (I keep the Hal version for historical reasons). This is basically the [example in the pyudev documentation](http://pyudev.readthedocs.org/en/latest/api/pyudev.glib.html#pyudev.glib.MonitorObserver), adapted to work with older versions, and with the glib loop. Notice that the filter should be customized for your specific needs:

```
import glib

from pyudev import Context, Monitor

try:
    from pyudev.glib import MonitorObserver

    def device_event(observer, device):
        print 'event {0} on device {1}'.format(device.action, device)
except:
    from pyudev.glib import GUDevMonitorObserver as MonitorObserver

    def device_event(observer, action, device):
        print 'event {0} on device {1}'.format(action, device)

context = Context()
monitor = Monitor.from_netlink(context)

monitor.filter_by(subsystem='usb')
observer = MonitorObserver(monitor)

observer.connect('device-event', device_event)
monitor.start()

glib.MainLoop().run()
```

***Old version with Hal and d-bus:***

You can use D-Bus bindings and listen to `DeviceAdded` and `DeviceRemoved` signals. You will have to check the capabilities of the added device in order to select the storage devices only.

Here is a small example; you can remove the comments and try it.

```
import dbus
import gobject

class DeviceAddedListener:
    def __init__(self):
```

You need to connect to Hal Manager using the System Bus.

```
        self.bus = dbus.SystemBus()
        self.hal_manager_obj = self.bus.get_object(
            "org.freedesktop.Hal",
            "/org/freedesktop/Hal/Manager")
        self.hal_manager = dbus.Interface(self.hal_manager_obj,
                                          "org.freedesktop.Hal.Manager")
```

And you need to connect a listener to the signals you are interested in, in this case `DeviceAdded`.

```
        self.hal_manager.connect_to_signal("DeviceAdded", self._filter)
```

I'm using a filter based on capabilities.
It will accept any `volume` and will call `do_something` with it. You can read the Hal documentation to find the queries most suitable for your needs, or more information about the properties of the Hal devices.

```
    def _filter(self, udi):
        device_obj = self.bus.get_object("org.freedesktop.Hal", udi)
        device = dbus.Interface(device_obj, "org.freedesktop.Hal.Device")

        if device.QueryCapability("volume"):
            return self.do_something(device)
```

Example function that shows some information about the volume:

```
    def do_something(self, volume):
        device_file = volume.GetProperty("block.device")
        label = volume.GetProperty("volume.label")
        fstype = volume.GetProperty("volume.fstype")
        mounted = volume.GetProperty("volume.is_mounted")
        mount_point = volume.GetProperty("volume.mount_point")
        try:
            size = volume.GetProperty("volume.size")
        except:
            size = 0

        print "New storage device detected:"
        print "  device_file: %s" % device_file
        print "  label: %s" % label
        print "  fstype: %s" % fstype
        if mounted:
            print "  mount_point: %s" % mount_point
        else:
            print "  not mounted"
        print "  size: %s (%.2fGB)" % (size, float(size) / 1024**3)

if __name__ == '__main__':
    from dbus.mainloop.glib import DBusGMainLoop
    DBusGMainLoop(set_as_default=True)
    loop = gobject.MainLoop()
    DeviceAddedListener()
    loop.run()
```
problem using an instance in a with_statement
469,950
4
2009-01-22T16:58:42Z
469,992
12
2009-01-22T17:08:38Z
[ "python", "with-statement" ]
I've recently started to learn Python, and I've reached the **with** statement. I've tried to use it with a class instance, but I think I'm doing something wrong. Here is the code:

```
from __future__ import with_statement
import pdb

class Geo:
    def __init__(self,text):
        self.text = text

    def __enter__(self):
        print "entering"

    def __exit__(self,exception_type,exception_value,exception_traceback):
        print "exiting"

    def ok(self):
        print self.text

    def __get(self):
        return self.text

with Geo("line") as g:
    g.ok()
```

The thing is that when the interpreter reaches the **ok** method inside the with statement, the following exception is raised:

```
Traceback (most recent call last):
  File "dec.py", line 23, in ?
    g.ok()
AttributeError: 'NoneType' object has no attribute 'ok'
```

Why does the g object have the type NoneType? How can I use an instance with the **with** statement?
Your `__enter__` method needs to return the object that should be used for the "`as g`" part of the with statement. See the [documentation](http://docs.python.org/reference/compound_stmts.html#with), where it states: * If a target was included in the with statement, the return value from `__enter__()` is assigned to it. Currently, it has no return statement, so g gets bound to `None` (the default return value)
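For reference, here is the questioner's class with the one-line fix applied; only the `return self` is new:

```python
class Geo:
    def __init__(self, text):
        self.text = text

    def __enter__(self):
        print("entering")
        return self          # the fix: this value is bound to `g`

    def __exit__(self, exc_type, exc_value, exc_tb):
        print("exiting")

    def ok(self):
        print(self.text)

with Geo("line") as g:
    g.ok()   # now prints "line" instead of raising AttributeError
```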
Why does 1+++2 = 3 in python?
470,139
16
2009-01-22T17:41:44Z
470,156
47
2009-01-22T17:46:08Z
[ "python", "evaluation" ]
I come from a C background and just started learning Python. While trying some programs, I noticed that Python evaluates the expression 1+++2 as 3, no matter how many '+' signs I put in between. Also, 1--2 prints 3 and 1---2 prints -1. Can anyone explain this behavior? Regards, Sunil
Your expression is the same as: ``` 1+(+(+2)) ``` Any numeric expression can be preceded by `-` to make it negative, or `+` to do nothing (the option is present for symmetry). With negative signs: ``` 1-(-(2)) = 1-(-2) = 1+2 = 3 ``` and ``` 1-(-(-2)) = 1-(2) = -1 ``` I see you clarified your question to say that you come from a C background. In Python, there are no increment operators like `++` and `--` in C, which was probably the source of your confusion. To increment or decrement a variable `i` or `j` in Python use this style: ``` i += 1 j -= 1 ```
Why does 1+++2 = 3 in python?
470,139
16
2009-01-22T17:41:44Z
470,162
11
2009-01-22T17:47:04Z
[ "python", "evaluation" ]
I come from a C background and just started learning Python. While trying some programs, I noticed that Python evaluates the expression 1+++2 as 3, no matter how many '+' signs I put in between. Also, 1--2 prints 3 and 1---2 prints -1. Can anyone explain this behavior? Regards, Sunil
The extra +'s are not incrementors (like ++a or a++ in C++). They are just showing that the number is positive. There is no such ++ operator. There is a unary + operator and a unary - operator though. The unary + operator has no effect on its argument. The unary - operator negates its operand, i.e. multiplies it by -1. ``` +1 ``` -> 1 ``` ++1 ``` -> 1 This is the same as +(+(1)) ``` 1+++2 ``` -> 3 Because it's the same as 1 + (+(+2)) Likewise you can do --1 to mean - (-1) which is +1. ``` --1 ``` -> 1 For completeness there is no \* unary operator. So \*1 is an error. But there is a \*\* operator, which is the power operator; it takes 2 arguments. ``` 2**3 ``` -> 8
Jython and python modules
471,000
18
2009-01-22T21:57:46Z
471,143
7
2009-01-22T22:39:59Z
[ "java", "python", "interop", "jython" ]
I've just started using the `PythonInterpreter` from within my Java classes, and it works great! However, if I try to include python modules (`re`, `HTMLParser`, etc.), I'm receiving the following exception (for `re`): ``` Exception in thread "main" Traceback (innermost last): File "", line 1, in ? ImportError: no module named re ``` How could I make the classes from the jython jar "see" the modules python has available?
According to the [FAQ](http://www.jython.org/Project/userfaq.html#jython-modules): > ## 4.1 What parts of the Python library are supported? > > The good news is that Jython now supports a large majority of the standard Python library. The bad news is that this has moved so rapidly, it's hard to keep the documentation up to date. > > Built-in modules (e.g. those that are written in C for CPython) are a different story. These would have to be ported to Java, or implemented with a JNI bridge in order to be used by Jython. Some built-in modules have been ported to JPython, most notably cStringIO, cPickle, struct, and binascii. It is unlikely that JNI modules will be included in Jython proper though. > > If you want to use a standard Python module, just try importing it. If that works, you're probably all set. You can also do a dir() on the modules to check the list of functions it implements. > > If there is some standard Python module that you have a real need for that doesn't work with Jython yet, please send us mail. In other words, you *can* directly use Python modules from Jython, unless you're trying to use built-in modules, in which case you're stuck with whatever has been ported to Jython.
Jython and python modules
471,000
18
2009-01-22T21:57:46Z
483,165
14
2009-01-27T12:09:48Z
[ "java", "python", "interop", "jython" ]
I've just started using the `PythonInterpreter` from within my Java classes, and it works great! However, if I try to include python modules (`re`, `HTMLParser`, etc.), I'm receiving the following exception (for `re`): ``` Exception in thread "main" Traceback (innermost last): File "", line 1, in ? ImportError: no module named re ``` How could I make the classes from the jython jar "see" the modules python has available?
If you embed Jython and want to use some Python modules somewhere, you can set the path (sys.path) in your Java code:

```
public void init() {
    interp = new PythonInterpreter(null, new PySystemState());

    PySystemState sys = Py.getSystemState();
    sys.path.append(new PyString(rootPath));
    sys.path.append(new PyString(modulesDir));
}
```

Py is in org.python.core. rootPath and modulesDir are whatever you want! Let rootPath point to where you located the standard Jython lib. Have a look at src/org/python/util/PyServlet.java in the Jython source code for an example.
Why compile Python code?
471,191
160
2009-01-22T22:57:34Z
471,217
7
2009-01-22T23:02:39Z
[ "python", "compilation" ]
Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something? I also notice that some files in my application get compiled into .pyc while others do not, why is this?
There's certainly a performance difference when running a compiled script. If you run normal `.py` scripts, the interpreter compiles them each time they run, and this takes time. On modern machines this is hardly noticeable, but as the script grows it may become more of an issue.
Why compile Python code?
471,191
160
2009-01-22T22:57:34Z
471,220
8
2009-01-22T23:03:26Z
[ "python", "compilation" ]
Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something? I also notice that some files in my application get compiled into .pyc while others do not, why is this?
There is a performance increase in running compiled Python. However, when you run a .py file as an imported module, Python will compile and store it, and as long as the .py file does not change it will always use the compiled version. With any interpreted language, when the file is used the process looks something like this:

1. File is processed by the interpreter.
2. File is compiled
3. Compiled code is executed.

Obviously, by using pre-compiled code you can eliminate step 2; this applies to Python, PHP and others. Here's an interesting blog post explaining the differences <http://julipedia.blogspot.com/2004/07/compiled-vs-interpreted-languages.html> And here's an entry that explains the Python compile process <http://effbot.org/zone/python-compile.htm>
Why compile Python code?
471,191
160
2009-01-22T22:57:34Z
471,227
175
2009-01-22T23:06:13Z
[ "python", "compilation" ]
Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something? I also notice that some files in my application get compiled into .pyc while others do not, why is this?
It's compiled to bytecode which can be used much, much, much faster. The reason some files aren't compiled is that the main script, which you invoke with `python main.py` is recompiled every time you run the script. All imported scripts will be compiled and stored on the disk. *Important addition by [Ben Blank](http://stackoverflow.com/users/46387/ben-blank):* > It's worth noting that while running a > compiled script has a faster *startup* > time (as it doesn't need to be > compiled), it doesn't *run* any > faster.
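As an aside, you can inspect the bytecode in question with the standard `dis` module (the exact opcodes shown vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# Disassemble the function to print the bytecode instructions the
# interpreter actually executes.
dis.dis(add)
```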
Why compile Python code?
471,191
160
2009-01-22T22:57:34Z
471,242
7
2009-01-22T23:09:53Z
[ "python", "compilation" ]
Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something? I also notice that some files in my application get compiled into .pyc while others do not, why is this?
As already mentioned, you can get a performance increase from having your Python code compiled into bytecode. This is usually handled by Python itself, for imported scripts only. Another reason you might want to compile your Python code could be to protect your intellectual property from being copied and/or modified. You can read more about this in the [Python documentation](http://docs.python.org/tutorial/modules.html#compiled-python-files).
Why compile Python code?
471,191
160
2009-01-22T22:57:34Z
471,252
57
2009-01-22T23:14:33Z
[ "python", "compilation" ]
Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something? I also notice that some files in my application get compiled into .pyc while others do not, why is this?
The .pyc file is Python that has already been compiled to byte-code. Python automatically runs a .pyc file if it finds one with the same name as a .py file you invoke. "An Introduction to Python" [says](http://www.network-theory.co.uk/docs/pytut/CompiledPythonfiles.html) this about compiled Python files: > A program doesn't run any faster when > it is read from a ‘.pyc’ or ‘.pyo’ > file than when it is read from a ‘.py’ > file; the only thing that's faster > about ‘.pyc’ or ‘.pyo’ files is the > speed with which they are loaded. The advantage of running a .pyc file is that Python doesn't have to incur the overhead of compiling it before running it. Since Python would compile to byte-code before running a .py file anyway, there shouldn't be any performance improvement aside from that. How much improvement can you get from using compiled .pyc files? That depends on what the script does. For a very brief script that simply prints "Hello World," compiling could constitute a large percentage of the total startup-and-run time. But the cost of compiling a script relative to the total run time diminishes for longer-running scripts. The script you name on the command-line is never saved to a .pyc file. Only modules loaded by that "main" script are saved in that way.
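The compile step can also be triggered explicitly with the standard `py_compile` module; a small, self-contained demonstration (the file names are arbitrary):

```python
import os
import py_compile
import tempfile

# Write a tiny module, compile it by hand, and confirm that the
# bytecode file appears on disk.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "hello.py")
with open(src, "w") as f:
    f.write("GREETING = 'Hello World'\n")

cfile = src + "c"                  # hello.pyc
py_compile.compile(src, cfile)
print(os.path.exists(cfile))       # True: bytecode was written
```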
Why compile Python code?
471,191
160
2009-01-22T22:57:34Z
23,256,357
20
2014-04-23T22:26:10Z
[ "python", "compilation" ]
Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something? I also notice that some files in my application get compiled into .pyc while others do not, why is this?
**Pluses:**

First: mild, defeatable obfuscation.

Second: if compilation results in a significantly smaller file, you will get faster load times. Nice for the web.

Third: Python can skip the compilation step. Faster at initial load. Nice for the CPU and the web.

Fourth: the more you comment, the smaller the `.pyc` or `.pyo` file will be in comparison to the source `.py` file.

Fifth: an end user with only a `.pyc` or `.pyo` file in hand is much less likely to present you with a bug they caused by an un-reverted change they forgot to tell you about.

Sixth: if you're aiming at an embedded system, obtaining a smaller size file to embed may represent a significant plus, and the architecture is stable so drawback one, detailed below, does not come into play.

**Top level compilation**

It is useful to know that you can compile a top level Python source file into a `.pyc` file this way:

```
python -m py_compile myscript.py
```

This removes comments. It leaves `docstrings` intact. If you'd like to get rid of the `docstrings` as well (you might want to seriously think about why you're doing that) then compile this way instead...

```
python -OO -m py_compile myscript.py
```

...and you'll get a `.pyo` file instead of a `.pyc` file; equally distributable in terms of the code's essential functionality, but smaller by the size of the stripped-out `docstrings` (and less easily understood for subsequent employment if it had decent `docstrings` in the first place). But see drawback three, below.

Note that Python uses the `.py` file's date, if it is present, to decide whether it should execute the `.py` file as opposed to the `.pyc` or `.pyo` file --- so edit your .py file, and the `.pyc` or `.pyo` is obsolete and whatever benefits you gained are lost. You need to recompile it in order to get the `.pyc` or `.pyo` benefits back again, such as they may be.
**Drawbacks:** First: There's a "magic cookie" in `.pyc` and `.pyo` files that indicates the system architecture that the python file was compiled in. If you distribute one of these files into an environment of a different type, it will break. If you distribute the `.pyc` or `.pyo` without the associated `.py` to recompile or `touch` so it supersedes the `.pyc` or `.pyo`, the end user can't fix it, either. Second: If `docstrings` are skipped with the use of the `-OO` command line option as described above, no one will be able to get at that information, which can make use of the code more difficult (or impossible.) Third: Python's `-OO` option also implements some optimizations as per the `-O` command line option; this may result in changes in operation. Known optimizations are: * `sys.flags.optimize` = 1 * `assert` statements are skipped * `__debug__` = False Fourth: if you had intentionally made your python script executable with something on the order of `#!/usr/bin/python` on the first line, this is stripped out in `.pyc` and `.pyo` files and that functionality is lost. Fifth: somewhat obvious, but if you compile your code, not only can its use be impacted, but the potential for others to learn from your work is reduced, often severely.
Is there a Term::ANSIScreen equivalent for Python?
471,463
3
2009-01-23T00:43:24Z
1,977,010
8
2009-12-29T21:18:25Z
[ "python", "perl", "ansi" ]
Perl has the excellent module `Term::ANSIScreen` for doing all sorts of fancy cursor movement and terminal color control. I'd like to reimplement a program that's currently in Perl in Python instead, but the terminal ANSI colors are key to its function. Is anyone aware of an equivalent?
If you only need colors, you may want to borrow the implementation from Pygments. IMO it's much cleaner than the one from ActiveState: <http://dev.pocoo.org/hg/pygments-main/file/b2deea5b5030/pygments/console.py>
Any way to override the and operator in Python?
471,546
18
2009-01-23T01:36:42Z
471,561
14
2009-01-23T01:42:34Z
[ "python" ]
I tried overriding `__and__`, but that is for the & operator, not *and* - the one that I want. Can I override *and*?
You cannot override the `and`, `or`, and `not` boolean operators.
Any way to override the and operator in Python?
471,546
18
2009-01-23T01:36:42Z
471,567
29
2009-01-23T01:44:25Z
[ "python" ]
I tried overriding `__and__`, but that is for the & operator, not *and* - the one that I want. Can I override *and*?
No you can't override `and` and `or`. With the behavior that these have in Python (i.e. short-circuiting) they are more like control flow tools than operators and overriding them would be more like overriding `if` than + or -. You *can* influence the truth value of your objects (i.e. whether they evaluate as true or false) by overriding `__nonzero__` (or `__bool__` in Python 3).
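A minimal sketch of that truth-value hook (the `Flag` class is made up for illustration):

```python
class Flag(object):
    """Truthiness delegates to self.value. `and`/`or`/`not` then
    short-circuit on that truth value, even though those operators
    themselves cannot be overridden."""
    def __init__(self, value):
        self.value = value

    def __bool__(self):               # consulted by Python 3
        return bool(self.value)

    __nonzero__ = __bool__            # same hook under Python 2

print(bool(Flag(0)))           # False
print(Flag(0) or "fallback")   # fallback  (`or` returns its second operand)
print(bool(Flag(7)))           # True
```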
Customizing an Admin form in Django while also using autodiscover
471,550
22
2009-01-23T01:38:17Z
471,661
43
2009-01-23T02:22:18Z
[ "python", "django", "forms", "django-admin", "customization" ]
I want to modify a few tiny details of Django's built-in `django.contrib.auth` module. Specifically, I want a different form that makes username an email field (and email an alternate email address. (I'd rather not modify `auth` any more than necessary -- a simple form change *seems* to be all that's needed.) When I use `autodiscover` with a customized `ModelAdmin` for `auth` I wind up conflicting with `auth`'s own admin interface and get an "already registered" error. It looks like I have to create my own admin site, enumerating all of my Models. It's only 18 classes, but it seems like a DRY problem -- every change requires both adding to the Model **and** adding to the customized admin site. Or, should I write my own version of "`autodiscover` with exclusions" to essentially import all the `admin` modules **except** `auth`?
None of the above. Just use admin.site.unregister(). Here's how I recently added filtering Users on is\_active in the admin (**n.b.** is\_active filtering is now on the User model by default in Django core; still works here as an example), all DRY as can be: ``` from django.contrib import admin from django.contrib.auth.admin import UserAdmin from django.contrib.auth.models import User class MyUserAdmin(UserAdmin): list_filter = UserAdmin.list_filter + ('is_active',) admin.site.unregister(User) admin.site.register(User, MyUserAdmin) ```
Pros and cons of IronPython and IronPython Studio
471,712
16
2009-01-23T02:49:32Z
472,312
8
2009-01-23T09:06:35Z
[ "python", "ironpython", "ironpython-studio" ]
We are ready in our company to move everything to Python instead of C#, we are a consulting company and we usually write small projects in C# we don't do huge projects and our work is more based on complex mathematical models not complex software structures. So we believe IronPython is a good platform for us because it provides standard GUI functionality on windows and access to all of .Net libraries. I know Ironpython studio is not complete, and in fact I had a hard time adding my references but I was wondering if someone could list some of the pros and cons of this migration for us, considering Python code is easier to read by our clients and we usually deliver a proof-of-concept prototype instead of a full-functional code, our clients usually go ahead and implement the application themselves
There are a lot of reasons why you might want to switch from C# to Python; I did this myself recently. After a lot of investigating, here are the reasons why I stick to CPython:

* Performance: There are some articles out there stating that there are always cases where IronPython is slower, so consider this if performance is an issue.
* Take the original: many people argue that new features etc. are always integrated in CPython first and you have to wait until they are implemented in IronPython.
* Licensing: Some people argue this is a timebomb: nobody knows how the licensing of IronPython/Mono might change in the near future.
* Extensions: one of the strengths of Python is the thousands of extensions, which are all usable by CPython. As you mentioned mathematical problems: numpy might be a suitable fast package for you, but it might not run as expected under IronPython (although [Ironclad](http://www.resolversystems.com/documentation/index.php/Ironclad) aims to fix that).
* Especially under Windows you have a native GUI toolkit with wxPython, which also looks great under several other platforms, and there are PyQt and a lot of other toolkits. There are nice designers like wxGlade, but the Visual Studio C# designer is easier to use.
* Platform independence (if this is an issue): CPython is ported to really a lot of platforms, whereas IronPython can only be used on the major platforms (I recently read about a developer who was sad that he couldn't get Mono to run under AIX).

IronPython is great work, and if I had a specific .NET library I had to use, IronPython might be the choice; but for general-purpose problems, people seem to suggest using the original CPython, unless Guido changes his mind.
Pros and cons of IronPython and IronPython Studio
471,712
16
2009-01-23T02:49:32Z
472,355
17
2009-01-23T09:29:31Z
[ "python", "ironpython", "ironpython-studio" ]
We are ready in our company to move everything to Python instead of C#. We are a consulting company; we usually write small projects in C#, not huge ones, and our work is based more on complex mathematical models than on complex software structures. So we believe IronPython is a good platform for us, because it provides standard GUI functionality on Windows and access to all of the .NET libraries. I know IronPython Studio is not complete, and in fact I had a hard time adding my references, but I was wondering if someone could list some of the pros and cons of this migration for us, considering that Python code is easier for our clients to read and we usually deliver a proof-of-concept prototype instead of fully functional code; our clients usually go ahead and implement the application themselves
My company, Resolver Systems, develops what is probably the biggest application written in IronPython yet. (It's called Resolver One, and it's a Pythonic spreadsheet). We are also hosting the Ironclad project (to run CPython extensions under IronPython) and that is going well (we plan to release a beta of Resolver One & numpy soon). The reason we chose IronPython was the .NET integration - our clients want 100% integration on Windows and the easiest way to do that right now is .NET. We design our GUI (without behaviour) in Visual Studio, compile it into a DLL and subclass it from IronPython to add behaviour. We have found that IronPython is faster in some cases and slower in others. However, the IronPython team is very responsive: whenever we report a regression they fix it and usually backport it to the bugfix release. If you worry about performance, you can always implement a critical part in C# (we haven't had to do that yet). If you have experience with C#, then IronPython will be natural for you, and easier than C#, especially for prototypes. Regarding IronPython Studio, we don't use it. Each of us has his editor of choice (TextPad, Emacs, Vim & Wing), and everything works fine.
Way to have compiled python files in a separate folder?
471,928
25
2009-01-23T05:02:12Z
471,985
16
2009-01-23T05:26:17Z
[ "python", "file", "compiled" ]
Is it possible to have Python save the `.pyc` files to a separate folder location that is in `sys.path`? ``` /code foo.py foo.pyc bar.py bar.pyc ``` To: ``` /code foo.py bar.py /code_compiled foo.pyc bar.pyc ``` I would like this because I feel it'd be more organized. Thanks for any help you can give me.
There is [PEP 304: Controlling Generation of Bytecode Files](http://www.python.org/dev/peps/pep-0304/). Its status is `Withdrawn` and corresponding [patch](http://bugs.python.org/issue677103) rejected. Therefore there might be no direct way to do it. If you don't need source code then you may just delete `*.py` files. `*.pyc` files can be used as is or packed in an egg.
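For what it's worth (an editor's note, not part of the answer above): later Python versions do expose one small knob for this. `py_compile.compile` accepts a `cfile` argument naming the output path, so you can at least compile bytecode into a separate folder by hand; the import machinery still won't look there automatically. A minimal sketch, with hypothetical directory names:

```python
import os
import py_compile
import tempfile

# Hypothetical layout: sources in one directory, bytecode in another.
src_dir = tempfile.mkdtemp()
out_dir = tempfile.mkdtemp()

src = os.path.join(src_dir, "foo.py")
with open(src, "w") as f:
    f.write("x = 42\n")

# cfile chooses where the compiled file lands.
target = os.path.join(out_dir, "foo.pyc")
py_compile.compile(src, cfile=target)

print(os.path.exists(target))  # True
```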
Way to have compiled python files in a separate folder?
471,928
25
2009-01-23T05:02:12Z
16,476,434
7
2013-05-10T06:30:04Z
[ "python", "file", "compiled" ]
Is it possible to have Python save the `.pyc` files to a separate folder location that is in `sys.path`? ``` /code foo.py foo.pyc bar.py bar.pyc ``` To: ``` /code foo.py bar.py /code_compiled foo.pyc bar.pyc ``` I would like this because I feel it'd be more organized. Thanks for any help you can give me.
*In the dark and ancient days of 2003, PEP 304 came forth to challenge this problem. Its patch was found wanting. Environment variable platform dependencies and version skews ripped it to shreds and left its bits scattered across the wastelands.* *After years of suffering, a new challenger rose in the last days of 2009. Barry Warsaw summoned PEP 3147 and sent it to do battle, wielding a simple weapon with skill. The PEP crushed the cluttering PYC files, silenced the warring Unladen Swallow and CPython interpreters, each trying to argue its PYC file should be triumphant, and allowed Python to rest easy with its dead ghosts occasionally running in the dead of night. PEP 3147 was found worthy by the dictator and was knighted into the official roles in the days of 3.2.* As of 3.2, Python stores a module's PYC files in `__pycache__` under the module's directory. Each PYC file contains the name and version of the interpreter, e.g., `__pycache__/foo.cpython-33.pyc`. You might also have a `__pycache__/foo.cpython-32.pyc` compiled by an earlier version of Python. The right magic happens: the correct one is used and recompiled if out of sync with the source code. At runtime, look at the module's `mymodule.__cached__` for the pyc filename and parse it with `imp.get_tag()`. See [the What's New section](http://docs.python.org/3/whatsnew/3.2.html#pep-3147-pyc-repository-directories) for more information. TL;DR - Just works in Python 3.2 and above. Poor hacks substitute for versions before that.
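On Python 3.4 and later, the source-to-bytecode mapping described above can be inspected from code via `importlib.util` (the `imp.get_tag()` call mentioned in the answer is the older spelling); a quick sketch:

```python
import importlib.util

# Maps a source path to its PEP 3147 bytecode path; the tag in the
# filename embeds the interpreter name and version.
pyc_path = importlib.util.cache_from_source("mypkg/foo.py")
print(pyc_path)  # e.g. mypkg/__pycache__/foo.cpython-311.pyc
```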
Usage of __slots__?
472,000
306
2009-01-23T05:37:23Z
472,017
105
2009-01-23T05:50:21Z
[ "python" ]
What is the purpose of [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) in Python — especially with respect to when I would want to use it, and when not?
You would want to use `__slots__` if you are going to instantiate a lot (hundreds, thousands) of objects of the same class. `__slots__` only exists as a memory optimization tool. It's highly discouraged to use `__slots__` for constraining attribute creation, and in general you want to avoid it because it breaks pickle, along with some other introspection features of python.
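To make the memory point concrete (an illustration added here, not from the original answer): slotted instances simply never grow a per-instance `__dict__`:

```python
import sys

class Plain(object):
    pass

class Slotted(object):
    __slots__ = ('x', 'y')

p, s = Plain(), Slotted()
p.x = s.x = 1

# The slotted instance has no per-instance __dict__ at all, which is
# exactly where the memory saving comes from.
print(hasattr(p, '__dict__'), hasattr(s, '__dict__'))
print(sys.getsizeof(p), sys.getsizeof(s))
```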
Usage of __slots__?
472,000
306
2009-01-23T05:37:23Z
472,024
243
2009-01-23T05:54:46Z
[ "python" ]
What is the purpose of [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) in Python — especially with respect to when I would want to use it, and when not?
Quoting [Jacob Hallen](http://code.activestate.com/lists/python-list/531365/): > The proper use of `__slots__` is to save space in objects. Instead of having > a dynamic dict that allows adding attributes to objects at anytime, > there is a static structure which does not allow additions after creation. > This saves the overhead of one dict for every object that uses slots. While this is sometimes a useful optimization, it would be completely > unnecessary if the Python interpreter was dynamic enough so that it would > only require the dict when there actually were additions to the object. > > Unfortunately there is a side effect to slots. They change the behavior of > the objects that have slots in a way that can be abused by control freaks > and static typing weenies. This is bad, because the control freaks should > be abusing the metaclasses and the static typing weenies should be abusing > decorators, since in Python, there should be only one obvious way of doing something. > > Making CPython smart enough to handle saving space without `__slots__` is a major > undertaking, which is probably why it is not on the list of changes for P3k (yet).
Usage of __slots__?
472,000
306
2009-01-23T05:37:23Z
472,899
48
2009-01-23T13:38:21Z
[ "python" ]
What is the purpose of [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) in Python — especially with respect to when I would want to use it, and when not?
Each Python object has a `__dict__` attribute, which is a dictionary containing all of its other attributes; e.g., when you type `self.attr`, Python is actually doing `self.__dict__['attr']`. As you can imagine, using a dictionary to store attributes takes some extra space and time to access. However, when you use `__slots__`, any object created for that class won't have a `__dict__` attribute. Instead, all attribute access is done directly via pointers. So if you want a C-style structure rather than a full-fledged class, you can use `__slots__` to compact the size of the objects and reduce attribute access time. A good example is a Point class containing the attributes x and y. If you are going to have a lot of points, you can try using `__slots__` in order to conserve some memory.
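The Point class suggested above might look like this (a sketch; the attribute names follow the answer's example):

```python
class Point(object):
    __slots__ = ('x', 'y')   # fixed attribute set, no per-instance __dict__

    def __init__(self, x, y):
        self.x = x
        self.y = y

pt = Point(1.0, 2.0)
print(pt.x, pt.y)

# Anything outside __slots__ is rejected at assignment time.
try:
    pt.z = 3.0
except AttributeError as exc:
    print("rejected:", exc)
```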
Usage of __slots__?
472,000
306
2009-01-23T05:37:23Z
13,547,906
10
2012-11-25T03:06:11Z
[ "python" ]
What is the purpose of [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) in Python — especially with respect to when I would want to use it, and when not?
Slots are very useful for library calls to eliminate the "named method dispatch" when making function calls. This is mentioned in the SWIG [documentation](http://www.swig.org/Doc2.0/Python.html#Python_builtin_types). For high-performance libraries that want to reduce function overhead for commonly called functions, using slots is much faster. Now, this may not be directly related to the OP's question; it is related more to building extensions than to using the `__slots__` syntax on an object. But it does help complete the picture of the usage of slots and some of the reasoning behind them.
Usage of __slots__?
472,000
306
2009-01-23T05:37:23Z
28,059,785
119
2015-01-21T04:46:42Z
[ "python" ]
What is the purpose of [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) in Python — especially with respect to when I would want to use it, and when not?
> # In Python, what is the purpose of `__slots__` and what are the cases one should avoid this? ## TLDR: The special attribute `__slots__` allows you to explicitly state in your code which instance attributes you expect your object instances to have, with the expected results: 1. **faster** attribute access. 2. potential **space savings** in memory. And the biggest caveat for multiple inheritance - multiple "parent classes with nonempty slots" cannot be combined. (Solution? Factor out all but one (or just all) parents' abstraction which they respectively and you collectively will inherit from - giving the abstraction(s) empty slots.) ### Requirements: * To have attributes named in `__slots__` to actually be stored in slots instead of a `__dict__`, a class must inherit from `object`. * To prevent the creation of a `__dict__`, you must inherit from `object` and all classes in the inheritance must declare `__slots__` and none of them can have a `'__dict__'` entry - and they cannot use multiple inheritance. There are a lot of details if you wish to keep reading. ## Why use `__slots__`: Faster attribute access. The creator of Python, Guido van Rossum, [states](http://python-history.blogspot.com/2010/06/inside-story-on-new-style-classes.html) that he actually created `__slots__` for faster attribute access. It is trivial to demonstrate measurably significant faster access: ``` import timeit class Foo(object): __slots__ = 'foo', class Bar(object): pass slotted = Foo() not_slotted = Bar() def get_set_delete_fn(obj): def get_set_delete(): obj.foo = 'foo' obj.foo del obj.foo return get_set_delete ``` and ``` >>> min(timeit.repeat(get_set_delete_fn(slotted))) 0.2846834529991611 >>> min(timeit.repeat(get_set_delete_fn(not_slotted))) 0.3664822799983085 ``` The slotted access is almost 30% faster in Python 3.5 on Ubuntu. ``` >>> 0.3664822799983085 / 0.2846834529991611 1.2873325658284342 ``` In Python 2 on Windows I have measured it about 15% faster. 
## Why use `__slots__`: Memory Savings Another purpose of `__slots__` is to reduce the space in memory that each object instance takes up. [The documentation clearly states the reasons behind this](https://docs.python.org/2/reference/datamodel.html#slots): > By default, instances of both old and new-style classes have a dictionary for attribute storage. This wastes space for objects having very few instance variables. The space consumption can become acute when creating large numbers of instances. > > The default can be overridden by defining `__slots__` in a new-style class definition. The `__slots__` declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because `__dict__` is not created for each instance. [SQLAlchemy attributes](http://docs.sqlalchemy.org/en/rel_1_0/changelog/migration_10.html#significant-improvements-in-structural-memory-use) a lot of memory savings to `__slots__`. To verify this, using the Anaconda distribution of Python 2.7 on Ubuntu Linux, with `guppy.hpy` (aka heapy) and `sys.getsizeof`, the size of a class instance without `__slots__` declared, and nothing else, is 64 bytes. That does *not* include the `__dict__`. Thank you Python for lazy evaluation again, the `__dict__` is apparently not called into existence until it is referenced, but classes without data are usually useless. When called into existence, the `__dict__` attribute is a minimum of 280 bytes additionally. In contrast, a class instance with `__slots__` declared to be `()` (no data) is only 16 bytes, and 56 total bytes with one item in slots, 64 with two. I tested when dicts size up in my particular implementation by enumerating alphabet characters into a dict, and on the sixth item it climbs to 1048, at 22 to 3352, then at 85 to 12568 (rather impractical to put that many attributes on a single class, probably violating the single responsibility principle there.)
``` attrs __slots__ no slots declared + __dict__ none 16 64 (+ 280 if __dict__ referenced) one 56 64 + 280 two 64 64 + 280 six 96 64 + 1048 22 224 64 + 3352 ``` So we see how nicely `__slots__` scale for instances to save us memory, and that is the reason you would want to use `__slots__`. ## Demonstration of `__slots__`: To prevent the creation of a `__dict__`, you must subclass `object`: ``` >>> class Base(object): __slots__ = () >>> b = Base() >>> b.a = 'a' Traceback (most recent call last): File "<pyshell#38>", line 1, in <module> b.a = 'a' AttributeError: 'Base' object has no attribute 'a' ``` Or another class that defines `__slots__` ``` >>> class Child(Base): __slots__ = ('a',) >>> c = Child() >>> c.a = 'a' >>> c.b = 'b' Traceback (most recent call last): File "<pyshell#42>", line 1, in <module> c.b = 'b' AttributeError: 'Child' object has no attribute 'b' ``` To allow `__dict__` creation while subclassing slotted objects, just add `'__dict__'` to the `__slots__`: ``` >>> class SlottedWithDict(Child): __slots__ = ('__dict__', 'b') >>> swd = SlottedWithDict() >>> swd.a = 'a' >>> swd.b = 'b' >>> swd.c = 'c' >>> swd.__dict__ {'c': 'c'} ``` Or you don't even need to declare **slots** in your subclass, and you will still use slots from the parents, but not restrict the creation of a `__dict__`: ``` >>> class NoSlots(Child): pass >>> ns = NoSlots() >>> ns.a = 'a' >>> ns.b = 'b' >>> ns.__dict__ {'b': 'b'} ``` However, `__slots__` may cause problems for multiple inheritance: ``` >>> class BaseA(object): __slots__ = ('a',) >>> class BaseB(object): __slots__ = ('b',) >>> class Child(BaseA, BaseB): __slots__ = () Traceback (most recent call last): File "<pyshell#68>", line 1, in <module> class Child(BaseA, BaseB): __slots__ = () TypeError: Error when calling the metaclass bases multiple bases have instance lay-out conflict ``` If you run into this problem, just remove `__slots__`, and put it back in where you have a lot of instances. 
``` >>> class BaseA(object): __slots__ = () >>> class BaseB(object): __slots__ = () >>> class Child(BaseA, BaseB): __slots__ = ('a', 'b') >>> c = Child >>> c.a = 'a' >>> c.b = 'b' >>> c.c = 'c' >>> c.__dict__ <dictproxy object at 0x10C944B0> >>> c.__dict__['c'] 'c' ``` ### Add `'__dict__'` to `__slots__` to get dynamic assignment: ``` class Foo(object): __slots__ = 'bar', 'baz', '__dict__' ``` and now: ``` >>> foo = Foo() >>> foo.boink = 'boink' ``` So with `'__dict__'` in slots we lose some of the size benefits with the upside of having dynamic assignment and still having slots for the names we do expect. When you inherit from an object that isn't slotted, you get the same sort of semantics when you use `__slots__` - names that are in `__slots__` point to slotted values, while any other values are put in the instance's `__dict__`. Avoiding `__slots__` because you want to be able to add attributes on the fly is actually not a good reason - just add `"__dict__"` to your `__slots__` if this is required. ### Set to empty tuple when subclassing a namedtuple: The namedtuple builtin make immutable instances that are very lightweight (essentially, the size of tuples) but to get the benefits, you need to do it yourself if you subclass them: ``` from collections import namedtuple class MyNT(namedtuple('MyNT', 'bar baz')): """MyNT is an immutable and lightweight object""" __slots__ = () ``` usage: ``` >>> nt = MyNT('bar', 'baz') >>> nt.bar 'bar' >>> nt.baz 'baz' ``` ## Biggest Caveat: Multiple inheritance Even when non-empty slots are the same for multiple parents, they cannot be used together: ``` >>> class Foo(object): __slots__ = 'foo', 'bar' >>> class Bar(object): __slots__ = 'foo', 'bar' # alas, would work if empty, i.e. 
() >>> class Baz(Foo, Bar): pass Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Error when calling the metaclass bases multiple bases have instance lay-out conflict ``` Using an empty `__slots__` in the parent seems to provide the most flexibility, **allowing the child to choose to prevent or allow** (by adding `'__dict__'` to get dynamic assignment, see section above) **the creation of a dict**: ``` >>> class Foo(object): __slots__ = () >>> class Bar(object): __slots__ = () >>> class Baz(Foo, Bar): __slots__ = ('foo', 'bar') >>> b = Baz() >>> b.foo, b.bar = 'foo', 'bar' ``` You don't *have* to have slots - so if you add them, and remove them later, it shouldn't cause any problems. **Going out on a limb here**: If you're composing [mixins](http://stackoverflow.com/questions/860245/mixin-vs-inheritance/27907511#27907511) or using [abstract base classes](http://stackoverflow.com/questions/372042/difference-between-abstract-class-and-interface-in-python/31439126#31439126), which aren't intended to be instantiated, an empty `__slots__` in those parents seems to be the best way to go in terms of flexibility for subclassers. ## Other cases to avoid slots: * Avoid them when you want to perform `__class__` assignment with another class that doesn't have them (and you can't add them). * Avoid them if you want to subclass variable length builtins like long, tuple, or str, and you want to add attributes to them. * Avoid them if you insist on providing default values via class attributes for instance variables. * Avoid them for parent classes in the case of multiple inheritance - you can reinsert them for a child where you have a lot of instances (see previous section for recommendation). 
You may be able to tease out further caveats from the rest of the `__slots__` [documentation](https://docs.python.org/3/reference/datamodel.html#slots), which follows: > **`__slots__`** > > > This class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. If defined in a new-style class, `__slots__` reserves space for the declared variables and prevents the automatic creation of `__dict__` and `__weakref__` for each instance. > > Notes on using `__slots__` > > * When inheriting from a class without `__slots__`, the `__dict__` attribute of that class will always be accessible, so a `__slots__` definition in the subclass is meaningless. > * Without a `__dict__` variable, instances cannot be assigned new variables not listed in the `__slots__` definition. Attempts to assign to an unlisted variable name raises `AttributeError`. If dynamic assignment of new variables is desired, then add `'__dict__'` to the sequence of strings in the `__slots__` declaration. > > *Changed in version 2.3:* Previously, adding `'__dict__'` to the `__slots__` declaration would not enable the assignment of new attributes not specifically listed in the sequence of instance variable names. > * Without a `__weakref__` variable for each instance, classes defining `__slots__` do not support weak references to its instances. If weak reference support is needed, then add `'__weakref__'` to the sequence of strings in the `__slots__` declaration. > > *Changed in version 2.3:* Previously, adding `'__weakref__'` to the `__slots__` declaration would not enable support for weak references. > * `__slots__` are implemented at the class level by creating descriptors ([Implementing Descriptors](https://docs.python.org/2/reference/datamodel.html#implementing-descriptors)) for each variable name. 
As a result, class attributes cannot be used to set default values for instance variables defined by `__slots__`; otherwise, the class attribute would overwrite the descriptor assignment. > * The action of a `__slots__` declaration is limited to the class where it is defined. As a result, subclasses will have a `__dict__` unless they also define `__slots__` (which must only contain names of any additional slots). > * If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined. In the future, a check may be added to prevent this. > * Nonempty `__slots__` does not work for classes derived from “variable-length” built-in types such as long, str and tuple. > * Any non-string iterable may be assigned to `__slots__`. Mappings may also be used; however, in the future, special meaning may be assigned to the values corresponding to each key. > * `__class__` assignment works only if both classes have the same `__slots__`. > > *Changed in version 2.6:* Previously, `__class__` assignment raised an error if either new or old class had `__slots__`. ## Critiques of other answers The current top answers cite outdated information and are quite hand-wavy and miss the mark in some important ways. ### Pickling, `__slots__` doesn't break When pickling a slotted object, you may find it complains with a misleading `TypeError`: ``` >>> pickle.loads(pickle.dumps(f)) TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled ``` This is actually incorrect. This message comes from the oldest protocol, which is the default. You can select the latest protocol with the `-1` argument. In Python 2.7 this would be `2` (which was introduced in 2.3), and in 3.6 it is `4`. 
``` >>> pickle.loads(pickle.dumps(f, -1)) <__main__.Foo object at 0x1129C770> ``` in Python 2.7: ``` >>> pickle.loads(pickle.dumps(f, 2)) <__main__.Foo object at 0x1129C770> ``` in Python 3.6 ``` >>> pickle.loads(pickle.dumps(f, 4)) <__main__.Foo object at 0x1129C770> ``` So I would keep this in mind, as it is a solved problem. ## Critique of the (until Oct 2, 2016) accepted answer The first paragraph is half short explanation, half predictive. Here's the only part that actually answers the question > The proper use of `__slots__` is to save space in objects. Instead of having a dynamic dict that allows adding attributes to objects at anytime, there is a static structure which does not allow additions after creation. This saves the overhead of one dict for every object that uses slots The second half is wishful thinking, and off the mark: > While this is sometimes a useful optimization, it would be completely unnecessary if the Python interpreter was dynamic enough so that it would only require the dict when there actually were additions to the object. Python actually does something similar to this, only creating the `__dict__` when it is accessed, but creating lots of objects with no data is fairly ridiculous. The second paragraph oversimplifies and misses actual reasons to avoid `__slots__`. The below is *not* a real reason to avoid slots (for *actual* reasons, see the rest of my answer above.): > They change the behavior of the objects that have slots in a way that can be abused by control freaks and static typing weenies. It then goes on to discuss other ways of accomplishing that perverse goal with Python, not discussing anything to do with `__slots__`. The third paragraph is more wishful thinking. Together it is mostly off-the-mark content that the answerer didn't even author and contributes to ammunition for critics of the site. 
# Memory usage evidence Create some normal objects and slotted objects: ``` >>> class Foo(object): pass >>> class Bar(object): __slots__ = () ``` Instantiate a million of them: ``` >>> foos = [Foo() for f in xrange(1000000)] >>> bars = [Bar() for b in xrange(1000000)] ``` Inspect with `guppy.hpy().heap()`: ``` >>> guppy.hpy().heap() Partition of a set of 2028259 objects. Total size = 99763360 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1000000 49 64000000 64 64000000 64 __main__.Foo 1 169 0 16281480 16 80281480 80 list 2 1000000 49 16000000 16 96281480 97 __main__.Bar 3 12284 1 987472 1 97268952 97 str ... ``` Access the regular objects and their `__dict__` and inspect again: ``` >>> for f in foos: ... f.__dict__ >>> guppy.hpy().heap() Partition of a set of 3028258 objects. Total size = 379763480 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1000000 33 280000000 74 280000000 74 dict of __main__.Foo 1 1000000 33 64000000 17 344000000 91 __main__.Foo 2 169 0 16281480 4 360281480 95 list 3 1000000 33 16000000 4 376281480 99 __main__.Bar 4 12284 0 987472 0 377268952 99 str ... ``` This is consistent with the history of Python, from [Unifying types and classes in Python 2.2](https://www.python.org/download/releases/2.2.2/descrintro/) > If you subclass a built-in type, extra space is automatically added to the instances to accomodate `__dict__` and `__weakrefs__`. (The `__dict__` is not initialized until you use it though, so you shouldn't worry about the space occupied by an empty dictionary for each instance you create.) If you don't need this extra space, you can add the phrase "`__slots__ = []`" to your class.
Usage of __slots__?
472,000
306
2009-01-23T05:37:23Z
30,613,834
13
2015-06-03T07:43:05Z
[ "python" ]
What is the purpose of [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots) in Python — especially with respect to when I would want to use it, and when not?
In addition to the other answers, here is an example of using `__slots__`: ``` >>> class Test(object): #Must be new-style class! ... __slots__ = ['x', 'y'] ... >>> pt = Test() >>> dir(pt) ['__class__', '__delattr__', '__doc__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__slots__', '__str__', 'x', 'y'] >>> pt.x Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: x >>> pt.x = 1 >>> pt.x 1 >>> pt.z = 2 Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Test' object has no attribute 'z' >>> pt.__dict__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Test' object has no attribute '__dict__' >>> pt.__slots__ ['x', 'y'] ``` So, to implement `__slots__`, it only takes an extra line (and making your class a new-style class if it isn't already). This way you can [reduce the memory footprint of those classes 5-fold](http://dev.svetlyak.ru/using-slots-for-optimisation-in-python-en/), at the expense of having to write custom pickle code, if and when that becomes necessary.
How to read the header with pycurl
472,179
16
2009-01-23T07:21:48Z
472,243
22
2009-01-23T08:13:57Z
[ "python", "curl", "pycurl" ]
How do I read the response headers returned from a PyCurl request?
There are several solutions (by default, they are dropped). Here is an example using the option HEADERFUNCTION which lets you indicate a function to handle them. Other solutions are options WRITEHEADER (not compatible with WRITEFUNCTION) or setting HEADER to True so that they are transmitted with the body. ``` #!/usr/bin/python import pycurl import sys class Storage: def __init__(self): self.contents = '' self.line = 0 def store(self, buf): self.line = self.line + 1 self.contents = "%s%i: %s" % (self.contents, self.line, buf) def __str__(self): return self.contents retrieved_body = Storage() retrieved_headers = Storage() c = pycurl.Curl() c.setopt(c.URL, 'http://www.demaziere.fr/eve/') c.setopt(c.WRITEFUNCTION, retrieved_body.store) c.setopt(c.HEADERFUNCTION, retrieved_headers.store) c.perform() c.close() print retrieved_headers print retrieved_body ```
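Once collected, the raw header lines still have to be parsed; here is a small sketch of that step (independent of pycurl, and the function name is my own, not part of its API):

```python
def parse_headers(raw_lines):
    """Split raw HTTP header lines into (status_line, dict of fields)."""
    status = None
    fields = {}
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue                      # blank line ends the headers
        if line.startswith("HTTP/"):
            status = line                 # e.g. "HTTP/1.1 200 OK"
        elif ":" in line:
            name, _, value = line.partition(":")
            fields[name.strip().lower()] = value.strip()
    return status, fields

status, fields = parse_headers([
    "HTTP/1.1 200 OK\r\n",
    "Content-Type: text/html\r\n",
    "\r\n",
])
print(status, fields)
```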
How to specify uniqueness for a tuple of field in a Django model
472,392
15
2009-01-23T09:50:59Z
472,688
31
2009-01-23T12:16:04Z
[ "python", "django", "django-models" ]
Is there a way to specify a Model in Django such that it ensures that a pair of fields is unique in the table, in a way similar to the `unique=True` attribute for a single field? Or do I need to check this constraint in the `clean()` method?
There is a `Meta` option called `unique_together`. For example: ``` class MyModel(models.Model): field1 = models.BlahField() field2 = models.FooField() field3 = models.BazField() class Meta: unique_together = ("field1", "field2") ``` More info on the Django [documentation](http://docs.djangoproject.com/en/dev/ref/models/options/#unique-together) page.
select single item from a collection : Python
472,575
2
2009-01-23T11:16:35Z
473,337
12
2009-01-23T15:35:10Z
[ "python", "iterator", "generator" ]
I created a utility function to return the expected single item from a generator expression ``` print one(name for name in ('bob','fred') if name=='bob') ``` Is this a good way to go about it? ``` def one(g): try: val = g.next() try: g.next() except StopIteration: return val else: raise Exception('Too many values') except StopIteration: raise Exception('No values') ```
A simpler solution is to use tuple unpacking. This will already do everything you want, including checking that it contains exactly one item. Single item: ``` >>> name, = (name for name in ('bob','fred') if name=='bob') >>> name 'bob' ``` Too many items: ``` >>> name, = (name for name in ('bob','bob') if name=='bob') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: too many values to unpack ``` No items: ``` >>> name, = (name for name in ('fred','joe') if name=='bob') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: need more than 0 values to unpack ```
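For completeness (an editor's sketch, not part of the answer): the question's `one()` helper, rewritten with the built-in `next()` and the more specific `ValueError` that the unpacking trick raises:

```python
def one(iterable):
    """Return the single item of iterable; raise ValueError otherwise."""
    it = iter(iterable)
    try:
        val = next(it)
    except StopIteration:
        raise ValueError('no values')
    try:
        next(it)
    except StopIteration:
        return val
    raise ValueError('too many values')

print(one(name for name in ('bob', 'fred') if name == 'bob'))  # bob
```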
Binary data with pyserial(python serial port)
472,977
15
2009-01-23T14:02:43Z
473,057
9
2009-01-23T14:25:28Z
[ "python", "binary", "serial-port", "pyserial" ]
The serial.write() method in pyserial seems to only send string data. I have arrays like [0xc0,0x04,0x00] and want to be able to send/receive them via the serial port. Are there any separate methods for raw I/O? I think I might need to change the arrays to ['\xc0','\x04','\x00'], but even then the null character might pose a problem.
You need to convert your data to a string: ``` "\xc0\x04\x00" ``` Null characters are not a problem in Python -- strings are not null-terminated; the zero byte behaves just like any other byte, `"\x00"`. One way to do this: ``` >>> import array >>> array.array('B', [0xc0, 0x04, 0x00]).tostring() '\xc0\x04\x00' ```
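The snippet above is Python 2 idiom; on Python 3 (where pyserial's `write()` takes `bytes`), the same conversion is just the `bytes` constructor, and `array.array` grew a `tobytes()` method:

```python
# A list of integer byte values converts to bytes directly.
payload = bytes([0xc0, 0x04, 0x00])
print(payload)  # b'\xc0\x04\x00'
```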
Binary data with pyserial(python serial port)
472,977
15
2009-01-23T14:02:43Z
1,281,992
12
2009-08-15T14:19:51Z
[ "python", "binary", "serial-port", "pyserial" ]
The serial.write() method in pyserial seems to only send string data. I have arrays like [0xc0,0x04,0x00] and want to be able to send/receive them via the serial port. Are there any separate methods for raw I/O? I think I might need to change the arrays to ['\xc0','\x04','\x00'], but even then the null character might pose a problem.
An alternative method, without using the `array` module: ``` def a2s(arr): """ Array of integer byte values --> binary string """ return ''.join(chr(b) for b in arr) ```
Check if a given key already exists in a dictionary and increment it
473,099
163
2009-01-23T14:35:39Z
473,108
182
2009-01-23T14:38:49Z
[ "python", "dictionary" ]
Given a dictionary, how can I find out if a given key in that dictionary has already been set to a non-None value? I.e., I want to do this: ``` my_dict = {} if (my_dict[key] != None): my_dict[key] = 1 else: my_dict[key] += 1 ``` I.e., I want to increment the value if there's already one there, or set it to 1 otherwise.
You are looking for [`collections.defaultdict`](http://docs.python.org/library/collections.html#defaultdict) (available for Python 2.5+). This ``` from collections import defaultdict my_dict = defaultdict(int) my_dict[key] += 1 ``` will do what you want. For regular Python `dict`s, if there is no value for a given key, you will *not* get `None` when accessing the dict -- a `KeyError` will be raised. So if you want to use a regular `dict`, instead of your code you would use ``` if key in my_dict: my_dict[key] += 1 else: my_dict[key] = 1 ```
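As a small illustration (with a made-up word list), `defaultdict(int)` makes the counting loop a single statement per item, with no existence check at all:

```python
from collections import defaultdict

# Count occurrences without any explicit key-existence checks
counts = defaultdict(int)
for word in ['apple', 'banana', 'apple', 'cherry', 'apple']:
    counts[word] += 1

# Missing keys transparently start at int() == 0
assert counts['apple'] == 3
assert counts['banana'] == 1
assert counts['missing'] == 0
```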
Check if a given key already exists in a dictionary and increment it
473,099
163
2009-01-23T14:35:39Z
473,114
41
2009-01-23T14:41:38Z
[ "python", "dictionary" ]
Given a dictionary, how can I find out if a given key in that dictionary has already been set to a non-None value? I.e., I want to do this: ``` my_dict = {} if (my_dict[key] != None): my_dict[key] = 1 else: my_dict[key] += 1 ``` I.e., I want to increment the value if there's already one there, or set it to 1 otherwise.
You need the `key in dict` idiom for that. ``` if key in my_dict and not (my_dict[key] is None): # do something else: # do something else ``` However, you should probably consider using `defaultdict` (as dF suggested).
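For illustration, the same check wrapped in a small (hypothetical) helper covers all three cases — an existing non-None value, an existing None value, and a missing key:

```python
def bump(d, key):
    # Increment only when the key exists with a non-None value;
    # otherwise (missing key, or value is None) start it at 1.
    if key in d and d[key] is not None:
        d[key] += 1
    else:
        d[key] = 1

my_dict = {'a': 1, 'b': None}
bump(my_dict, 'a')  # existing non-None value -> incremented to 2
bump(my_dict, 'b')  # existing None value -> reset to 1
bump(my_dict, 'c')  # missing key -> set to 1
assert my_dict == {'a': 2, 'b': 1, 'c': 1}
```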
Check if a given key already exists in a dictionary and increment it
473,099
163
2009-01-23T14:35:39Z
473,227
8
2009-01-23T15:09:59Z
[ "python", "dictionary" ]
Given a dictionary, how can I find out if a given key in that dictionary has already been set to a non-None value? I.e., I want to do this: ``` my_dict = {} if (my_dict[key] != None): my_dict[key] = 1 else: my_dict[key] += 1 ``` I.e., I want to increment the value if there's already one there, or set it to 1 otherwise.
Agreed with cgoldberg. How I do it is: ``` try: dict[key] += 1 except KeyError: dict[key] = 1 ``` So either do it as above, or use a default dict as others have suggested. Don't use if statements. That's not Pythonic.
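A quick sketch of this EAFP pattern in a full counting loop (example data made up):

```python
counts = {}
for word in ['spam', 'eggs', 'spam', 'spam']:
    try:
        counts[word] += 1   # fast path: key already present
    except KeyError:
        counts[word] = 1    # first occurrence of this key

assert counts == {'spam': 3, 'eggs': 1}
```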
Check if a given key already exists in a dictionary and increment it
473,099
163
2009-01-23T14:35:39Z
473,344
152
2009-01-23T15:39:05Z
[ "python", "dictionary" ]
Given a dictionary, how can I find out if a given key in that dictionary has already been set to a non-None value? I.e., I want to do this: ``` my_dict = {} if (my_dict[key] != None): my_dict[key] = 1 else: my_dict[key] += 1 ``` I.e., I want to increment the value if there's already one there, or set it to 1 otherwise.
I prefer to do this in one line of code. ``` my_dict = {} my_dict[some_key] = my_dict.get(some_key, 0) + 1 ``` Dictionaries have a function, get, which takes two parameters - the key you want, and a default value if it doesn't exist. I prefer this method to defaultdict as you only want to handle the case where the key doesn't exist in this one line of code, not everywhere.
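Counting characters with this idiom, for example (input string is arbitrary):

```python
counts = {}
for ch in 'abracadabra':
    # get() returns 0 for a missing key, so no branch is needed
    counts[ch] = counts.get(ch, 0) + 1

assert counts['a'] == 5
assert counts['b'] == 2
assert counts['r'] == 2
```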
Check if a given key already exists in a dictionary and increment it
473,099
163
2009-01-23T14:35:39Z
473,501
10
2009-01-23T16:17:20Z
[ "python", "dictionary" ]
Given a dictionary, how can I find out if a given key in that dictionary has already been set to a non-None value? I.e., I want to do this: ``` my_dict = {} if (my_dict[key] != None): my_dict[key] = 1 else: my_dict[key] += 1 ``` I.e., I want to increment the value if there's already one there, or set it to 1 otherwise.
As you can see from the many answers, there are several solutions. One instance of LBYL (look before you leap) has not been mentioned yet, the `has_key()` method:

```
my_dict = {}

def add(key):
    if my_dict.has_key(key):
        my_dict[key] += 1
    else:
        my_dict[key] = 1

if __name__ == '__main__':
    add("foo")
    add("bar")
    add("foo")
    print my_dict
```

Note that `has_key()` only exists in Python 2; it was removed in Python 3, where `key in my_dict` is the equivalent test.
Check if a given key already exists in a dictionary and increment it
473,099
163
2009-01-23T14:35:39Z
2,198,921
9
2010-02-04T10:32:38Z
[ "python", "dictionary" ]
Given a dictionary, how can I find out if a given key in that dictionary has already been set to a non-None value? I.e., I want to do this: ``` my_dict = {} if (my_dict[key] != None): my_dict[key] = 1 else: my_dict[key] += 1 ``` I.e., I want to increment the value if there's already one there, or set it to 1 otherwise.
To answer the question "*how can I find out if a given index in that dict has already been set to a non-None value*", I would prefer this:

```
try:
    nonNone = my_dict[key] is not None
except KeyError:
    nonNone = False
```

This conforms to the already invoked concept of EAFP (easier to ask forgiveness than permission). It also avoids the duplicate key lookup in the dictionary that `key in my_dict and my_dict[key] is not None` would perform, which matters if the lookup is expensive.

For the actual *problem* that you have posed, i.e. incrementing an int if it exists, or setting it to a default value otherwise, I also recommend the

```
my_dict[key] = my_dict.get(key, default) + 1
```

as in the answer of Andrew Wilkinson.

There is a third solution if you are storing modifiable objects in your dictionary. A common example for this is a [multimap](http://en.wikipedia.org/wiki/Multimap), where you store a list of elements for your keys. In that case, you can use:

```
my_dict.setdefault(key, []).append(item)
```

If a value for key does not exist in the dictionary, the setdefault method will set it to the second parameter of setdefault. It behaves just like a standard my\_dict[key], returning the value for the key (which may be the newly set value).
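A short multimap sketch using `setdefault`, with made-up key/value pairs:

```python
multimap = {}
for key, item in [('a', 1), ('b', 2), ('a', 3)]:
    # setdefault returns the existing list, or installs and returns []
    multimap.setdefault(key, []).append(item)

assert multimap == {'a': [1, 3], 'b': [2]}
```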
Check if a given key already exists in a dictionary and increment it
473,099
163
2009-01-23T14:35:39Z
7,924,128
14
2011-10-28T01:03:34Z
[ "python", "dictionary" ]
Given a dictionary, how can I find out if a given key in that dictionary has already been set to a non-None value? I.e., I want to do this: ``` my_dict = {} if (my_dict[key] != None): my_dict[key] = 1 else: my_dict[key] += 1 ``` I.e., I want to increment the value if there's already one there, or set it to 1 otherwise.
I personally like using `setdefault()` ``` my_dict = {} my_dict.setdefault(some_key, 0) my_dict[some_key] += 1 ```
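For example, seeding the count with `setdefault` before incrementing (word list is arbitrary):

```python
counts = {}
for word in ['x', 'y', 'x']:
    counts.setdefault(word, 0)  # no-op if the key already exists
    counts[word] += 1

assert counts == {'x': 2, 'y': 1}
```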
How do you create a daemon in Python?
473,620
185
2009-01-23T16:48:06Z
473,702
114
2009-01-23T17:06:11Z
[ "python", "daemon" ]
[Searching on Google](http://www.google.co.uk/search?q=python+daemon) reveals x2 code snippets. The first result is to [this code recipe](http://code.activestate.com/recipes/278731/) which has a lot of documentation and explanation, along with some useful discussion underneath. However, [another code sample](http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/), whilst not containing so much documentation, includes sample code for passing commands such as start, stop and restart. It also creates a PID file which can be handy for checking if the daemon is already running etc. These samples both explain how to create the daemon. Are there any additional things that need to be considered? Is one sample better than the other, and why?
Sander Marechal's [code sample](http://web.archive.org/web/20131017130434/http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/) is superior to the original recipe, which was first posted in 2004. I once contributed a daemonizer for Pyro, but would probably use Sander's code if I had to do it over.
How do you create a daemon in Python?
473,620
185
2009-01-23T16:48:06Z
688,448
133
2009-03-27T03:38:06Z
[ "python", "daemon" ]
[Searching on Google](http://www.google.co.uk/search?q=python+daemon) reveals x2 code snippets. The first result is to [this code recipe](http://code.activestate.com/recipes/278731/) which has a lot of documentation and explanation, along with some useful discussion underneath. However, [another code sample](http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/), whilst not containing so much documentation, includes sample code for passing commands such as start, stop and restart. It also creates a PID file which can be handy for checking if the daemon is already running etc. These samples both explain how to create the daemon. Are there any additional things that need to be considered? Is one sample better than the other, and why?
There are **many fiddly things** to take care of when becoming a [well-behaved daemon process](http://www.python.org/dev/peps/pep-3143/#correct-daemon-behaviour): * prevent core dumps (many daemons run as root, and core dumps can contain sensitive information) * behave correctly inside a `chroot` gaol * set UID, GID, working directory, umask, and other process parameters appropriately for the use case * relinquish elevated `suid`, `sgid` privileges * close all open file descriptors, with exclusions depending on the use case * behave correctly if started inside an already-detached context, such as `init`, `inetd`, etc. * set up signal handlers for sensible daemon behaviour, but also with specific handlers determined by the use case * redirect the standard streams `stdin`, `stdout`, `stderr` since a daemon process no longer has a controlling terminal * handle a PID file as a cooperative advisory lock, which is a whole can of worms in itself with many contradictory but valid ways to behave * allow proper cleanup when the process is terminated * actually become a daemon process without leading to zombies Some of these are **standard**, as described in canonical Unix literature (*Advanced Programming in the UNIX Environment*, by the late W. Richard Stevens, Addison-Wesley, 1992). Others, such as stream redirection and [PID file handling](http://stackoverflow.com/questions/688343/reference-for-proper-handling-of-pid-file-on-unix), are **conventional behaviour** most daemon users would expect but that are less standardised. All of these are covered by the **[PEP 3143](http://www.python.org/dev/peps/pep-3143) “Standard daemon process library” specification**. The [python-daemon](http://pypi.python.org/pypi/python-daemon/) reference implementation works on Python 2.7 or later, and Python 3.2 or later.
How do you create a daemon in Python?
473,620
185
2009-01-23T16:48:06Z
5,412,949
38
2011-03-23T23:28:03Z
[ "python", "daemon" ]
[Searching on Google](http://www.google.co.uk/search?q=python+daemon) reveals x2 code snippets. The first result is to [this code recipe](http://code.activestate.com/recipes/278731/) which has a lot of documentation and explanation, along with some useful discussion underneath. However, [another code sample](http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/), whilst not containing so much documentation, includes sample code for passing commands such as start, stop and restart. It also creates a PID file which can be handy for checking if the daemon is already running etc. These samples both explain how to create the daemon. Are there any additional things that need to be considered? Is one sample better than the other, and why?
Note the [python-daemon](http://pypi.python.org/pypi/python-daemon/) package which solves a lot of problems behind daemons out of the box. Among other features it enables to (from Debian package description): * Detach the process into its own process group. * Set process environment appropriate for running inside a chroot. * Renounce suid and sgid privileges. * Close all open file descriptors. * Change the working directory, uid, gid, and umask. * Set appropriate signal handlers. * Open new file descriptors for stdin, stdout, and stderr. * Manage a specified PID lock file. * Register cleanup functions for at-exit processing.
How do you create a daemon in Python?
473,620
185
2009-01-23T16:48:06Z
9,047,339
74
2012-01-28T17:33:44Z
[ "python", "daemon" ]
[Searching on Google](http://www.google.co.uk/search?q=python+daemon) reveals x2 code snippets. The first result is to [this code recipe](http://code.activestate.com/recipes/278731/) which has a lot of documentation and explanation, along with some useful discussion underneath. However, [another code sample](http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/), whilst not containing so much documentation, includes sample code for passing commands such as start, stop and restart. It also creates a PID file which can be handy for checking if the daemon is already running etc. These samples both explain how to create the daemon. Are there any additional things that need to be considered? Is one sample better than the other, and why?
Here's my basic 'Howdy World' Python daemon that I start with, when I'm developing a new daemon application.

```
#!/usr/bin/python
import time
from daemon import runner

class App():
    def __init__(self):
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'
        self.pidfile_path = '/tmp/foo.pid'
        self.pidfile_timeout = 5
    def run(self):
        while True:
            print("Howdy! Gig'em! Whoop!")
            time.sleep(10)

app = App()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()
```

Note that you'll need the `python-daemon` library. In Ubuntu, you would:

```
sudo apt-get install python-daemon
```

Then just start it with `./howdy.py start`, and stop it with `./howdy.py stop`.
How do you create a daemon in Python?
473,620
185
2009-01-23T16:48:06Z
17,255,175
17
2013-06-22T20:50:01Z
[ "python", "daemon" ]
[Searching on Google](http://www.google.co.uk/search?q=python+daemon) reveals x2 code snippets. The first result is to [this code recipe](http://code.activestate.com/recipes/278731/) which has a lot of documentation and explanation, along with some useful discussion underneath. However, [another code sample](http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/), whilst not containing so much documentation, includes sample code for passing commands such as start, stop and restart. It also creates a PID file which can be handy for checking if the daemon is already running etc. These samples both explain how to create the daemon. Are there any additional things that need to be considered? Is one sample better than the other, and why?
An alternative -- create a normal, non-daemonized Python program then externally daemonize it using [supervisord](http://supervisord.org/ "supervisord"). This can save a lot of headaches, and is \*nix- and language-portable.
Shuffle an array with python, randomize array item order with python
473,973
120
2009-01-23T18:34:29Z
473,983
242
2009-01-23T18:37:27Z
[ "python", "arrays", "random", "order" ]
What's the easiest way to shuffle an array with python?
``` import random random.shuffle(array) ```
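Note that `random.shuffle` works in place and returns `None`; a quick check (with an arbitrary seed used only for reproducibility):

```python
import random

random.seed(42)                  # arbitrary seed, only for reproducibility
deck = list(range(10))
result = random.shuffle(deck)    # shuffles the list in place

assert result is None                    # shuffle returns None
assert sorted(deck) == list(range(10))   # same elements, new order
```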
Shuffle an array with python, randomize array item order with python
473,973
120
2009-01-23T18:34:29Z
473,988
75
2009-01-23T18:38:14Z
[ "python", "arrays", "random", "order" ]
What's the easiest way to shuffle an array with python?
``` import random random.shuffle(array) ```
Shuffle an array with python, randomize array item order with python
473,973
120
2009-01-23T18:34:29Z
8,582,589
16
2011-12-20T22:05:30Z
[ "python", "arrays", "random", "order" ]
What's the easiest way to shuffle an array with python?
The other answers are the easiest; however, it's a bit annoying that the `random.shuffle` method doesn't actually return anything - it just shuffles the given list in place. If you want to chain calls or just be able to declare a shuffled array in one line you can do:

```
import random
def my_shuffle(array):
    random.shuffle(array)
    return array
```

Then you can do lines like:

```
for suit in my_shuffle(['hearts', 'spades', 'clubs', 'diamonds']):
```
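With such a wrapper, a shuffled list can be produced and consumed inline, e.g.:

```python
import random

def my_shuffle(array):
    # shuffle in place, then return the same list so calls can be chained
    random.shuffle(array)
    return array

suits = my_shuffle(['hearts', 'spades', 'clubs', 'diamonds'])

# All four suits are still present, just reordered
assert sorted(suits) == ['clubs', 'diamonds', 'hearts', 'spades']
```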