title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Which is faster in Python: x**.5 or math.sqrt(x)? | 327,002 | 116 | 2008-11-29T01:24:09Z | 327,009 | 13 | 2008-11-29T01:28:24Z | [
"python",
"performance"
] | I've been wondering this for some time. As the title says, which is faster, the actual function or simply raising to the half power?
**UPDATE**
This is not a matter of premature optimization. This is simply a question of how the underlying code actually works. What is the theory of how Python code works?
I sent Guido van Rossum an email because I really wanted to know the differences between these methods.
### My email:
> There are at least 3 ways to do a square root in Python: math.sqrt, the
> '\*\*' operator and pow(x,.5). I'm just curious as to the differences in
> the implementation of each of these. When it comes to efficiency which
> is better?
### His response:
> pow and \*\* are equivalent; math.sqrt doesn't work for complex numbers,
> and links to the C sqrt() function. As to which one is
> faster, I have no idea... | How many square roots are you really performing? Are you trying to write some 3D graphics engine in Python? If not, then why go with code that is cryptic over code that is easy to read? The time difference would be less than anybody could notice in just about any application I could foresee. I really don't mean to put down your question, but it seems that you're going a little too far with premature optimization. |
Which is faster in Python: x**.5 or math.sqrt(x)? | 327,002 | 116 | 2008-11-29T01:24:09Z | 327,011 | 57 | 2008-11-29T01:32:11Z | [
"python",
"performance"
] | I've been wondering this for some time. As the title says, which is faster, the actual function or simply raising to the half power?
**UPDATE**
This is not a matter of premature optimization. This is simply a question of how the underlying code actually works. What is the theory of how Python code works?
I sent Guido van Rossum an email because I really wanted to know the differences between these methods.
### My email:
> There are at least 3 ways to do a square root in Python: math.sqrt, the
> '\*\*' operator and pow(x,.5). I'm just curious as to the differences in
> the implementation of each of these. When it comes to efficiency which
> is better?
### His response:
> pow and \*\* are equivalent; math.sqrt doesn't work for complex numbers,
> and links to the C sqrt() function. As to which one is
> faster, I have no idea... | As per comments, I've updated the code:
```
import time
import math
def timeit1():
s = time.time()
for i in xrange(750000):
z=i**.5
print "Took %f seconds" % (time.time() - s)
def timeit2(arg=math.sqrt):
s = time.time()
for i in xrange(750000):
z=arg(i)
print "Took %f seconds" % (time.time() - s)
timeit1()
timeit2()
```
Now the `math.sqrt` function is directly in a local argument, meaning it has the fastest lookup possible.
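The default-argument trick generalizes: binding a module attribute to a local name once, at function definition time, replaces a per-call attribute lookup with a fast local-variable lookup. A small sketch of the same idea (the `sqrt_all` helper is mine, not from the original answer):

```python
import math

def sqrt_all(values, sqrt=math.sqrt):
    # `sqrt` is bound once, when the def statement runs; each call then
    # uses a fast local lookup instead of resolving `math.sqrt` every time
    return [sqrt(v) for v in values]
```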
**UPDATE:** The python version seems to matter here. I used to think that `timeit1` would be faster, since when python parses "i\*\*.5" it knows, syntactically, which method to call (`__pow__` or some variant), so it doesn't have to go through the overhead of lookup that the `math.sqrt` variant does. But I might be wrong:
**Python 2.5:** 0.191000 vs. 0.224000
**Python 2.6:** 0.195000 vs. 0.139000
Also psyco seems to deal with `math.sqrt` better:
**Python 2.5 + Psyco 2.0:** 0.109000 vs. 0.043000
**Python 2.6 + Psyco 2.0:** 0.128000 vs. 0.067000
---
```
| Interpreter | x**.5, | sqrt, | sqrt faster, % |
| | seconds | seconds | |
|----------------+---------+---------+----------------|
| Python 3.2rc1+ | 0.32 | 0.27 | 19 |
| Python 3.1.2 | 0.136 | 0.088 | 55 |
| Python 3.0.1 | 0.155 | 0.102 | 52 |
| Python 2.7 | 0.132 | 0.079 | 67 |
| Python 2.6.6 | 0.121 | 0.075 | 61 |
| PyPy 1.4.1 | 0.083 | 0.0159 | 422 |
| Jython 2.5.1 | 0.132 | 0.22 | -40 |
| Python 2.5.5 | 0.129 | 0.125 | 3 |
| Python 2.4.6 | 0.131 | 0.123 | 7 |
#+TBLFM: $4=100*($2-$3)/$3;%.0f
```
Table results produced on machine:
```
$ uname -vms
Linux #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64
$ cat /proc/cpuinfo | grep 'model name' | head -1
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
```
To reproduce results:
* get source: [`git clone git://gist.github.com/783011.git gist-783011`](https://gist.github.com/783011)
* install [`tox`](http://codespeak.net/tox/): `pip install tox`
* run `tox` from the directory with `tox.ini` file. |
Which is faster in Python: x**.5 or math.sqrt(x)? | 327,002 | 116 | 2008-11-29T01:24:09Z | 327,024 | 8 | 2008-11-29T01:45:49Z | [
"python",
"performance"
] | I've been wondering this for some time. As the title says, which is faster, the actual function or simply raising to the half power?
**UPDATE**
This is not a matter of premature optimization. This is simply a question of how the underlying code actually works. What is the theory of how Python code works?
I sent Guido van Rossum an email because I really wanted to know the differences between these methods.
### My email:
> There are at least 3 ways to do a square root in Python: math.sqrt, the
> '\*\*' operator and pow(x,.5). I'm just curious as to the differences in
> the implementation of each of these. When it comes to efficiency which
> is better?
### His response:
> pow and \*\* are equivalent; math.sqrt doesn't work for complex numbers,
> and links to the C sqrt() function. As to which one is
> faster, I have no idea... | In these micro-benchmarks, math.sqrt will be slower, because of the slight time it takes to look up sqrt in the math namespace. You can improve it slightly with
```
from math import sqrt
```
Even then, running a few variations through timeit shows a slight (4-5%) performance advantage for "x\*\*.5".
Interestingly, doing
```
import math
sqrt = math.sqrt
```
sped it up even more, to within 1% difference in speed, with very little statistical significance.
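Whichever spelling wins on speed, all of them compute the same value for non-negative floats; only the name-lookup cost differs. A quick sanity check (hedged: bit-for-bit equality of `x**.5` and `sqrt(x)` is only guaranteed for exactly representable results such as 4.0):

```python
import math
from math import sqrt

local_sqrt = math.sqrt  # module-level rebinding, as in the answer
x = 4.0
# all four spellings agree for this exactly representable input
assert sqrt(x) == math.sqrt(x) == local_sqrt(x) == x ** 0.5 == 2.0
```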
I will repeat Kibbee, and say that this is probably a premature optimization. |
Which is faster in Python: x**.5 or math.sqrt(x)? | 327,002 | 116 | 2008-11-29T01:24:09Z | 327,048 | 13 | 2008-11-29T02:16:47Z | [
"python",
"performance"
] | I've been wondering this for some time. As the title says, which is faster, the actual function or simply raising to the half power?
**UPDATE**
This is not a matter of premature optimization. This is simply a question of how the underlying code actually works. What is the theory of how Python code works?
I sent Guido van Rossum an email because I really wanted to know the differences between these methods.
### My email:
> There are at least 3 ways to do a square root in Python: math.sqrt, the
> '\*\*' operator and pow(x,.5). I'm just curious as to the differences in
> the implementation of each of these. When it comes to efficiency which
> is better?
### His response:
> pow and \*\* are equivalent; math.sqrt doesn't work for complex numbers,
> and links to the C sqrt() function. As to which one is
> faster, I have no idea... | * first rule of optimization: *don't do it*
* second rule: *don't do it*, yet
Here's some timings (Python 2.5.2, Windows):
```
$ python -mtimeit -s"from math import sqrt; x = 123" "x**.5"
1000000 loops, best of 3: 0.445 usec per loop
$ python -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
1000000 loops, best of 3: 0.574 usec per loop
$ python -mtimeit -s"import math; x = 123" "math.sqrt(x)"
1000000 loops, best of 3: 0.727 usec per loop
```
This test shows that `x**.5` is slightly faster than `sqrt(x)`.
For the Python 3.0 the result is the opposite:
```
$ \Python30\python -mtimeit -s"from math import sqrt; x = 123" "x**.5"
1000000 loops, best of 3: 0.803 usec per loop
$ \Python30\python -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
1000000 loops, best of 3: 0.695 usec per loop
$ \Python30\python -mtimeit -s"import math; x = 123" "math.sqrt(x)"
1000000 loops, best of 3: 0.761 usec per loop
```
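The same comparison can also be scripted with the `timeit` module instead of the command line (a sketch in Python 3 syntax; absolute numbers vary by machine and interpreter, so no expected output is shown):

```python
import timeit

# mirror the three command-line invocations above
t_pow = timeit.timeit('x ** .5', setup='x = 123', number=100000)
t_sqrt = timeit.timeit('sqrt(x)', setup='from math import sqrt; x = 123', number=100000)
t_attr = timeit.timeit('math.sqrt(x)', setup='import math; x = 123', number=100000)
print(t_pow, t_sqrt, t_attr)  # three wall-clock times in seconds
```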
`math.sqrt(x)` is always faster than `x**.5` on another machine (Ubuntu, Python 2.6 and 3.1):
```
$ python -mtimeit -s"from math import sqrt; x = 123" "x**.5"
10000000 loops, best of 3: 0.173 usec per loop
$ python -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
10000000 loops, best of 3: 0.115 usec per loop
$ python -mtimeit -s"import math; x = 123" "math.sqrt(x)"
10000000 loops, best of 3: 0.158 usec per loop
$ python3.1 -mtimeit -s"from math import sqrt; x = 123" "x**.5"
10000000 loops, best of 3: 0.194 usec per loop
$ python3.1 -mtimeit -s"from math import sqrt; x = 123" "sqrt(x)"
10000000 loops, best of 3: 0.123 usec per loop
$ python3.1 -mtimeit -s"import math; x = 123" "math.sqrt(x)"
10000000 loops, best of 3: 0.157 usec per loop
``` |
Multiple Django Admin Sites on one Apache... When I log into one I get logged out of the other | 327,142 | 3 | 2008-11-29T04:01:45Z | 327,307 | 7 | 2008-11-29T07:27:35Z | [
"python",
"django",
"admin"
] | I have two Django projects and applications running on the same Apache installation. Both projects and both applications have the same name, for example myproject.myapplication. They are each in separately named directories so it looks like .../dir1/myproject/myapplication and .../dir2/myproject/myapplication.
Everything about the actual public facing applications works fine. When I log into either of the admin sites it seems ok, but if I switch and do any work on the opposite admin site I get logged out of the first one. In short I can't be logged into both admin sites at once. Any help would be appreciated. | Set the [SESSION\_COOKIE\_DOMAIN](http://docs.djangoproject.com/en/dev/topics/http/sessions/#session-cookie-domain) option. You need to set the domain for each of your sites so the cookies don't override each other.
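A minimal sketch of what that looks like in each project's settings module (the domain values here are hypothetical placeholders, not from the question):

```python
# settings.py for the first project (hypothetical domain)
SESSION_COOKIE_DOMAIN = 'site1.example.com'

# settings.py for the second project (hypothetical domain)
SESSION_COOKIE_DOMAIN = 'site2.example.com'
```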
You can also use SESSION\_COOKIE\_NAME to make the cookie names different for each site. |
in python, is there a one line pythonic way to get a list of keys from a dictionary in sorted order? | 327,191 | 2 | 2008-11-29T05:03:14Z | 327,210 | 17 | 2008-11-29T05:20:37Z | [
"python",
"iterator",
"syntactic-sugar"
] | The list sort method is a modifier function that returns None.
So if I want to iterate through all of the keys in a dictionary I cannot do:
```
for k in somedictionary.keys().sort():
dosomething()
```
instead, i must:
```
keys = somedictionary.keys()
keys.sort()
for k in keys:
dosomething()
```
Is there a pretty way to iterate through these keys in sorted order without having to break it up in to multiple steps? | ```
for k in sorted(somedictionary.keys()):
doSomething(k)
```
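Since iterating a dict yields its keys, `sorted(somedictionary)` alone is equivalent and slightly shorter; a quick check:

```python
d = {'b': 2, 'a': 1, 'c': 3}
# sorted() over the dict itself sorts its keys
assert sorted(d) == sorted(d.keys()) == ['a', 'b', 'c']
```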
Note that you can also get all of the keys and values sorted by keys like this:
```
for k, v in sorted(somedictionary.iteritems()):
doSomething(k, v)
``` |
Memory Efficient Alternatives to Python Dictionaries | 327,223 | 32 | 2008-11-29T05:33:26Z | 327,295 | 26 | 2008-11-29T07:06:58Z | [
"python",
"memory",
"data-structures"
] | In one of my current side projects, I am scanning through some text looking at the frequency of word triplets. In my first go at it, I used the default dictionary three levels deep. In other words, topDictionary[word1][word2][word3] returns the number of times these words appear in the text, topdictionary[word1][word2] returns a dictionary with all the words that appeared following words 1 and 2, etc.
This functions correctly, but it is very memory intensive. In my initial tests it used something like 20 times the memory of just storing the triplets in a text file, which seems like an overly large amount of memory overhead.
My suspicion is that many of these dictionaries are being created with many more slots than are actually being used, so I want to replace the dictionaries with something else that is more memory efficient when used in this manner. I would strongly prefer a solution that allows key lookups along the lines of the dictionaries.
From what I know of data structures, a balanced binary search tree using something like red-black or AVL would probably be ideal, but I would really prefer not to implement them myself. If possible, I'd prefer to stick with standard python libraries, but I'm definitely open to other alternatives if they would work best.
So, does anyone have any suggestions for me?
Edited to add:
Thanks for the responses so far. A few of the answers so far have suggested using tuples, which didn't really do much for me when I condensed the first two words into a tuple. I am hesitant to use all three as a key since I want it to be easy to look up all third words given the first two. (i.e. I want something like the result of topDict[word1,word2].keys()).
The current dataset I am playing around with is the most recent version of [Wikipedia For Schools](http://www.soschildrensvillages.org.uk/charity-news/wikipedia-for-schools.htm). The results of parsing the first thousand pages, for example, is something like 11MB for a text file where each line is the three words and the count all tab separated. Storing the text in the dictionary format I am now using takes around 185MB. I know that there will be some additional overhead for pointers and whatnot, but the difference seems excessive.
Once again, thank you all for the responses so far. | Some measurements. I took 10MB of free e-book text and computed trigram frequencies, producing a 24MB file. Storing it in different simple Python data structures took this much space in kB, measured as RSS from running ps, where d is a dict, keys and freqs are lists, and a,b,c,freq are the fields of a trigram record:
```
295760 S. Lott's answer
237984 S. Lott's with keys interned before passing in
203172 [*] d[(a,b,c)] = int(freq)
203156 d[a][b][c] = int(freq)
189132 keys.append((a,b,c)); freqs.append(int(freq))
146132 d[intern(a),intern(b)][intern(c)] = int(freq)
145408 d[intern(a)][intern(b)][intern(c)] = int(freq)
83888 [*] d[a+' '+b+' '+c] = int(freq)
82776 [*] d[(intern(a),intern(b),intern(c))] = int(freq)
68756 keys.append((intern(a),intern(b),intern(c))); freqs.append(int(freq))
60320 keys.append(a+' '+b+' '+c); freqs.append(int(freq))
50556 pair array
48320 squeezed pair array
33024 squeezed single array
```
The entries marked [\*] have no efficient way to look up a pair (a,b); they're listed only because others have suggested them (or variants of them). (I was sort of irked into making this because the top-voted answers were not helpful, as the table shows.)
'Pair array' is the scheme below in my original answer ("I'd start with the array with keys
being the first two words..."), where the value table for each pair is
represented as a single string. 'Squeezed pair array' is the same,
leaving out the frequency values that are equal to 1 (the most common
case). 'Squeezed single array' is like squeezed pair array, but gloms key and value together as one string (with a separator character). The squeezed single array code:
```
import collections
def build(file):
pairs = collections.defaultdict(list)
for line in file: # N.B. file assumed to be already sorted
a, b, c, freq = line.split()
key = ' '.join((a, b))
pairs[key].append(c + ':' + freq if freq != '1' else c)
out = open('squeezedsinglearrayfile', 'w')
for key in sorted(pairs.keys()):
out.write('%s|%s\n' % (key, ' '.join(pairs[key])))
def load():
return open('squeezedsinglearrayfile').readlines()
if __name__ == '__main__':
build(open('freqs'))
```
I haven't written the code to look up values from this structure (use bisect, as mentioned below), or implemented the fancier compressed structures also described below.
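For completeness, a hedged sketch of what that bisect lookup could look like (the `lookup` helper is my addition, not from the original answer; it assumes the exact line format produced by `build` and ignores edge cases where '|' ordering could differ from key ordering):

```python
import bisect

def lookup(lines, a, b):
    # lines: sorted list of 'w1 w2|w3:freq w3 ...' strings written by build()
    key = a + ' ' + b + '|'
    i = bisect.bisect_left(lines, key)
    if i == len(lines) or not lines[i].startswith(key):
        return {}
    out = {}
    for item in lines[i].rstrip('\n').split('|', 1)[1].split():
        word, _, freq = item.partition(':')
        out[word] = int(freq) if freq else 1  # a missing freq means 1 (squeezed)
    return out
```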
**Original answer:** A simple sorted array of strings, each string being a space-separated concatenation of words, searched using the bisect module, should be worth trying for a start. This saves space on pointers, etc. It still wastes space due to the repetition of words; there's a standard trick to strip out common prefixes, with another level of index to get them back, but that's rather more complex and slower. (The idea is to store successive chunks of the array in a compressed form that must be scanned sequentially, along with a random-access index to each chunk. Chunks are big enough to compress, but small enough for reasonable access time. The particular compression scheme applicable here: if successive entries are 'hello george' and 'hello world', make the second entry be '6world' instead. (6 being the length of the prefix in common.) Or maybe you could get away with using [zlib](http://www.python.org/doc/2.5.2/lib/module-zlib.html)? Anyway, you can find out more in this vein by looking up dictionary structures used in full-text search.) So specifically, I'd start with the array with keys being the first two words, with a parallel array whose entries list the possible third words and their frequencies. It might still suck, though -- I think you may be out of luck as far as batteries-included memory-efficient options.
Also, binary tree structures are *not* recommended for memory efficiency here. E.g., [this paper](http://www.cdf.toronto.edu/~csc148h/fall/assignment3/bursttries.pdf) tests a variety of data structures on a similar problem (unigrams instead of trigrams though) and finds a hashtable to beat all of the tree structures by that measure.
I should have mentioned, as someone else did, that the sorted array could be used just for the wordlist, not bigrams or trigrams; then for your 'real' data structure, whatever it is, you use integer keys instead of strings -- indices into the wordlist. (But this keeps you from exploiting common prefixes except in the wordlist itself. Maybe I shouldn't suggest this after all.) |
Memory Efficient Alternatives to Python Dictionaries | 327,223 | 32 | 2008-11-29T05:33:26Z | 327,313 | 8 | 2008-11-29T07:36:31Z | [
"python",
"memory",
"data-structures"
] | In one of my current side projects, I am scanning through some text looking at the frequency of word triplets. In my first go at it, I used the default dictionary three levels deep. In other words, topDictionary[word1][word2][word3] returns the number of times these words appear in the text, topdictionary[word1][word2] returns a dictionary with all the words that appeared following words 1 and 2, etc.
This functions correctly, but it is very memory intensive. In my initial tests it used something like 20 times the memory of just storing the triplets in a text file, which seems like an overly large amount of memory overhead.
My suspicion is that many of these dictionaries are being created with many more slots than are actually being used, so I want to replace the dictionaries with something else that is more memory efficient when used in this manner. I would strongly prefer a solution that allows key lookups along the lines of the dictionaries.
From what I know of data structures, a balanced binary search tree using something like red-black or AVL would probably be ideal, but I would really prefer not to implement them myself. If possible, I'd prefer to stick with standard python libraries, but I'm definitely open to other alternatives if they would work best.
So, does anyone have any suggestions for me?
Edited to add:
Thanks for the responses so far. A few of the answers so far have suggested using tuples, which didn't really do much for me when I condensed the first two words into a tuple. I am hesitant to use all three as a key since I want it to be easy to look up all third words given the first two. (i.e. I want something like the result of topDict[word1,word2].keys()).
The current dataset I am playing around with is the most recent version of [Wikipedia For Schools](http://www.soschildrensvillages.org.uk/charity-news/wikipedia-for-schools.htm). The results of parsing the first thousand pages, for example, is something like 11MB for a text file where each line is the three words and the count all tab separated. Storing the text in the dictionary format I am now using takes around 185MB. I know that there will be some additional overhead for pointers and whatnot, but the difference seems excessive.
Once again, thank you all for the responses so far. | Use tuples.
Tuples can be keys to dictionaries, so you don't need to nest dictionaries.
```
d = {}
d[ word1, word2, word3 ] = 1
```
Also as a plus, you could use defaultdict
* so that elements that don't have entries always return 0
* and so that you can say `d[w1,w2,w3] += 1` without checking if the key already exists
example:
```
from collections import defaultdict
d = defaultdict(int)
d["first","word","tuple"] += 1
```
If you need to find all words "word3" that are tupled with (word1,word2), then search for them in dictionary.keys() using a list comprehension.
If you have a tuple, t, you can get the first two items using slices:
```
>>> a = (1,2,3)
>>> a[:2]
(1, 2)
```
a small example for searching tuples with list comprehensions:
```
>>> b = [(1,2,3),(1,2,5),(3,4,6)]
>>> search = (1,2)
>>> [a[2] for a in b if a[:2] == search]
[3, 5]
```
Here we get a list of all items that appear as the third element in the tuples that start with (1,2). |
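Scanning every key like that is O(n) per lookup; if the (word1, word2) lookup needs to be fast, a middle ground (my sketch, not from the answer above) is a dict keyed by the two-word tuple whose values count the third words:

```python
from collections import defaultdict

# pair key -> {third item: count}; O(1) lookup of all third items for a pair
d = defaultdict(lambda: defaultdict(int))
for a, b, c in [(1, 2, 3), (1, 2, 5), (3, 4, 6), (1, 2, 3)]:
    d[a, b][c] += 1

assert dict(d[1, 2]) == {3: 2, 5: 1}  # third items after (1, 2), with counts
```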
How are Python's Built In Dictionaries Implemented | 327,311 | 146 | 2008-11-29T07:35:31Z | 327,378 | 8 | 2008-11-29T09:22:46Z | [
"python",
"data-structures",
"dictionary"
] | Does anyone know how the built in dictionary type for python is implemented? My understanding is that it is some sort of hash table, but I haven't been able to find any sort of definitive answer. | [Pure Python Dictionary Implementation](http://pybites.blogspot.com/2008/10/pure-python-dictionary-implementation.html)
> For those curious about how CPython's dict implementation works, I've written a Python implementation using the same algorithms. |
How are Python's Built In Dictionaries Implemented | 327,311 | 146 | 2008-11-29T07:35:31Z | 334,953 | 8 | 2008-12-02T18:29:16Z | [
"python",
"data-structures",
"dictionary"
] | Does anyone know how the built in dictionary type for python is implemented? My understanding is that it is some sort of hash table, but I haven't been able to find any sort of definitive answer. | Here's a link to the [actual implementation](http://svn.python.org/view/python/trunk/Objects/dictobject.c?rev=66801&view=auto "dict implementation") in the python SVN repository. That should be the most definite answer. |
How are Python's Built In Dictionaries Implemented | 327,311 | 146 | 2008-11-29T07:35:31Z | 2,996,689 | 37 | 2010-06-08T11:00:37Z | [
"python",
"data-structures",
"dictionary"
] | Does anyone know how the built in dictionary type for python is implemented? My understanding is that it is some sort of hash table, but I haven't been able to find any sort of definitive answer. | Python Dictionaries use [Open addressing](http://en.wikipedia.org/wiki/Hash_table#Open_addressing) ([reference inside Beautiful code](http://books.google.co.in/books?id=gJrmszNHQV4C&lpg=PP1&hl=sv&pg=PA298#v=onepage&q&f=false))
**NB!** *Open addressing*, a.k.a. *closed hashing*, should, as noted in Wikipedia, not be confused with its opposite, *open hashing* (which we see in the accepted answer).
Open addressing means that the dict uses array slots, and when an object's primary position is taken in the dict, the object's spot is sought at a different index in the same array, using a "perturbation" scheme, where the object's hash value plays part. |
How are Python's Built In Dictionaries Implemented | 327,311 | 146 | 2008-11-29T07:35:31Z | 8,682,049 | 27 | 2011-12-30T17:15:04Z | [
"python",
"data-structures",
"dictionary"
] | Does anyone know how the built in dictionary type for python is implemented? My understanding is that it is some sort of hash table, but I haven't been able to find any sort of definitive answer. | At PyCon 2010, Brandon Craig Rhodes gave an [excellent talk](http://www.youtube.com/watch?v=C4Kc8xzcA68) about the Python dictionary. It provides a great overview of the dictionary implementation with examples and visuals. If you have 45 minutes (or even just 15), I would recommend watching the talk before proceeding to the actual implementation. |
How are Python's Built In Dictionaries Implemented | 327,311 | 146 | 2008-11-29T07:35:31Z | 9,022,835 | 241 | 2012-01-26T17:52:00Z | [
"python",
"data-structures",
"dictionary"
] | Does anyone know how the built in dictionary type for python is implemented? My understanding is that it is some sort of hash table, but I haven't been able to find any sort of definitive answer. | Here is everything about Python dicts that I was able to put together (probably more than anyone would like to know; but the answer is comprehensive).
* Python dictionaries are implemented as **hash tables**.
* Hash tables must allow for **hash collisions** i.e. even if two distinct keys have the same hash value, the table's implementation must have a strategy to insert and retrieve the key and value pairs unambiguously.
* Python `dict` uses **open addressing** to resolve hash collisions (explained below) (see [dictobject.c:296-297](http://hg.python.org/cpython/file/52f68c95e025/Objects/dictobject.c#l296)).
* Python hash table is just a contiguous block of memory (sort of like an array, so you can do an `O(1)` lookup by index).
* **Each slot in the table can store one and only one entry.** This is important.
* Each **entry** in the table is actually a combination of the three values: **< hash, key, value >**. This is implemented as a C struct (see [dictobject.h:51-56](http://hg.python.org/cpython/file/52f68c95e025/Include/dictobject.h#l51)).
* The figure below is a logical representation of a Python hash table. In the figure below, `0, 1, ..., i, ...` on the left are indices of the **slots** in the hash table (they are just for illustrative purposes and are not stored along with the table obviously!).
```
# Logical model of Python Hash table
-+-----------------+
0| <hash|key|value>|
-+-----------------+
1| ... |
-+-----------------+
.| ... |
-+-----------------+
i| ... |
-+-----------------+
.| ... |
-+-----------------+
n| ... |
-+-----------------+
```
* When a new dict is initialized it starts with 8 *slots*. (see [dictobject.h:49](http://hg.python.org/cpython/file/52f68c95e025/Include/dictobject.h#l49))
* When adding entries to the table, we start with some slot, `i`, that is based on the hash of the key. CPython initially uses `i = hash(key) & mask` (where `mask = PyDict_MINSIZE - 1`, but that's not really important). Just note that the initial slot, i, that is checked depends on the *hash* of the key.
* If that slot is empty, the entry is added to the slot (by entry, I mean, `<hash|key|value>`). But what if that slot is occupied!? Most likely because another entry has the same hash (hash collision!)
* If the slot is occupied, CPython (and even PyPy) compares the **hash AND the key** (by compare I mean `==` comparison not the `is` comparison) of the entry in the slot against the key of the current entry to be inserted ([dictobject.c:337,344-345](http://hg.python.org/cpython/file/52f68c95e025/Objects/dictobject.c#l337)). If *both* match, then it thinks the entry already exists, gives up and moves on to the next entry to be inserted. If either the hash or the key doesn't match, it starts **probing**.
* Probing just means it searches slot by slot to find an empty slot. Technically we could just go one by one, `i+1, i+2, ...` and use the first available one (that's linear probing). But for reasons explained beautifully in the comments (see [dictobject.c:33-126](http://hg.python.org/cpython/file/52f68c95e025/Objects/dictobject.c#l33)), CPython uses **random probing**. In random probing, the next slot is picked in a pseudo random order. The entry is added to the first empty slot. For this discussion, the actual algorithm used to pick the next slot is not really important (see [dictobject.c:33-126](http://hg.python.org/cpython/file/52f68c95e025/Objects/dictobject.c#l33) for the algorithm for probing). What is important is that the slots are probed until the first empty slot is found.
* The same thing happens for lookups: it starts with the initial slot i (where i depends on the hash of the key). If the hash or the key doesn't match the entry in the slot, it starts probing, until it finds a slot with a match. If all slots are exhausted, it reports a fail.
* BTW, the `dict` will be resized if it is two-thirds full. This avoids slowing down lookups. (see [dictobject.h:64-65](http://hg.python.org/cpython/file/52f68c95e025/Include/dictobject.h#l64))
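The probing behaviour can be illustrated with a toy table using linear probing (a sketch of open addressing itself, not of CPython's actual perturbation scheme, and with no resizing, so it assumes the table never fills up):

```python
class ToyDict(object):
    # minimal open-addressing table: fixed 8 slots, linear probing
    def __init__(self):
        self.slots = [None] * 8  # each slot holds (hash, key, value) or None

    def _probe(self, key):
        i = hash(key) & 7  # initial slot derived from the hash, as in CPython
        # walk forward until we hit an empty slot or this exact key
        while self.slots[i] is not None and self.slots[i][1] != key:
            i = (i + 1) & 7  # linear probing; CPython uses random probing
        return i

    def put(self, key, value):
        i = self._probe(key)
        self.slots[i] = (hash(key), key, value)

    def get(self, key):
        entry = self.slots[self._probe(key)]
        if entry is None:
            raise KeyError(key)
        return entry[2]
```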
NOTE: I did the research on Python Dict implementation in response to my own [question](http://stackoverflow.com/questions/9010222/) about how multiple entries in a dict can have same hash values. I posted a slightly edited version of the response here because all the research is very relevant for this question as well. |
storing unbound python functions in a class object | 327,483 | 11 | 2008-11-29T12:21:47Z | 327,488 | 15 | 2008-11-29T12:28:33Z | [
"python",
"function"
] | I'm trying to do the following in python:
In a file called foo.py:
```
# simple function that does something:
def myFunction(a,b,c):
print "call to myFunction:",a,b,c
# class used to store some data:
class data:
fn = None
# assign function to the class for storage.
data.fn = myFunction
```
And then in a file called bar.py:
```
import foo
d = foo.data
d.fn(1,2,3)
```
However, I get the following error:
> TypeError: unbound method f() must be called with data instance as first argument (got int instance instead)
This is fair enough I suppose - Python is treating d.fn as an unbound method. However, I want it to treat it as a normal function - so I can call it without having to add an unused 'self' parameter to the myFunction definition.
So the question is:
***How can I store a function in a class object without the function becoming bound to that class?*** | ```
data.fn = staticmethod(myFunction)
```
should do the trick. |
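A self-contained version of the fix (names adapted from the question; note that in Python 3 a plain function stored on a class is only bound when looked up on an instance, so the `staticmethod` wrapper matters mainly for instance access there, while Python 2 needs it in both cases):

```python
def my_function(a, b, c):
    return (a, b, c)

class Data(object):
    fn = staticmethod(my_function)  # stored without becoming a bound method

assert Data.fn(1, 2, 3) == (1, 2, 3)    # called via the class
assert Data().fn(1, 2, 3) == (1, 2, 3)  # via an instance; no self is passed
```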
Storing and updating lists in Python dictionaries: why does this happen? | 327,534 | 19 | 2008-11-29T13:40:05Z | 327,548 | 57 | 2008-11-29T13:46:42Z | [
"python",
"dictionary",
"list"
] | I have a list of data that looks like the following:
```
// timestep,x_position,y_position
0,4,7
0,2,7
0,9,5
0,6,7
1,2,5
1,4,7
1,9,0
1,6,8
```
... and I want to make this look like:
```
0, (4,7), (2,7), (9,5), (6,7)
1, (2,5), (4,7), (9,0), (6,8)
```
My plan was to use a dictionary, where the value of t is the key for the dictionary, and the value against the key would be a list. I could then append each (x,y) to the list. Something like:
```
# where t = 0, c = (4,7), d = {}
# code 1
d[t].append(c)
```
Now this causes IDLE to fail. However, if I do:
```
# code 2
d[t] = []
d[t].append(c)
```
... this works.
So the question is: why does code 2 work, but code 1 doesn't?
PS Any improvement on what I'm planning on doing would be of great interest!! I think I will have to check the dictionary on each loop through the input to see if the dictionary key already exists, I guess by using something like max(d.keys()): if it is there, append data, if not create the empty list as the dictionary value, and then append data on the next loop through. | Let's look at
```
d[t].append(c)
```
What is the value of `d[t]`? Try it.
```
d = {}
t = 0
d[t]
```
What do you get? Oh. There's nothing in `d` that has a key of `t`.
Now try this.
```
d[t] = []
d[t]
```
Ahh. Now there's something in `d` with a key of `t`.
There are several things you can do.
1. Use example 2.
2. Use `setdefault`. `d.setdefault(t,[]).append(c)`.
3. Use [collections.defaultdict](http://www.python.org/doc/2.5.2/lib/defaultdict-objects.html). You'd use a `defaultdict(list)` instead of a simple dictionary, `{}`.
---
**Edit 1. Optimization**
Given input lines from a file in the above form: ts, x, y, the grouping process is needless. There's no reason to go from a simple list of ( ts, x, y ) to a more complex
list of ( ts, (x,y), (x,y), (x,y), ... ). The original list can be processed exactly as it arrived.
```
d= collections.defaultdict(list)
for ts, x, y in someFileOrListOrQueryOrWhatever:
d[ts].append( (x,y) )
```
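For instance, run against the sample data from the question (assuming the rows have already been split into integer tuples):

```python
import collections

# the question's data as (timestep, x, y) tuples
rows = [(0, 4, 7), (0, 2, 7), (0, 9, 5), (0, 6, 7),
        (1, 2, 5), (1, 4, 7), (1, 9, 0), (1, 6, 8)]

d = collections.defaultdict(list)
for ts, x, y in rows:
    d[ts].append((x, y))

print(dict(d))
# {0: [(4, 7), (2, 7), (9, 5), (6, 7)], 1: [(2, 5), (4, 7), (9, 0), (6, 8)]}
```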
---
**Edit 2. Answer Question**
"when initialising a dictionary, you need to tell the dictionary what the key-value data structure will look like?"
I'm not sure what the question means. Since all dictionaries *are* key-value structures, the question's not very clear. So I'll review the three alternatives, which may answer the question.
**Example 2**.
Initialization
```
d= {}
```
Use
```
if t not in d:
d[t] = list()
d[t].append( c )
```
Each dictionary value must be initialized to some useful structure. In this case, we check to see if the key is present; when the key is missing, we create the key and assign an empty list.
**Setdefault**
Initialization
```
d= {}
```
Use
```
d.setdefault(t,list()).append( c )
```
In this case, we exploit the `setdefault` method to either fetch a value associated with a key or create a new value associated with a missing key.
**default dict**
Initialization
```
import collections
d = collections.defaultdict(list)
```
Use
```
d[t].append( c )
```
The `defaultdict` uses an initializer function for missing keys. In this case, we provide the `list` function so that a new, empty list is created for a missing key. |
Storing and updating lists in Python dictionaries: why does this happen? | 327,534 | 19 | 2008-11-29T13:40:05Z | 327,575 | 9 | 2008-11-29T14:28:09Z | [
"python",
"dictionary",
"list"
] | I have a list of data that looks like the following:
```
// timestep,x_position,y_position
0,4,7
0,2,7
0,9,5
0,6,7
1,2,5
1,4,7
1,9,0
1,6,8
```
... and I want to make this look like:
```
0, (4,7), (2,7), (9,5), (6,7)
1, (2,5), (4,7), (9,0), (6,8)
```
My plan was to use a dictionary, where the value of t is the key for the dictionary, and the value against the key would be a list. I could then append each (x,y) to the list. Something like:
```
# where t = 0, c = (4,7), d = {}
# code 1
d[t].append(c)
```
Now this causes IDLE to fail. However, if I do:
```
# code 2
d[t] = []
d[t].append(c)
```
... this works.
So the question is: why does code 2 work, but code 1 doesn't?
PS Any improvement on what I'm planning on doing would be of great interest!! I think I will have to check the dictionary on each loop through the input to see if the dictionary key already exists, I guess by using something like max(d.keys()): if it is there, append data, if not create the empty list as the dictionary value, and then append data on the next loop through. | I think you want to use setdefault. It's a bit weird to use but does exactly what you need.
```
d.setdefault(t, []).append(c)
```
The `.setdefault` method will return the element (in our case, a list) that's bound to the dict's key `t` if that key exists. If it doesn't, it will bind an empty list to the key `t` and return it. So either way, a list will be there that the `.append` method can then append the tuple `c` to. |
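A quick demonstration of that behaviour, using the values from the question:

```python
d = {}
t, c = 0, (4, 7)

d.setdefault(t, []).append(c)       # key missing: binds [] to t, then appends
d.setdefault(t, []).append((2, 7))  # key present: the existing list is reused

print(d)  # {0: [(4, 7), (2, 7)]}
```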
Suggestion to implement a text Menu without switch case | 327,597 | 2 | 2008-11-29T15:01:28Z | 327,741 | 12 | 2008-11-29T17:32:04Z | [
"python"
I'm taking my first steps in Python. I saw that we don't have a switch case statement,
so how would you guys implement a text menu in Python?
Thanks | You might do something like this:
```
def action1():
pass # put a function here
def action2():
pass # blah blah
def action3():
pass # and so on
def no_such_action():
pass # print a message indicating there's no such action
def main():
actions = {"foo": action1, "bar": action2, "baz": action3}
while True:
print_menu()
selection = raw_input("Your selection: ")
if "quit" == selection:
return
toDo = actions.get(selection, no_such_action)
toDo()
if __name__ == "__main__":
main()
```
This puts all your possible action functions into a dictionary, with the key being what you will input to run the function. It then looks the input up in the dictionary and retrieves the matching function; if the input action doesn't exist, it retrieves no\_such\_action instead.

After you have a basic understanding of how this works, if you're considering doing a Serious Business command-line-type application, I would look at the [`cmd` framework](http://docs.python.org/library/cmd.html) for command-line applications. |
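For reference, a minimal sketch of a menu built on the `cmd` module (the command names here are illustrative): each `do_*` method becomes a command, and returning a true value from a handler ends the loop.

```python
import cmd

class Menu(cmd.Cmd):
    intro = 'Type help to list commands.'
    prompt = '> '

    def do_greet(self, arg):
        """Print a greeting."""
        print('hello')

    def do_quit(self, arg):
        """Exit the menu."""
        return True  # a true return value stops cmdloop()

# Menu().cmdloop() would start the interactive loop
```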
Read the last line of the file | 327,776 | 3 | 2008-11-29T18:03:03Z | 327,825 | 9 | 2008-11-29T18:34:42Z | [
"python"
] | Imagine I have a file with
Xpto,50,30,60
Xpto,a,v,c
Xpto,1,9,0
Xpto,30,30,60
That txt file can be appended to many times, and when I open the file I only want to get the values of the last line. How can I do that in Python by reading the last line?
Thanks | I think my answer from the [last time this came up](http://stackoverflow.com/questions/260273/most-efficient-way-to-search-the-last-x-lines-of-a-file-in-python) was sadly overlooked. :-)
> If you're on a unix box,
> `os.popen("tail -10 " +
> filepath).readlines()` will probably
> be the fastest way. Otherwise, it
> depends on how robust you want it to
> be. The methods proposed so far will
> all fall down, one way or another.
> For robustness and speed in the most
> common case you probably want
> something like a logarithmic search:
> use file.seek to go to end of the file
> minus 1000 characters, read it in,
> check how many lines it contains, then
> to EOF minus 3000 characters, read in
> 2000 characters, count the lines, then
> EOF minus 7000, read in 4000
> characters, count the lines, etc.
> until you have as many lines as you
> need. But if you know for sure that
> it's always going to be run on files
> with sensible line lengths, you may
> not need that.
>
> You might also find some inspiration
> in the [source code](http://www.koders.com/c/fid8DEE98A42C35A1346FA89C328CC3BF94E25CF377.aspx) for the unix
> `tail` command. |
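A rough sketch of that backwards-seeking idea — read progressively larger blocks from the end of the file until a complete final line is in hand (block sizes and the decoding step are assumptions):

```python
import os

def last_line(path, block=1024):
    """Return the last line of a file by reading blocks from the end."""
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        data = b''
        # pull progressively larger chunks from the end until we have
        # a complete final line (two newlines seen, or the whole file read)
        while pos > 0 and data.count(b'\n') < 2:
            step = min(block, pos)
            pos -= step
            f.seek(pos)
            data = f.read(step) + data
            block *= 2
        lines = data.splitlines()
        return lines[-1].decode() if lines else ''
```

For the sample data in the question, `last_line(path)` would return `'Xpto,30,30,60'`.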
Django equivalent for count and group by | 327,807 | 82 | 2008-11-29T18:19:28Z | 327,987 | 56 | 2008-11-29T20:44:21Z | [
"python",
"django"
] | I have a model that looks like this:
```
class Category(models.Model):
name = models.CharField(max_length=60)
class Item(models.Model):
name = models.CharField(max_length=60)
category = models.ForeignKey(Category)
```
I want select count (just the count) of items for each category, so in SQL it would be as simple as this:
```
select category_id, count(id) from item group by category_id
```
Is there an equivalent of doing this "the Django way"? Or is plain SQL the only option? I am familiar with the *count( )* method in Django, however I don't see how *group by* would fit there. | (**Update**: Full ORM aggregation support is now included in [Django 1.1](http://docs.djangoproject.com/en/dev/releases/1.1/#aggregate-support). True to the below warning about using private APIs, the method documented here no longer works in post-1.1 versions of Django. I haven't dug in to figure out why; if you're on 1.1 or later you should use the real [aggregation API](http://docs.djangoproject.com/en/dev/topics/db/aggregation/) anyway.)
The core aggregation support was already there in 1.0; it's just undocumented, unsupported, and doesn't have a friendly API on top of it yet. But here's how you can use it anyway until 1.1 arrives (at your own risk, and in full knowledge that the query.group\_by attribute is not part of a public API and could change):
```
query_set = Item.objects.extra(select={'count': 'count(1)'},
order_by=['-count']).values('count', 'category')
query_set.query.group_by = ['category_id']
```
If you then iterate over query\_set, each returned value will be a dictionary with a "category" key and a "count" key.
You don't have to order by -count here, that's just included to demonstrate how it's done (it has to be done in the .extra() call, not elsewhere in the queryset construction chain). Also, you could just as well say count(id) instead of count(1), but the latter may be more efficient.
Note also that when setting .query.group\_by, the values must be actual DB column names ('category\_id') not Django field names ('category'). This is because you're tweaking the query internals at a level where everything's in DB terms, not Django terms. |
Django equivalent for count and group by | 327,807 | 82 | 2008-11-29T18:19:28Z | 1,317,837 | 120 | 2009-08-23T05:21:09Z | [
"python",
"django"
] | I have a model that looks like this:
```
class Category(models.Model):
name = models.CharField(max_length=60)
class Item(models.Model):
name = models.CharField(max_length=60)
category = models.ForeignKey(Category)
```
I want select count (just the count) of items for each category, so in SQL it would be as simple as this:
```
select category_id, count(id) from item group by category_id
```
Is there an equivalent of doing this "the Django way"? Or is plain SQL the only option? I am familiar with the *count( )* method in Django, however I don't see how *group by* would fit there. | Here, as I just discovered, is how to do this with the Django 1.1 aggregation API:
```
from django.db.models import Count
theanswer = Item.objects.values('category').annotate(Count('category'))
``` |
Django equivalent for count and group by | 327,807 | 82 | 2008-11-29T18:19:28Z | 1,341,667 | 51 | 2009-08-27T15:02:17Z | [
"python",
"django"
] | I have a model that looks like this:
```
class Category(models.Model):
name = models.CharField(max_length=60)
class Item(models.Model):
name = models.CharField(max_length=60)
category = models.ForeignKey(Category)
```
I want select count (just the count) of items for each category, so in SQL it would be as simple as this:
```
select category_id, count(id) from item group by category_id
```
Is there an equivalent of doing this "the Django way"? Or is plain SQL the only option? I am familiar with the *count( )* method in Django, however I don't see how *group by* would fit there. | Since I was a little confused about how grouping in Django 1.1 works I thought I'd elaborate here on how exactly you go about using it. First, to repeat what Michael said:
> Here, as I just discovered, is how to do this with the Django 1.1 aggregation API:
>
> ```
> from django.db.models import Count
> theanswer = Item.objects.values('category').annotate(Count('category'))
> ```
Note also that you need to `from django.db.models import Count`!
This will select only the categories and then add an annotation called `category__count`. Depending on the default ordering this may be all you need, **but if the default ordering uses a field other than `category` this will not work**. The reason for this is that the fields required for ordering are also selected and make each row unique, so you won't get stuff grouped how you want it. One quick way to fix this is to reset the ordering:
```
Item.objects.values('category').annotate(Count('category')).order_by()
```
This should produce exactly the results you want. To set the name of the annotation you can use:
```
...annotate(mycount = Count('category'))...
```
Then you will have an annotation called `mycount` in the results.
Everything else about grouping was very straightforward to me. Be sure to check out the [Django aggregation API](http://docs.djangoproject.com/en/dev/topics/db/aggregation) for more detailed info. |
Create a List that contain each Line of a File | 328,059 | 13 | 2008-11-29T21:55:53Z | 328,068 | 38 | 2008-11-29T22:03:34Z | [
"python"
] | I'm trying to open a file and create a list with each line read from the file.
```
i=0
List=[""]
for Line in inFile:
List[i]=Line.split(",")
i+=1
print List
```
But this sample code gives me an error because of the `i+=1` saying that `index is out of range`.
What's my problem here? How can I write the code in order to increment my list with every new Line in the InFile? | It's a lot easier than that:
```
List = open("filename.txt").readlines()
```
This returns a list of each line in the file. |
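Note that `readlines()` keeps the trailing newline on each element; if you want them stripped, read the whole file and use `splitlines()` instead (the file here is created inline so the example is self-contained):

```python
# create a small sample file first so the example is runnable as-is
with open('filename.txt', 'w') as f:
    f.write('0,4,7\n0,2,7\n')

with open('filename.txt') as f:
    raw = f.readlines()            # ['0,4,7\n', '0,2,7\n'] -- newlines kept

with open('filename.txt') as f:
    clean = f.read().splitlines()  # ['0,4,7', '0,2,7'] -- newlines stripped
```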
Create a List that contain each Line of a File | 328,059 | 13 | 2008-11-29T21:55:53Z | 31,923,407 | 10 | 2015-08-10T15:22:21Z | [
"python"
] | I'm trying to open a file and create a list with each line read from the file.
```
i=0
List=[""]
for Line in inFile:
List[i]=Line.split(",")
i+=1
print List
```
But this sample code gives me an error because of the `i+=1` saying that `index is out of range`.
What's my problem here? How can I write the code in order to increment my list with every new Line in the InFile? | I did it this way
```
lines_list = open('file.txt').read().splitlines()
```
Every line comes with its end-of-line characters (`\n` or `\r\n`); this way those characters are removed. |
How to make a surface with a transparent background in pygame | 328,061 | 18 | 2008-11-29T21:58:40Z | 328,067 | 20 | 2008-11-29T22:02:45Z | [
"python",
"transparency",
"pygame"
] | Can someone give me some example code that creates a surface with a transparent background in pygame? | This should do it:
```
image = pygame.Surface([640,480], pygame.SRCALPHA, 32)
image = image.convert_alpha()
```
Make sure that the color depth (32) stays explicitly set else this will not work. |
How to make a surface with a transparent background in pygame | 328,061 | 18 | 2008-11-29T21:58:40Z | 353,739 | 8 | 2008-12-09T18:24:22Z | [
"python",
"transparency",
"pygame"
] | Can someone give me some example code that creates a surface with a transparent background in pygame? | You can also give it a colorkey, much like GIF file transparency. This is the most common way to make sprites. The original bitmap has the artwork, and has a certain color as background that will not be drawn, which is the colorkey:
```
surf.set_colorkey((255,0,255))  # Sets the colorkey to that hideous purple
```
Surfaces that use a colorkey instead of alpha are a lot faster to blit since they don't require any blend math. The SDL surface uses a simple bitmask when it has a colorkey set, which blits with practically no overhead. |
How can you determine a point is between two other points on a line segment? | 328,107 | 67 | 2008-11-29T22:37:13Z | 328,110 | 24 | 2008-11-29T22:40:05Z | [
"python",
"math",
"geometry"
] | Let's say you have a two dimensional plane with 2 points (called a and b) on it represented by an x integer and a y integer for each point.
How can you determine if another point c is on the line segment defined by a and b?
I use python most, but examples in any language would be helpful. | Check if the cross product of `b-a` and `c-a` is `0`: that means all the points are collinear. If they are, check if `c`'s coordinates are between `a`'s and `b`'s. Use either the x or the y coordinates, as long as `a` and `b` are separate on that axis (or they're the same on both).
```
def is_on(a, b, c):
"Return true iff point c intersects the line segment from a to b."
# (or the degenerate case that all 3 points are coincident)
return (collinear(a, b, c)
and (within(a.x, c.x, b.x) if a.x != b.x else
within(a.y, c.y, b.y)))
def collinear(a, b, c):
"Return true iff a, b, and c all lie on the same line."
return (b.x - a.x) * (c.y - a.y) == (c.x - a.x) * (b.y - a.y)
def within(p, q, r):
"Return true iff q is between p and r (inclusive)."
return p <= q <= r or r <= q <= p
```
This answer used to be a mess of three updates. The worthwhile info from them: Brian Hayes's [chapter](http://bit-player.org/bph-publications/BeautifulCode-2007-Hayes.pdf) in *Beautiful Code* covers the design space for a collinearity-test function -- useful background. [Vincent's answer](http://stackoverflow.com/questions/328107/how-can-you-determine-a-point-is-between-two-other-points-on-a-line-segment#328337) helped to improve this one. And it was Hayes who suggested testing only one of the x or the y coordinates; originally the code had `and` in place of `if a.x != b.x else`. |
How can you determine a point is between two other points on a line segment? | 328,107 | 67 | 2008-11-29T22:37:13Z | 328,122 | 84 | 2008-11-29T22:46:45Z | [
"python",
"math",
"geometry"
] | Let's say you have a two dimensional plane with 2 points (called a and b) on it represented by an x integer and a y integer for each point.
How can you determine if another point c is on the line segment defined by a and b?
I use python most, but examples in any language would be helpful. | As Darius Bacon says, checking whether the **cross product** of (b-a) and (c-a) is 0 tells you if the points a, b and c are aligned.
But, as you want to know if c is between a and b, you also have to check that the **dot product** of (b-a) and (c-a) is *positive* and is *less* than the square of the distance between a and b.
In non-optimized pseudocode:
```
def isBetween(a, b, c):
crossproduct = (c.y - a.y) * (b.x - a.x) - (c.x - a.x) * (b.y - a.y)
if abs(crossproduct) > epsilon : return False # (or != 0 if using integers)
dotproduct = (c.x - a.x) * (b.x - a.x) + (c.y - a.y)*(b.y - a.y)
if dotproduct < 0 : return False
squaredlengthba = (b.x - a.x)*(b.x - a.x) + (b.y - a.y)*(b.y - a.y)
if dotproduct > squaredlengthba: return False
return True
``` |
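A runnable version of that pseudocode, using plain `(x, y)` tuples and a small epsilon (both assumptions):

```python
def is_between(a, b, c, eps=1e-9):
    """Return True if point c lies on the line segment from a to b."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    # zero cross product: a, b and c are collinear
    cross = (cy - ay) * (bx - ax) - (cx - ax) * (by - ay)
    if abs(cross) > eps:
        return False
    # negative dot product: c falls before a on the line
    dot = (cx - ax) * (bx - ax) + (cy - ay) * (by - ay)
    if dot < 0:
        return False
    # dot product larger than |b - a| squared: c falls past b
    return dot <= (bx - ax) ** 2 + (by - ay) ** 2

print(is_between((0, 0), (4, 4), (2, 2)))  # True
print(is_between((0, 0), (4, 4), (5, 5)))  # False: collinear, but past b
print(is_between((0, 0), (4, 4), (2, 3)))  # False: not collinear
```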
How can you determine a point is between two other points on a line segment? | 328,107 | 67 | 2008-11-29T22:37:13Z | 328,193 | 26 | 2008-11-29T23:39:34Z | [
"python",
"math",
"geometry"
] | Let's say you have a two dimensional plane with 2 points (called a and b) on it represented by an x integer and a y integer for each point.
How can you determine if another point c is on the line segment defined by a and b?
I use python most, but examples in any language would be helpful. | Here's how I'd do it:
```
from math import sqrt

def distance(a, b):
    return sqrt((a.x - b.x)**2 + (a.y - b.y)**2)

def is_between(a, c, b):
    # use a tolerance: the square roots are floats, so exact equality can fail
    return abs(distance(a, c) + distance(c, b) - distance(a, b)) < 1e-9
``` |
Extracting text from HTML file using Python | 328,356 | 128 | 2008-11-30T02:28:04Z | 328,389 | 69 | 2008-11-30T03:23:58Z | [
"python",
"html",
"text",
"html-content-extraction"
] | I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.
I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect ' in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.
**Update** `html2text` looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
---
Related questions:
* [Filter out HTML tags and resolve entities in python](http://stackoverflow.com/questions/37486/filter-out-html-tags-and-resolve-entities-in-python)
* [Convert XML/HTML Entities into Unicode String in Python](http://stackoverflow.com/questions/57708/convert-xmlhtml-entities-into-unicode-string-in-python) | [html2text](http://www.aaronsw.com/2002/html2text/) is a Python program that does a pretty good job at this. |
Extracting text from HTML file using Python | 328,356 | 128 | 2008-11-30T02:28:04Z | 1,463,802 | 8 | 2009-09-23T03:21:58Z | [
"python",
"html",
"text",
"html-content-extraction"
] | I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.
I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect ' in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.
**Update** `html2text` looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
---
Related questions:
* [Filter out HTML tags and resolve entities in python](http://stackoverflow.com/questions/37486/filter-out-html-tags-and-resolve-entities-in-python)
* [Convert XML/HTML Entities into Unicode String in Python](http://stackoverflow.com/questions/57708/convert-xmlhtml-entities-into-unicode-string-in-python) | You can use html2text method in the stripogram library also.
```
from stripogram import html2text
text = html2text(your_html_string)
```
To install stripogram, run `sudo easy_install stripogram`.
Extracting text from HTML file using Python | 328,356 | 128 | 2008-11-30T02:28:04Z | 3,987,802 | 46 | 2010-10-21T13:14:38Z | [
"python",
"html",
"text",
"html-content-extraction"
] | I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.
I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect ' in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.
**Update** `html2text` looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
---
Related questions:
* [Filter out HTML tags and resolve entities in python](http://stackoverflow.com/questions/37486/filter-out-html-tags-and-resolve-entities-in-python)
* [Convert XML/HTML Entities into Unicode String in Python](http://stackoverflow.com/questions/57708/convert-xmlhtml-entities-into-unicode-string-in-python) | Found myself facing just the same problem today. I wrote a very simple HTML parser to strip incoming content of all markups, returning the remaining text with only a minimum of formatting.
```
from HTMLParser import HTMLParser
from re import sub
from sys import stderr
from traceback import print_exc
class _DeHTMLParser(HTMLParser):
def __init__(self):
HTMLParser.__init__(self)
self.__text = []
def handle_data(self, data):
text = data.strip()
if len(text) > 0:
text = sub('[ \t\r\n]+', ' ', text)
self.__text.append(text + ' ')
def handle_starttag(self, tag, attrs):
if tag == 'p':
self.__text.append('\n\n')
elif tag == 'br':
self.__text.append('\n')
def handle_startendtag(self, tag, attrs):
if tag == 'br':
self.__text.append('\n\n')
def text(self):
return ''.join(self.__text).strip()
def dehtml(text):
try:
parser = _DeHTMLParser()
parser.feed(text)
parser.close()
return parser.text()
except:
print_exc(file=stderr)
return text
def main():
text = r'''
<html>
<body>
<b>Project:</b> DeHTML<br>
<b>Description</b>:<br>
This small script is intended to allow conversion from HTML markup to
plain text.
</body>
</html>
'''
print(dehtml(text))
if __name__ == '__main__':
main()
``` |
Extracting text from HTML file using Python | 328,356 | 128 | 2008-11-30T02:28:04Z | 8,201,491 | 88 | 2011-11-20T12:34:09Z | [
"python",
"html",
"text",
"html-content-extraction"
] | I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.
I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect ' in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.
**Update** `html2text` looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
---
Related questions:
* [Filter out HTML tags and resolve entities in python](http://stackoverflow.com/questions/37486/filter-out-html-tags-and-resolve-entities-in-python)
* [Convert XML/HTML Entities into Unicode String in Python](http://stackoverflow.com/questions/57708/convert-xmlhtml-entities-into-unicode-string-in-python) | **NOTE:** NTLK no longer supports `clean_html` function
Original answer below.
---
Use [NLTK](https://pypi.python.org/pypi/nltk)
I wasted my 4-5 hours fixing the issues with html2text. Luckily i could encounter NLTK.
It works magically.
```
import nltk
from urllib import urlopen
url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
raw = nltk.clean_html(html)
print(raw)
``` |
Extracting text from HTML file using Python | 328,356 | 128 | 2008-11-30T02:28:04Z | 16,423,634 | 10 | 2013-05-07T16:04:21Z | [
"python",
"html",
"text",
"html-content-extraction"
] | I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.
I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect ' in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.
**Update** `html2text` looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
---
Related questions:
* [Filter out HTML tags and resolve entities in python](http://stackoverflow.com/questions/37486/filter-out-html-tags-and-resolve-entities-in-python)
* [Convert XML/HTML Entities into Unicode String in Python](http://stackoverflow.com/questions/57708/convert-xmlhtml-entities-into-unicode-string-in-python) | Here is a version of xperroni's answer which is a bit more complete. It skips script and style sections and translates charrefs (e.g., ') and HTML entities (e.g., &).
It also includes a trivial plain-text-to-html inverse converter.
```
"""
HTML <-> text conversions.
"""
from HTMLParser import HTMLParser, HTMLParseError
from htmlentitydefs import name2codepoint
import re
class _HTMLToText(HTMLParser):
def __init__(self):
HTMLParser.__init__(self)
self._buf = []
self.hide_output = False
def handle_starttag(self, tag, attrs):
if tag in ('p', 'br') and not self.hide_output:
self._buf.append('\n')
elif tag in ('script', 'style'):
self.hide_output = True
def handle_startendtag(self, tag, attrs):
if tag == 'br':
self._buf.append('\n')
def handle_endtag(self, tag):
if tag == 'p':
self._buf.append('\n')
elif tag in ('script', 'style'):
self.hide_output = False
def handle_data(self, text):
if text and not self.hide_output:
self._buf.append(re.sub(r'\s+', ' ', text))
def handle_entityref(self, name):
if name in name2codepoint and not self.hide_output:
c = unichr(name2codepoint[name])
self._buf.append(c)
def handle_charref(self, name):
if not self.hide_output:
n = int(name[1:], 16) if name.startswith('x') else int(name)
self._buf.append(unichr(n))
def get_text(self):
return re.sub(r' +', ' ', ''.join(self._buf))
def html_to_text(html):
"""
Given a piece of HTML, return the plain text it contains.
This handles entities and char refs, but not javascript and stylesheets.
"""
parser = _HTMLToText()
try:
parser.feed(html)
parser.close()
except HTMLParseError:
pass
return parser.get_text()
def text_to_html(text):
"""
Convert the given text to html, wrapping what looks like URLs with <a> tags,
converting newlines to <br> tags and converting confusing chars into html
entities.
"""
def f(mo):
t = mo.group()
if len(t) == 1:
            return {'&':'&amp;', "'":'&#39;', '"':'&quot;', '<':'&lt;', '>':'&gt;'}.get(t)
return '<a href="%s">%s</a>' % (t, t)
return re.sub(r'https?://[^] ()"\';]+|[&\'"<>]', f, text)
``` |
Extracting text from HTML file using Python | 328,356 | 128 | 2008-11-30T02:28:04Z | 24,618,186 | 44 | 2014-07-07T19:18:20Z | [
"python",
"html",
"text",
"html-content-extraction"
] | I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.
I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect ' in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.
**Update** `html2text` looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
---
Related questions:
* [Filter out HTML tags and resolve entities in python](http://stackoverflow.com/questions/37486/filter-out-html-tags-and-resolve-entities-in-python)
* [Convert XML/HTML Entities into Unicode String in Python](http://stackoverflow.com/questions/57708/convert-xmlhtml-entities-into-unicode-string-in-python) | The best piece of code I found for extracting text without getting javascript or not wanted things :
```
import urllib
from bs4 import BeautifulSoup
url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)
# kill all script and style elements
for script in soup(["script", "style"]):
script.extract() # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)
print(text)
```
You just have to install BeautifulSoup first:
```
pip install beautifulsoup4
``` |
Printing all instances of a class | 328,851 | 31 | 2008-11-30T13:23:48Z | 328,882 | 53 | 2008-11-30T13:56:48Z | [
"python",
"class"
] | With a class in Python, how do I define a function to print every single instance of the class in a format defined in the function? | I see two options in this case:
## Garbage collector
```
import gc
for obj in gc.get_objects():
if isinstance(obj, some_class):
        do_something(obj)
```
This has the disadvantage of being very slow when you have a lot of objects, but works with types over which you have no control.
## Use a mixin and weakrefs
```
from collections import defaultdict
import weakref
class KeepRefs(object):
__refs__ = defaultdict(list)
def __init__(self):
self.__refs__[self.__class__].append(weakref.ref(self))
@classmethod
def get_instances(cls):
for inst_ref in cls.__refs__[cls]:
inst = inst_ref()
if inst is not None:
yield inst
class X(KeepRefs):
def __init__(self, name):
super(X, self).__init__()
self.name = name
x = X("x")
y = X("y")
for r in X.get_instances():
print r.name
del y
for r in X.get_instances():
print r.name
```
In this case, all the references get stored as a weak reference in a list. If you create and delete a lot of instances frequently, you should clean up the list of weakrefs after iteration, otherwise there's going to be a lot of cruft.
Another problem in this case is that you have to make sure to call the base class constructor. You could also override `__new__`, but only the `__new__` method of the first base class is used on instantiation. This also works only on types that are under your control.
**Edit**: The method for printing all instances according to a specific format is left as an exercise, but it's basically just a variation on the `for`-loops. |
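For completeness, here is one sketch of that exercise as an extra classmethod on the mixin (the format-string parameter is an illustrative choice, not part of the original answer):

```python
from collections import defaultdict
import weakref

class KeepRefs(object):
    __refs__ = defaultdict(list)

    def __init__(self):
        self.__refs__[self.__class__].append(weakref.ref(self))

    @classmethod
    def get_instances(cls):
        for inst_ref in cls.__refs__[cls]:
            inst = inst_ref()
            if inst is not None:
                yield inst

    @classmethod
    def print_instances(cls, fmt="{0!r}"):
        # The "variation on the for-loops": format each live instance.
        for inst in cls.get_instances():
            print(fmt.format(inst))

class X(KeepRefs):
    def __init__(self, name):
        super(X, self).__init__()
        self.name = name

x, y = X("x"), X("y")
X.print_instances("X named {0.name}")  # prints "X named x" then "X named y"
```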
Printing all instances of a class | 328,851 | 31 | 2008-11-30T13:23:48Z | 9,460,070 | 10 | 2012-02-27T04:26:56Z | [
"python",
"class"
] | With a class in Python, how do I define a function to print every single instance of the class in a format defined in the function? | You'll want to create a static list on your class, and add a `weakref` to each instance so the garbage collector can clean up your instances when they're no longer needed.
```
import weakref
class A:
instances = []
def __init__(self, name=None):
self.__class__.instances.append(weakref.proxy(self))
self.name = name
a1 = A('a1')
a2 = A('a2')
a3 = A('a3')
a4 = A('a4')
for instance in A.instances:
print(instance.name)
``` |
Pure Python rational numbers module for 2.5 | 329,333 | 6 | 2008-11-30T20:42:50Z | 329,338 | 8 | 2008-11-30T20:47:25Z | [
"python",
"rational-numbers"
] | Has anybody seen such a thing? Small self-sufficient modules are preferred. | [SymPy](http://code.google.com/p/sympy/) is a symbolic maths library written entirely in Python and has full support for rational numbers. From the [tutorial](http://docs.sympy.org/tutorial.html):
```
>>> from sympy import *
>>> a = Rational(1,2)
>>> a
1/2
>>> a*2
1
>>> Rational(2)**50/Rational(10)**50
1/88817841970012523233890533447265625
```
There is also GMP for Python ([GMPY](http://gmpy.sourceforge.net/)) which, while not pure Python, is probably more efficient. |
Pure Python rational numbers module for 2.5 | 329,333 | 6 | 2008-11-30T20:42:50Z | 329,453 | 9 | 2008-11-30T21:57:16Z | [
"python",
"rational-numbers"
] | Has anybody seen such a thing? Small self-sufficient modules are preferred. | The [fractions module](http://docs.python.org/library/fractions.html) from 2.6 can be ripped out if necessary. Grab fractions.py, numbers.py, and abc.py; all pure python modules.
You can get the single files from here (2.6 branch, 2.7 does not work):
<http://hg.python.org/cpython/branches> |
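Once the module is in place (or on 2.6+), usage mirrors the SymPy example in the other answer:

```python
from fractions import Fraction

a = Fraction(1, 2)
print(a)       # 1/2
print(a * 2)   # 1
print(Fraction(2) ** 50 / Fraction(10) ** 50)
# 1/88817841970012523233890533447265625
```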
How to find all built in libraries in Python | 329,498 | 5 | 2008-11-30T22:21:34Z | 329,510 | 16 | 2008-11-30T22:26:58Z | [
"python"
] | I've recently started with Python, and am enjoying the "batteries included" design. I've already found out I can import time, math, re, urllib, but don't know how to tell whether something is built in rather than needing to write it from scratch.
What's included, and where can I get other good quality libraries from? | Firstly, the [python library reference](http://www.python.org/doc/2.5.2/lib/lib.html) gives a blow-by-blow account of what's actually included. And the [global module index](http://docs.python.org/modindex.html) contains a neat, alphabetized summary of those same modules. If you have dependencies on a library, you can trivially test for its presence with a construct like:
```
try:
import foobar
except ImportError:
print 'No foobar module'
```
If you do this on startup for modules not necessarily present in the distribution you can bail with a sensible diagnostic.
The [Python Package Index](http://pypi.python.org/pypi) plays a role similar to that of CPAN in the perl world and has a list of many third party modules of one sort or another. Browsing and searching this should give you a feel for what's about. There are also utilities such as [Yolk](http://pypi.python.org/pypi/yolk) which allow you to query the Python Package Index and the installed packages on Python.
Other good online Python resources are:
* [www.python.org](http://www.python.org)
* The [comp.lang.python](http://groups.google.com/group/comp.lang.python) newsgroup - this is still very active.
* Various of the [items linked off](http://www.python.org/links/) the Python home page.
* Various home pages and blogs by python luminaries such as [The Daily Python URL](http://www.pythonware.com/daily/), [effbot.org](http://www.effbot.org/), [The Python Cookbook](http://code.activestate.com/recipes/langs/python/), [Ian Bicking's blog](http://blog.ianbicking.org/) (the guy responsible for SQLObject), and the [Many blogs and sites off planet.python.org.](http://planet.python.org/) |
How to find all built in libraries in Python | 329,498 | 5 | 2008-11-30T22:21:34Z | 329,518 | 11 | 2008-11-30T22:34:52Z | [
"python"
] | I've recently started with Python, and am enjoying the "batteries included" design. I've already found out I can import time, math, re, urllib, but don't know how to tell whether something is built in rather than needing to write it from scratch.
What's included, and where can I get other good quality libraries from? | run
```
pydoc -p 8080
```
and point your browser to <http://localhost:8080/>
You'll see everything that's installed and can spend lots of time discovering new things. :) |
Python CMS for my own website? | 329,706 | 5 | 2008-12-01T00:53:21Z | 329,718 | 9 | 2008-12-01T01:01:11Z | [
"python",
"django",
"content-management-system",
"web-testing"
] | I'm an accomplished web and database developer, and I'm interested in redesigning my own website.
I have the following content goals:
* Support a book I'm writing
* Move my blog to my own site (from blogger.com)
* Publish my articles (more persistent content than a blog)
* Host a forum with light use
* Embed slide sharing and screencasts
I have the following technology goals for implementing my site:
* Learn more Python and Django
* Leverage a CMS solution such as Pinax or Django-CMS
* Utilize a CSS framework, such as Blueprint or YUI
* I develop on a Mac OS X platform
* I'm comfortable developing in a CLI, but I'd like to practice Eclipse or NetBeans
* I'd like to employ testing during development
* Please, no Microsoft languages or tools
Any suggestions for technology choices that support these goals?
**Edit:** Apologies if the question above was too unclear or general. What I'm asking for is if folks have had experience doing a similar modest website, what would be recommendations for tools, frameworks, or techniques outside of those I listed?
* Is there another Python CMS that I should consider besides the two I listed? E.g. there may be a great Python solution, but it isn't built on top of Django.
* Perhaps all current Python CMS packages are too "alpha," and I'd be better off writing my own from scratch? Although I am up to it, I'd rather leverage an existing package.
* Given this kind of project, would you deploy a CMS with built-in (or plug-in) support for blogs, forums, etc. or would you rather design a simpler website and embed the more complex content management using other services, relying on your own website only as a dumb proxy or portal. E.g. one can re-publish Blogger.com content using the Google Gdata API. One can embed re-branded Nabble.com archives into any website, which may provide forum/mailinglist functionality more easily than running the forum itself.
* Sometimes a CMS package has its own CSS integrated, and using another CSS framework would be redundant or otherwise make no sense. Yes? No?
* Are there plugins for Django in Eclipse or Netbeans? I understand there's a pretty nice environment for Rails development in NetBeans, and I've read some people wish longingly for something similar for Django, but I don't know if these wishes have since been realized.
* What are some current preferred tools for unit and functional testing a Django application? Are these integrated with Eclipse or Netbeans? | 1. **Is there another Python CMS?** Yes, there is. Are they better than Django? From some perspective, yes. Should you change? No. Learn Django, it's as good as or better than most.
2. **Perhaps all current Python CMS packages are too "alpha."** A shocking statement, IMO. However, if you think you can do better, by all means, jump in. Bear in mind that it's a huge amount of work, and your goal does not say "do a huge amount of work to invent Yet Another CMS Framework."
3. **Would you deploy a CMS with built-in (or plug-in) support for blogs, forums, etc.** I don't completely get this. There's content (i.e., blog postings, forum postings) and there's a web application (i.e., a blog site with forum comments). They're different beasts, web applications depend on CMS. A CMS can (generally) handle any kind of content; therefore, blogs aren't usually described as "plug-ins", they're just content. Maybe you want a pre-built content model for blogs or something? Not sure what your question really is.
* **relying on your own website [to] re-publish Blogger.com content**. Hard to know what to say here. It matches your goals to simply proxy or rebrand Nabble. But it doesn't match your other goals, because you won't learn very much Django, CMS, or any other technology. Since your first goal and your technology list don't match up well, I have no idea what you're planning to do: learn Django, or work on your book and move your blog. Not sure what your question really is.
4. **Sometimes a CMS package has its own CSS integrated, and using another CSS framework would be redundant or otherwise make no sense.** Correct. Not sure what your question really is.
5. **Are there plugins for Django in Eclipse or Netbeans?** Is this a "let me google that for you" question? <http://pydev.sourceforge.net/> is the Eclipse plug-in for Python. <http://wiki.netbeans.org/Python> is information on the Python plug-in for Net Beans. Django is just Python, so a Django-specific plug-in doesn't really mean much. [Komodo Edit](http://www.activestate.com/store/download.aspx?prdGUID=20f4ed15-6684-4118-a78b-d37ff4058c5f) knows Django template syntax and can do some syntax highlighting. That's cool.
6. **What are some current preferred tools for unit and functional testing a Django application?** Python has unittest tools. They're part of Python. Not part of an IDE. Django has unit testing tools. They're part of Django -- which is just Python -- not part of an IDE. You simply run them and they produce a text log of what happened. |
In Python, is there a concise way to use a list comprehension with multiple iterators? | 329,886 | 6 | 2008-12-01T02:44:03Z | 329,904 | 14 | 2008-12-01T02:58:14Z | [
"python",
"iterator",
"list-comprehension"
] | Basically, I would like to build a list comprehension over the "cartesian product" of two iterators. Think about the following [Haskell code](http://www.haskell.org/haskellwiki/List_comprehension):
```
[(i,j) | i <- [1,2], j <- [1..4]]
```
which yields
```
[(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(2,3),(2,4)]
```
Can I obtain a similar behavior in Python in a concise way? | Are you asking about this?
```
[ (i,j) for i in range(1,3) for j in range(1,5) ]
``` |
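Evaluating that comprehension reproduces the Haskell result from the question:

```python
pairs = [(i, j) for i in range(1, 3) for j in range(1, 5)]
print(pairs)
# [(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4)]
```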
In Python, is there a concise way to use a list comprehension with multiple iterators? | 329,886 | 6 | 2008-12-01T02:44:03Z | 329,978 | 7 | 2008-12-01T03:40:28Z | [
"python",
"iterator",
"list-comprehension"
] | Basically, I would like to build a list comprehension over the "cartesian product" of two iterators. Think about the following [Haskell code](http://www.haskell.org/haskellwiki/List_comprehension):
```
[(i,j) | i <- [1,2], j <- [1..4]]
```
which yields
```
[(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(2,3),(2,4)]
```
Can I obtain a similar behavior in Python in a concise way? | Cartesian product is in the [itertools module](http://docs.python.org/library/itertools.html#itertools.product) (in 2.6).
```
>>> import itertools
>>> list(itertools.product(range(1, 3), range(1, 5)))
[(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4)]
``` |
Python crypt module -- what's the correct use of salts? | 329,956 | 9 | 2008-12-01T03:31:45Z | 330,039 | 7 | 2008-12-01T04:20:23Z | [
"python",
"linux",
"cryptography",
"crypt"
] | First, context: I'm trying to create a command-line-based tool (Linux) that
requires login. Accounts on this tool have nothing to do with
system-level accounts -- none of this looks at /etc/passwd.
I am planning to store user accounts in a text file using the same format (roughly) as /etc/passwd.
Despite not using the system-level password files, using crypt seemed
to be a good practice to use, as opposed to storing passwords in
cleartext. (While crypt is certainly better than storing passwords in
cleartext, I'm open to other ways of doing this.)
My crypt knowledge is based on this:
<https://docs.python.org/2/library/crypt.html>
The documentation seems to ask for something that isn't possible: "it
is recommended to use the full crypted password as salt when checking
for a password."
Huh? If I'm creating the crypted password (as in, when creating a user
record) how can I use the crypted password as a salt? It
doesn't exist yet. (I'm assuming that you must use the same salt for creating and checking a password.)
I've tried using the plaintext password as a salt. This does
work, but has two problems; one easily overcome, and one serious:
1) The first two letters of the plaintext password are included in the
crypted password. You can fix this by not writing the first two
characters to the file:
```
user_record = '%s:%s:%s' % (user_name, crypted_pw[2:], user_type)
```
2) By using the plaintext password as the salt, you would seem to be
reducing the amount of entropy in the system. Possibly I'm
misunderstanding the purpose of the salt.
The best practice I've been able to derive is to use the first two
characters from the username as the salt. Would this be appropriate,
or is there something I've missed that makes that a bad move?
My understanding of a salt is that it prevents pre-computing password
hashes from a dictionary. I could use a standard salt for all
passwords (such as my initials, "JS,") but that seems to be less of a
burden for an attacker than using two characters from each user's username. | Python's crypt() is a wrapper for the system's crypt() function. From the Linux crypt() man page:
```
char *crypt(const char *key, const char *salt);
key is a user's typed password.
salt is a two-character string chosen from the set [a-zA-Z0-9./].
This string is used to perturb the algorithm in one of 4096
different ways.
```
Emphasis is on "*two-character* string". Now, if you look at crypt()'s behavior in Python:
```
>>> crypt.crypt("Hello", "World")
'Wo5pEi/H5/mxU'
>>> crypt.crypt("Hello", "ABCDE")
'AB/uOsC7P93EI'
```
you discover that the first two characters of the result always coincide with the first two characters of the original salt, which indeed form the true two-character-salt itself.
That is, the result of crypt() has the form 2char-salt + encrypted-pass.
Hence, there is no difference in the result if instead of passing the two-character-salt or the original many-characters-salt you pass the whole encrypted password.
Note: the set [a-zA-Z0-9./] contains 64 characters, and 64\*64=4096. Here's how *two* characters relate to "*4096* different ways". |
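To see why the docs recommend passing the full crypted password as the salt when *checking*, here is a toy stand-in (an assumption for illustration only: `hashlib` replaces the real DES-based `crypt()`, but the salt handling it models is the same):

```python
import hashlib

def toy_crypt(word, salt):
    # Like crypt.crypt(): only the first two salt characters matter,
    # and they are prepended to the result.
    salt = salt[:2]
    digest = hashlib.sha256((salt + word).encode()).hexdigest()[:11]
    return salt + digest

stored = toy_crypt("Hello", "Wo")            # creating: pick any two-char salt
assert toy_crypt("Hello", stored) == stored  # checking: stored value works as salt
assert toy_crypt("wrong", stored) != stored  # wrong password fails
```

Because the result starts with the two-character salt, the stored value itself is always a valid salt argument for the check.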
Replace textarea with rich text editor in Django Admin? | 329,963 | 33 | 2008-12-01T03:32:46Z | 330,087 | 19 | 2008-12-01T05:12:28Z | [
"python",
"django",
"admin"
] | I would like to know the best way to replace a standard textarea field with a rich text editor in Django Admin? | There's an [add-on Django application](http://pypi.python.org/pypi/django-tinymce/) to provide [TinyMCE](http://tinymce.moxiecode.com/) support for Django admin forms without having to muck around with admin templates or Django newform internals. |
Replace textarea with rich text editor in Django Admin? | 329,963 | 33 | 2008-12-01T03:32:46Z | 337,114 | 12 | 2008-12-03T13:34:31Z | [
"python",
"django",
"admin"
] | I would like to know the best way to replace a standard textarea field with a rich text editor in Django Admin? | Take a look on this [snippet](http://www.djangosnippets.org/snippets/1035/) - basic idea is to include custom JS in your admin definitions which will replace standard text areas with rich-text editor.
For jQuery/FCKEditor such JS could look like that:
```
$(document).ready(function() {
$("textarea").each(function(n, obj) {
fck = new FCKeditor(obj.id) ;
fck.BasePath = "/admin-media/fckeditor/" ;
fck.ReplaceTextarea() ;
});
});
``` |
Replace textarea with rich text editor in Django Admin? | 329,963 | 33 | 2008-12-01T03:32:46Z | 346,257 | 9 | 2008-12-06T13:14:21Z | [
"python",
"django",
"admin"
] | I would like to know the best way to replace a standard textarea field with a rich text editor in Django Admin? | I'd say: define your own ModelAdmin class and overwrite the widget used for particular field, like:
```
class ArticleAdminModelForm(forms.ModelForm):
description = forms.CharField(widget=widgets.AdminWYMEditor)
class Meta:
model = models.Article
```
(AdminWYMEditor is a `forms.Textarea` subclass that adds WYMEditor with configuration specific to Django admin app).
See [this blog post by Jannis Leidel](http://web.archive.org/web/20101123194618/http://jannisleidel.com/2008/11/wysiwym-editor-widget-django-admin-interface/) to see how this widget can be implemented. |
How would one make Python objects persistent in a web-app? | 330,367 | 6 | 2008-12-01T09:26:07Z | 330,574 | 8 | 2008-12-01T11:29:48Z | [
"python",
"web-applications",
"concurrency",
"persistence"
] | I'm writing a reasonably complex web application. The Python backend runs an algorithm whose state depends on data stored in several interrelated database tables which does not change often, plus user specific data which does change often. The algorithm's per-user state undergoes many small changes as a user works with the application. This algorithm is used often during each user's work to make certain important decisions.
For performance reasons, re-initializing the state on every request from the (semi-normalized) database data quickly becomes non-feasible. It would be highly preferable, for example, to cache the state's Python object in some way so that it can simply be used and/or updated whenever necessary. However, since this is a web application, there several processes serving requests, so using a global variable is out of the question.
I've tried serializing the relevant object (via pickle) and saving the serialized data to the DB, and am now experimenting with caching the serialized data via memcached. However, this still has the significant overhead of serializing and deserializing the object often.
I've looked at shared memory solutions but the only relevant thing I've found is [POSH](http://poshmodule.sourceforge.net/). However POSH doesn't seem to be widely used and I don't feel easy integrating such an experimental component into my application.
I need some advice! This is my first shot at developing a web application, so I'm hoping this is a common enough issue that there are well-known solutions to such problems. At this point solutions which assume the Python back-end is running on a single server would be sufficient, but extra points for solutions which scale to multiple servers as well :)
Notes:
* I have this application working, currently live and with active users. I started out without doing any premature optimization, and then optimized as needed. I've done the measuring and testing to make sure the above-mentioned issue is the actual bottleneck. I'm pretty sure I could squeeze more performance out of the current setup, but I wanted to ask if there's a better way.
* The setup itself is still a work in progress; assume that the system's architecture can be whatever suits your solution. | Be cautious of premature optimization.
Addition: The "Python backend runs an algorithm whose state..." is the session in the web framework. That's it. Let the Django framework maintain session state in cache. Period.
"The algorithm's per-user state undergoes many small changes as a user works with the application." Most web frameworks offer a cached session object. Often it is very high performance. See Django's [session documentation](http://docs.djangoproject.com/en/dev/topics/http/sessions/#topics-http-sessions) for this.
Advice. [Revised]
It appears you have something that works. Leverage to learn your framework, learn the tools, and learn what knobs you can turn without breaking a sweat. Specifically, using session state.
Second, fiddle with caching, session management, and things that are easy to adjust, and see if you have enough speed. Find out whether MySQL socket or named pipe is faster by trying them out. These are the no-programming optimizations.
Third, measure performance to find your actual bottleneck. Be prepared to provide (and defend) the measurements as fine-grained enough to be useful and stable enough to providing meaningful comparison of alternatives.
For example, show the performance difference between persistent sessions and cached sessions. |
How to quickly parse a list of strings | 330,900 | 14 | 2008-12-01T13:51:42Z | 330,908 | 7 | 2008-12-01T13:53:57Z | [
"python"
] | If I want to split a list of words separated by a delimiter character, I can use
```
>>> 'abc,foo,bar'.split(',')
['abc', 'foo', 'bar']
```
But how to easily and quickly do the same thing if I also want to handle quoted-strings which can contain the delimiter character ?
```
In: 'abc,"a string, with a comma","another, one"'
Out: ['abc', 'a string, with a comma', 'another, one']
```
Related question: [How can i parse a comma delimited string into a list (caveat)?](http://stackoverflow.com/questions/118096/how-can-i-parse-a-comma-delimited-string-into-a-list-caveat) | The [CSV module](http://www.python.org/doc/2.5.2/lib/module-csv.html) should be able to do that for you |
How to quickly parse a list of strings | 330,900 | 14 | 2008-12-01T13:51:42Z | 330,924 | 32 | 2008-12-01T13:59:22Z | [
"python"
] | If I want to split a list of words separated by a delimiter character, I can use
```
>>> 'abc,foo,bar'.split(',')
['abc', 'foo', 'bar']
```
But how to easily and quickly do the same thing if I also want to handle quoted-strings which can contain the delimiter character ?
```
In: 'abc,"a string, with a comma","another, one"'
Out: ['abc', 'a string, with a comma', 'another, one']
```
Related question: [How can i parse a comma delimited string into a list (caveat)?](http://stackoverflow.com/questions/118096/how-can-i-parse-a-comma-delimited-string-into-a-list-caveat) | ```
import csv
input = ['abc,"a string, with a comma","another, one"']
parser = csv.reader(input)
for fields in parser:
for i,f in enumerate(fields):
print i,f # in Python 3 and up, print is a function; use: print(i,f)
```
Result:
```
0 abc
1 a string, with a comma
2 another, one
``` |
Python style | 331,767 | 3 | 2008-12-01T18:40:32Z | 331,776 | 17 | 2008-12-01T18:44:39Z | [
"python",
"coding-style"
] | Simple beginner question:
I've created a small python script to toggle between two files I'm using for testing.
My question is, what is a good python format style for the following code:
```
import filecmp
import shutil
local = "local.txt"
remote = "remote.txt"
config_file = "C:\some\path\file.txt"
shutil.copyfile( remote if( filecmp.cmp(local, config_file ) ) else local, config_file )
```
Or
```
shutil.copyfile( remote
if( filecmp.cmp(local, config_file ) )
else local,
config_file )
```
Or
```
tocopy = remote if( filecmp.cmp( local, config_file ) ) else local
shutil.copyfile( tocopy, config_file )
```
Or what?
Also, what is the preferred way to name variables in Python for many-word names: is it "to\_copy", "tocopy", "toCopy", or "ToCopy"?
Thanks. | For the conditional statement, I would probably go with:
```
if filecmp.cmp(local, config_file):
shutil.copyfile(remote, config_file)
else:
shutil.copyfile(local, config_file)
```
There's little need to use the inline `y if x else z` in this case, since the surrounding code is simple enough. |
How to determine if a page is being redirected | 331,855 | 4 | 2008-12-01T19:10:57Z | 331,890 | 9 | 2008-12-01T19:23:33Z | [
"python",
"http",
"http-headers"
] | I need to check whether a page is being redirected or not without actually downloading the content. I just need the final URL. What's the best way of doing this in Python?
Thanks! | If you specifically want to avoid downloading the content, you'll need to use the HEAD request method. I believe the `urllib` and `urllib2` libraries do not support HEAD requests, so you'll have to use the lower-level `httplib` library:
```
import httplib
h = httplib.HTTPConnection('www.example.com')
h.request('HEAD', '/')
response = h.getresponse()
# Check for 30x status code
if 300 <= response.status < 400:
# It's a redirect
location = response.getheader('Location')
``` |
Converting a PDF to a series of images with Python | 331,918 | 32 | 2008-12-01T19:31:07Z | 331,924 | 16 | 2008-12-01T19:33:44Z | [
"python",
"pdf",
"imagemagick",
"jpeg",
"python-imaging-library"
] | I'm attempting to use Python to convert a multi-page PDF into a series of JPEGs. I can split the PDF up into individual pages easily enough with available tools, but I haven't been able to find anything that can convert PDFs to images.
PIL does not work, as it can't read PDFs. The two options I've found are using either GhostScript or ImageMagick through the shell. This is not a viable option for me, since this program needs to be cross-platform, and I can't be sure either of those programs will be available on the machines it will be installed and used on.
Are there any Python libraries out there that can do this? | ImageMagick has [Python bindings](http://www.imagemagick.org/download/python/). |
Difference between class foo and class foo(object) in Python | 332,255 | 28 | 2008-12-01T21:17:40Z | 332,815 | 34 | 2008-12-02T01:59:21Z | [
"python"
] | I know `class foo(object)` is an old school way of defining a class. But I would like to understand in more detail the difference between these two. | Prior to python 2.2 there were essentially two different types of class: those defined by C extensions and C coded builtins (types) and those defined by python class statements (classes). This led to problems when you wanted to mix python-types and builtin types. The most common reason for this is subclassing. If you wanted to subclass the list type in python code, you were out of luck, and so various workarounds were used instead, such as subclassing the pure python implementation of lists (in the UserList module).
This was fairly ugly, so in 2.2 there was a [move](http://www.python.org/dev/peps/pep-0252/) to unify python and builtin types, including the ability to [inherit](http://www.python.org/dev/peps/pep-0253/) from them. The result is "new style classes". These do have some incompatible differences from old-style classes, however, so for backward compatibility the bare class syntax creates an old-style class, while the new behaviour is obtained by inheriting from object. The most visible behaviour differences are:
* The method resolution order (MRO). There is a difference in behaviour in diamond-shaped inheritance hierarchies (where A inherits from both B and C, which both inherit from a common base class D). Previously, methods were looked up left-to-right, depth first (i.e. A B D C D). However, if C overloads a member of D, it won't be used by A (as it finds D's implementation first). This is bad for various styles of programming (e.g. using mixin classes). New style classes will treat this situation as A B C D (look at the `__mro__` attribute of a class to see the order it will search)
* The `__new__` constructor is added, which allows the class to act as a factory method, rather than return a new instance of the class. Useful for returning particular subclasses, or reusing immutable objects rather than creating new ones without having to change the creation interface.
* [Descriptors](https://docs.python.org/2/reference/datamodel.html#implementing-descriptors). These are the feature behind such things as properties, classmethods, staticmethods etc. Essentially, they provide a way to control what happens when you access or set a particular attribute on a (new style) class. |
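A minimal diamond makes the new-style MRO point concrete (inheriting from `object` is what triggers the new-style behaviour on Python 2):

```python
class D(object):
    def who(self):
        return "D"

class B(D):
    pass

class C(D):
    def who(self):
        return "C"  # overloads D's method

class A(B, C):
    pass

# New-style search order is A B C D, so C's override is found before D's:
print([k.__name__ for k in A.__mro__])  # ['A', 'B', 'C', 'D', 'object']
print(A().who())                        # C
```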
How do you change the size of figures drawn with matplotlib? | 332,289 | 518 | 2008-12-01T21:24:44Z | 332,311 | 153 | 2008-12-01T21:28:56Z | [
"python",
"graph",
"matplotlib",
"plot",
"visualization"
] | How do you change the size of figure drawn with matplotlib? | The following seems to work:
```
from pylab import rcParams
rcParams['figure.figsize'] = 5, 10
```
This makes the figure's width 5 inches, and its height 10 **inches**.
The Figure class then uses this as the default value for one of its arguments. |
How do you change the size of figures drawn with matplotlib? | 332,289 | 518 | 2008-12-01T21:24:44Z | 334,462 | 43 | 2008-12-02T16:01:32Z | [
"python",
"graph",
"matplotlib",
"plot",
"visualization"
] | How do you change the size of figure drawn with matplotlib? | The first link in Google for `'matplotlib figure size'` is [AdjustingImageSize](http://www.scipy.org/Cookbook/Matplotlib/AdjustingImageSize) ([Google cache of the page](https://webcache.googleusercontent.com/search?q=cache:5oqjjm8c8UMJ:https://scipy.github.io/old-wiki/pages/Cookbook/Matplotlib/AdjustingImageSize.html+&cd=2&hl=en&ct=clnk&gl=fr)).
Here's a test script from the above page. It creates `test[1-3].png` files of different sizes of the same image:
```
#!/usr/bin/env python
"""
This is a small demo file that helps teach how to adjust figure sizes
for matplotlib
"""
import matplotlib
print "using MPL version:", matplotlib.__version__
matplotlib.use("WXAgg") # do this before pylab so you don't get the default back end.
import pylab
import matplotlib.numerix as N
# Generate and plot some simple data:
x = N.arange(0, 2*N.pi, 0.1)
y = N.sin(x)
pylab.plot(x,y)
F = pylab.gcf()
# Now check everything with the defaults:
DPI = F.get_dpi()
print "DPI:", DPI
DefaultSize = F.get_size_inches()
print "Default size in Inches", DefaultSize
print "Which should result in a %i x %i Image"%(DPI*DefaultSize[0], DPI*DefaultSize[1])
# the default is 100dpi for savefig:
F.savefig("test1.png")
# this gives me a 797 x 566 pixel image, which is about 100 DPI
# Now make the image twice as big, while keeping the fonts and all the
# same size
F.set_size_inches( (DefaultSize[0]*2, DefaultSize[1]*2) )
Size = F.get_size_inches()
print "Size in Inches", Size
F.savefig("test2.png")
# this results in a 1595x1132 image
# Now make the image twice as big, making all the fonts and lines
# bigger too.
F.set_size_inches( DefaultSize ) # reset the size
Size = F.get_size_inches()
print "Size in Inches", Size
F.savefig("test3.png", dpi = (200)) # change the dpi
# this also results in a 1595x1132 image, but the fonts are larger.
```
Output:
```
using MPL version: 0.98.1
DPI: 80
Default size in Inches [ 8. 6.]
Which should result in a 640 x 480 Image
Size in Inches [ 16. 12.]
Size in Inches [ 16. 12.]
```
Two notes:
1. The module comments and the actual output differ.
2. [This answer](http://stackoverflow.com/questions/335896/how-to-complete-this-python-function-to-save-in-the-same-folder#336001) makes it easy to combine all three images in one image file to see the difference in sizes.
How do you change the size of figures drawn with matplotlib? | 332,289 | 518 | 2008-12-01T21:24:44Z | 638,443 | 380 | 2009-03-12T12:41:35Z | [
"python",
"graph",
"matplotlib",
"plot",
"visualization"
] | How do you change the size of figure drawn with matplotlib? | [figure](http://matplotlib.sourceforge.net/api/figure_api.html#matplotlib.figure.Figure) tells you the call signature:
```
figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
```
So `figure(figsize=(1,1))` creates an inch-by-inch image, which will be 80-by-80 pixels unless you also give a different dpi argument. |
How do you change the size of figures drawn with matplotlib? | 332,289 | 518 | 2008-12-01T21:24:44Z | 4,306,340 | 328 | 2010-11-29T17:30:40Z | [
"python",
"graph",
"matplotlib",
"plot",
"visualization"
] | How do you change the size of figure drawn with matplotlib? | If you've already got the figure created you can quickly do this:
```
fig = matplotlib.pyplot.gcf()
fig.set_size_inches(18.5, 10.5)
fig.savefig('test2png.png', dpi=100)
```
To propagate the size change to an existing gui window add `forward=True`
```
fig.set_size_inches(18.5, 10.5, forward=True)
``` |
How do you change the size of figures drawn with matplotlib? | 332,289 | 518 | 2008-12-01T21:24:44Z | 24,073,700 | 62 | 2014-06-06T03:24:00Z | [
"python",
"graph",
"matplotlib",
"plot",
"visualization"
] | How do you change the size of figure drawn with matplotlib? | Please try a simple code as following:
```
from matplotlib import pyplot as plt
plt.figure(figsize=(1,1))
x = [1,2,3]
plt.plot(x, x)
plt.show()
```
You need to set the figure size before you plot. |
Integer (representing a sequence of bits) reinterpretation as Character Array in Python | 333,097 | 4 | 2008-12-02T05:03:37Z | 333,116 | 9 | 2008-12-02T05:16:21Z | [
"python"
] | I've written some C code that I would like to port to python, as I feel python is a better 'concept' language.
In my C code, I use memory reinterpretation to achieve my goals, for example:
```
sizeof(int) is 4 byte
sizeof(char) is 1 byte
char c[4]={0x01,0x30,0x00,0x80};
int* i=(int*)c;
*i has the value 0x80003001
```
similarly if i have:
```
int* j = (int*)malloc(sizeof(int));
char* c = (char*)j;
*j = 0x78FF00AA;
c is now {0xAA, 0x00, 0xFF, 0x78}
```
I would like to do a similar thing in python, I realise that I can use bit operations to accomplish this:
```
chararray=[]
num=1234567890
size=8
while len(chararray) < size:
char = chr( (num & 255 ) )
num = num >> 8
chararray.append(char)
```
However I was hoping there would be a faster way of accomplishing this.
Does python have anything similar to C's union? | You can use the [struct module](http://www.python.org/doc/2.5.2/lib/module-struct.html):
```
import struct
# Pack a Python long as if it was a C unsigned integer, little endian
bytes = struct.pack("<I", 0x78FF00AA)
print [hex(ord(byte)) for byte in bytes]
['0xaa', '0x0', '0xff', '0x78']
```
Read the documentation page to find out about the data types, and pay attention to endianness. |
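For example, the two byte orders can be compared side by side (Python 3 syntax, where iterating over a bytes object yields integers):

```python
import struct

value = 0x78FF00AA
little = struct.pack("<I", value)  # least-significant byte first
big = struct.pack(">I", value)     # most-significant byte first

print([hex(b) for b in little])  # ['0xaa', '0x0', '0xff', '0x78']
print([hex(b) for b in big])     # ['0x78', '0xff', '0x0', '0xaa']

# struct.unpack is the inverse of struct.pack
assert struct.unpack("<I", little)[0] == value
```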
string.split(text) or text.split() what's the difference? | 333,706 | 9 | 2008-12-02T11:41:05Z | 333,715 | 11 | 2008-12-02T11:46:16Z | [
"python"
] | There is one thing that I do not understand...
Imagine you have a **text** = "hello world" and you want to split it.
In some places I see people that want to split the **text** doing:
```
string.split(text)
```
In other places I see people just doing:
```
text.split()
```
What's the difference? Why would you do it one way or the other? Can you give me a theoretical explanation? | The `string.split(stringobj)` is a feature of the `string` module, which must be imported separately. Once upon a time, that was the only way to split a string. That's some old code you're looking at.
The `stringobj.split()` is a feature of a string object, `stringobj`, which is more recent than the `string` module. But pretty old, nonetheless. That's the current practice. |
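For reference, the two spellings side by side (the module form only exists in Python 2):

```python
text = "hello world"
print(text.split())  # ['hello', 'world'] -- the method; current practice

# Python 2 equivalent via the module, which simply delegated to the method
# (string.split was removed in Python 3):
#   import string
#   string.split(text)  # -> ['hello', 'world']
```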
string.split(text) or text.split() what's the difference? | 333,706 | 9 | 2008-12-02T11:41:05Z | 333,727 | 20 | 2008-12-02T11:49:56Z | [
"python"
] | There is one thing that I do not understand...
Imagine you have a **text** = "hello world" and you want to split it.
In some places I see people that want to split the **text** doing:
```
string.split(text)
```
In other places I see people just doing:
```
text.split()
```
What's the difference? Why would you do it one way or the other? Can you give me a theoretical explanation? | Interestingly, the docstrings for the two are not completely the same in Python 2.5.1:
```
>>> import string
>>> help(string.split)
Help on function split in module string:
split(s, sep=None, maxsplit=-1)
split(s [,sep [,maxsplit]]) -> list of strings
Return a list of the words in the string s, using sep as the
delimiter string. If maxsplit is given, splits at no more than
maxsplit places (resulting in at most maxsplit+1 words). If sep
is not specified or is None, any whitespace string is a separator.
(split and splitfields are synonymous)
>>> help("".split)
Help on built-in function split:
split(...)
S.split([sep [,maxsplit]]) -> list of strings
Return a list of the words in the string S, using sep as the
delimiter string. If maxsplit is given, at most maxsplit
splits are done. If sep is not specified or is None, any
whitespace string is a separator.
```
Digging deeper, you'll see that the two forms are completely equivalent, as [string.split(s)](http://svn.python.org/view/python/tags/r251/Lib/string.py?rev=54864&view=markup) actually calls [s.split()](http://svn.python.org/view/python/tags/r251/Objects/unicodeobject.c?rev=54864&view=markup) (search for the *split*-functions). |
How to detect that Python code is being executed through the debugger? | 333,995 | 12 | 2008-12-02T13:58:30Z | 334,090 | 13 | 2008-12-02T14:25:10Z | [
"python"
] | Is there a simple way to detect, within Python code, that this code is being executed through the Python debugger?
I have a small Python application that uses Java code (thanks to JPype). When I'm debugging the Python part, I'd like the embedded JVM to be given debug options too. | Python debuggers (as well as profilers and coverage tools) use the `sys.settrace` function (in the `sys` module) to register a callback that gets called when interesting events happen.
If you're using Python 2.6, you can call `sys.gettrace()` to get the current trace callback function. If it's not `None` then you can assume you should be passing debug parameters to the JVM.
It's not clear how you could do this pre 2.6. |
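A minimal sketch of that check on 2.6+ (the helper name is my own):

```python
import sys

def debugger_is_active():
    """True when a trace hook (debugger, profiler, or coverage tool) is installed."""
    return sys.gettrace() is not None

print(debugger_is_active())
```

You could then pass the JVM its debug options only when this returns True.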
How to detect that Python code is being executed through the debugger? | 333,995 | 12 | 2008-12-02T13:58:30Z | 338,391 | 7 | 2008-12-03T19:13:56Z | [
"python"
] | Is there a simple way to detect, within Python code, that this code is being executed through the Python debugger?
I have a small Python application that uses Java code (thanks to JPype). When I'm debugging the Python part, I'd like the embedded JVM to be given debug options too. | A solution working with Python 2.4 (it should work with any version superior to 2.1) and Pydev:
```
import inspect
def isdebugging():
for frame in inspect.stack():
if frame[1].endswith("pydevd.py"):
return True
return False
```
The same should work with pdb by simply replacing `pydevd.py` with `pdb.py`. As do3cc suggested, it tries to find the debugger within the stack of the caller.
Useful links:
* [The Python Debugger](https://docs.python.org/library/pdb.html)
* [The interpreter stack](https://docs.python.org/library/inspect.html#the-interpreter-stack) |
Passing a dictionary to a function in python as keyword parameters | 334,655 | 165 | 2008-12-02T16:49:11Z | 334,666 | 258 | 2008-12-02T16:53:34Z | [
"python",
"function",
"dictionary",
"parameters",
"keyword"
] | I'd like to call a function in python using a dictionary.
Here is some code:
```
d = dict(param='test')
def f(param):
print param
f(d)
```
This prints `{'param': 'test'}` but I'd like it to just print `test`.
I'd like it to work similarly for more parameters:
```
d = dict(p1=1, p2=2)
def f2(p1,p2):
print p1, p2
f2(d)
```
Is this possible? | Figured it out for myself in the end. It is simple, I was just missing the \*\* operator to unpack the dictionary
So my example becomes:
```
d = dict(p1=1, p2=2)
def f2(p1,p2):
print p1, p2
f2(**d)
``` |
Passing a dictionary to a function in python as keyword parameters | 334,655 | 165 | 2008-12-02T16:49:11Z | 335,626 | 19 | 2008-12-02T22:03:11Z | [
"python",
"function",
"dictionary",
"parameters",
"keyword"
] | I'd like to call a function in python using a dictionary.
Here is some code:
```
d = dict(param='test')
def f(param):
print param
f(d)
```
This prints `{'param': 'test'}` but I'd like it to just print `test`.
I'd like it to work similarly for more parameters:
```
d = dict(p1=1, p2=2)
def f2(p1,p2):
print p1, p2
f2(d)
```
Is this possible? | In python, this is called "unpacking", and you can find a bit about it in the [tutorial](https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists). The documentation of it sucks, I agree, especially because of how fantastically useful it is. |
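A quick illustration of both forms of unpacking:

```python
def f(p1, p2):
    return p1 + p2

d = {"p1": 1, "p2": 2}
print(f(**d))    # 3 -- ** unpacks a dict into keyword arguments

args = (4, 5)
print(f(*args))  # 9 -- * unpacks a sequence into positional arguments
```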
Does PyS60 produce sis files that are native? | 334,765 | 9 | 2008-12-02T17:30:18Z | 386,298 | 14 | 2008-12-22T13:59:10Z | [
"python",
"symbian",
"s60",
"pys60"
] | I am currently looking at developing a mobile apps for the S60 platform and is specifically looking at PyS60. It seems to suggest that the it can be compiled into native .sis files without the need for an embedded python interpreter. Reading through the documentations I could not find any statements where this is explicitly mentioned. While I am right now downloading the SDKs, Emulators, and the whole bunch of tool chains needed to test the development out on Linux, I thought I would ask here a bit while I am doing that. | Once you've written your code in python, you can convert this to a .sis file using ensymble.
<http://code.google.com/p/ensymble/>
This software allows you to make your .py file into a .sis file using the py2sis option - however, it won't be much use on any phone without python installed, so you may also need to use ensymble to merge your newly-created .sis with the .sis file for python, with a command like
```
./ensymble.py mergesis --verbose your-script-name.sis PythonForS60-1-4-5-3rdEd.sis final-app-name.sis
```
The resulting final-app-name.sis file will install both your file and also python. |
What's the best way to get the size of a folder and all the files inside from python? | 335,078 | 6 | 2008-12-02T19:09:50Z | 335,105 | 10 | 2008-12-02T19:17:59Z | [
"python"
] | If there is a small number of files, it should be easy to pass through all of them with a recursive function and add up the sizes - but what if there are lots of files, and by lots I really mean lots of files? | You mean something like this?
```
import os
for path, dirs, files in os.walk( root ):
for f in files:
print path, f, os.path.getsize( os.path.join( path, f ) )
``` |
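If you want the total rather than a per-file listing, the same walk can simply accumulate the sizes (a Python 3 sketch; the function name is my own):

```python
import os

def folder_size(root):
    """Total size in bytes of every regular file under root."""
    total = 0
    for path, dirs, files in os.walk(root):
        for name in files:
            total += os.path.getsize(os.path.join(path, name))
    return total
```

The number of files doesn't change the approach - `os.walk` visits them lazily, so memory use stays flat.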
detecting if a module is imported inside the app engine environment | 335,399 | 4 | 2008-12-02T20:48:08Z | 335,464 | 11 | 2008-12-02T21:13:13Z | [
"python",
"google-app-engine"
] | What I want to do is patch an existing python module that uses urllib2 to run on app engine, but I don't want to break it so it can be used elsewhere. So I'm looking for a quick solution to test if the module is imported in the app engine environment or not. Catching ImportError on urllib2 might not be the best solution. | You could simply use sys.modules to test if a module has been imported (I'm using unicodedata as an example):
```
>>> import sys
>>> 'unicodedata' in sys.modules
False
>>> import unicodedata
>>> 'unicodedata' in sys.modules
True
``` |
Lists in ConfigParser | 335,695 | 77 | 2008-12-02T22:29:05Z | 335,754 | 67 | 2008-12-02T22:56:44Z | [
"python",
"configparser"
] | The typical ConfigParser generated file looks like:
```
[Section]
bar=foo
[Section 2]
bar2= baz
```
Now, is there a way to index lists like, for instance:
```
[Section 3]
barList={
item1,
item2
}
```
Related question: [Python's ConfigParser unique keys per section](http://stackoverflow.com/questions/287757/pythons-configparser-unique-keys-per-section)
? Thanks in advance | There is nothing stopping you from packing the list into a delimited string and then unpacking it once you get the string from the config. If you did it this way your config section would look like:
```
[Section 3]
barList=item1,item2
```
It's not pretty but it's functional for most simple lists. |
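Reading it back is then a one-line split and strip (shown with the Python 3 `configparser` module; in Python 2 the module is named `ConfigParser`):

```python
from configparser import ConfigParser  # named ConfigParser in Python 2
from io import StringIO

config = ConfigParser()
config.read_file(StringIO("[Section 3]\nbarList = item1, item2\n"))

# split the delimited string back into a list, trimming whitespace
barlist = [x.strip() for x in config.get("Section 3", "barList").split(",")]
print(barlist)  # ['item1', 'item2']
```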
Lists in ConfigParser | 335,695 | 77 | 2008-12-02T22:29:05Z | 8,048,529 | 60 | 2011-11-08T09:50:51Z | [
"python",
"configparser"
] | The typical ConfigParser generated file looks like:
```
[Section]
bar=foo
[Section 2]
bar2= baz
```
Now, is there a way to index lists like, for instance:
```
[Section 3]
barList={
item1,
item2
}
```
Related question: [Python's ConfigParser unique keys per section](http://stackoverflow.com/questions/287757/pythons-configparser-unique-keys-per-section)
? Thanks in advance | Coming late to this party, but I recently implemented this with a dedicated section in a config file for a list:
```
[paths]
path1 = /some/path/
path2 = /another/path/
...
```
and using `config.items( "paths" )` to get an iterable list of path items, like so:
```
path_items = config.items( "paths" )
for key, path in path_items:
#do something with path
```
Hope this helps other folk Googling this question ;) |
Lists in ConfigParser | 335,695 | 77 | 2008-12-02T22:29:05Z | 9,735,884 | 91 | 2012-03-16T10:48:22Z | [
"python",
"configparser"
] | The typical ConfigParser generated file looks like:
```
[Section]
bar=foo
[Section 2]
bar2= baz
```
Now, is there a way to index lists like, for instance:
```
[Section 3]
barList={
item1,
item2
}
```
Related question: [Python's ConfigParser unique keys per section](http://stackoverflow.com/questions/287757/pythons-configparser-unique-keys-per-section)
? Thanks in advance | Also a bit late, but maybe helpful for some.
I am using a combination of ConfigParser and JSON:
```
[Foo]
fibs: [1,1,2,3,5,8,13]
```
just read it with:
```
>>> json.loads(config.get("Foo","fibs"))
[1, 1, 2, 3, 5, 8, 13]
```
You can even break lines if your list is long (thanks @peter-smit):
```
[Bar]
files_to_check = [
"/path/to/file1",
"/path/to/file2",
"/path/to/another file with space in the name"
]
```
Of course i could just use JSON, but i find config files much more readable, and the [DEFAULT] Section very handy. |
Lists in ConfigParser | 335,695 | 77 | 2008-12-02T22:29:05Z | 11,866,695 | 34 | 2012-08-08T14:26:45Z | [
"python",
"configparser"
] | The typical ConfigParser generated file looks like:
```
[Section]
bar=foo
[Section 2]
bar2= baz
```
Now, is there a way to index lists like, for instance:
```
[Section 3]
barList={
item1,
item2
}
```
Related question: [Python's ConfigParser unique keys per section](http://stackoverflow.com/questions/287757/pythons-configparser-unique-keys-per-section)
? Thanks in advance | One thing a lot of people don't know is that multi-line configuration-values are allowed. For example:
```
;test.ini
[hello]
barlist =
item1
item2
```
The value of `config.get('hello','barlist')` will now be:
```
"\nitem1\nitem2"
```
Which you easily can split with the splitlines method (don't forget to filter empty items).
If we look to a big framework like Pyramid they are using this technique:
```
def aslist_cronly(value):
if isinstance(value, string_types):
value = filter(None, [x.strip() for x in value.splitlines()])
return list(value)
def aslist(value, flatten=True):
""" Return a list of strings, separating the input based on newlines
and, if flatten=True (the default), also split on spaces within
each line."""
values = aslist_cronly(value)
if not flatten:
return values
result = []
for value in values:
subvalues = value.split()
result.extend(subvalues)
return result
```
[Source](https://github.com/Pylons/pyramid/blob/1c37b5487d229d1896ec983de1377f3282825739/pyramid/settings.py#L44)
Myself, I would maybe extend the ConfigParser if this is a common thing for you:
```
class MyConfigParser(ConfigParser):
def getlist(self,section,option):
value = self.get(section,option)
return list(filter(None, (x.strip() for x in value.splitlines())))
def getlistint(self,section,option):
return [int(x) for x in self.getlist(section,option)]
```
Note that there are a few things to look out for when using this technique
1. New lines that are items should start with whitespace (e.g. a space or a tab)
2. All following lines that start with whitespace are considered to be part of the previous item. Also if it has an = sign or if it starts with a ; following the whitespace. |
Lists in ConfigParser | 335,695 | 77 | 2008-12-02T22:29:05Z | 22,675,825 | 8 | 2014-03-27T00:20:03Z | [
"python",
"configparser"
] | The typical ConfigParser generated file looks like:
```
[Section]
bar=foo
[Section 2]
bar2= baz
```
Now, is there a way to index lists like, for instance:
```
[Section 3]
barList={
item1,
item2
}
```
Related question: [Python's ConfigParser unique keys per section](http://stackoverflow.com/questions/287757/pythons-configparser-unique-keys-per-section)
? Thanks in advance | I landed here seeking to consume this...
```
[global]
spys = [email protected], [email protected]
```
The answer is to split it on the comma and strip the spaces:
```
SPYS = [e.strip() for e in parser.get('global', 'spys').split(',')]
```
To get a list result:
```
['[email protected]', '[email protected]']
```
It may not answer the OP's question exactly but might be the simple answer some people are looking for. |
How to handle a glade project with many windows | 336,013 | 7 | 2008-12-03T01:39:50Z | 339,212 | 9 | 2008-12-04T00:13:52Z | [
"python",
"gtk",
"pygtk",
"glade"
] | I'm working on a PyGTK/glade application that currently has 16 windows/dialogs and is about 130KB, and will eventually have around 25 windows/dialogs and be around 200KB. Currently, I'm storing all the windows in one monolithic glade file. When I run a window I call it like...
```
self.wTree = gtk.glade.XML("interface.glade", "WindowXYZ")
```
I wonder if it would be a better idea to split each window into it's own glade file. Instead of one glade file with 25 windows/dialogs I'd have 25 glade files with one window/dialog each and call it like so:
```
self.wTree = gtk.glade.XML("windowxyz.glade")
```
What do you guys think is the best way to do this? Is one method more resource intensive than another? One thing that would be nice about going to individual glade files is that naming widgets would be easier. For example, I name all my OK buttons "windowxyz\_ok", but I could change it to simply "ok" instead. Makes things simpler. The downside is that it may be a bit less convenient to make changes to different windows.
I'm open to any and all arguments. Thanks! | In my projects, I always have one window per glade file. I'd recommend the same for your project.
The following are the two main reasons:
* It will be faster and use less memory, since each call to gtk.glade.XML() parses the whole thing. Sure you can pass in the root argument to avoid creating the widget tree for all windows, but you'd still have to *parse* all the XML, even if you're not interested in it.
* Conceptually its easier to understand if have one toplevel per window. You easily know which filename a given dialog/window is in just by looking at the filename. |
Python: Invalid Token | 336,181 | 13 | 2008-12-03T04:01:12Z | 336,189 | 35 | 2008-12-03T04:04:50Z | [
"python",
"octal"
] | Some of you may recognize this as Project Euler's problem number 11. The one with the grid.
I'm trying to replicate the grid in a large multidimensional array, But it's giving me a syntax error and i'm not sure why
```
grid = [
[ 08, 02, 22, 97, 38, 15, 00, 40, 00, 75, 04, 05, 07, 78, 52, 12, 50, 77, 91, 08 ],
[ 49, 49, 99, 40, 17, 81, 18, 57, 60, 87, 17, 40, 98, 43, 69, 48, 04, 56, 62, 00 ],
[ 81, 49, 31, 73, 55, 79, 14, 29, 93, 71, 40, 67, 53, 88, 30, 03, 49, 13, 36, 65 ],
...
```
And I get this error:
```
File "D:\development\Python\ProjectEuler\p11.py", line 3
[ 08, 02, 22, 97, 38, 15, 00, 40, 00, 75, 04, 05, 07, 78, 52, 12, 50, 77, 91 , 08 ],
^ SyntaxError: invalid token
```
Why is it throwing an error before the comma? | I think when you start a literal number with a 0, it interprets it as an octal number and you can't have an '8' in an octal number. |
How to implement a minimal server for AJAX in Python? | 336,866 | 35 | 2008-12-03T11:37:28Z | 336,894 | 9 | 2008-12-03T11:50:13Z | [
"python",
"ajax",
"user-interface"
] | I want to create a very simple HTML/AJAX based GUI for a Python program. So the frontend is a HTML page which communicates with the program via AJAX. Can you give me a minimal implementation for the server-side using the python `SimpleHTTPServer.SimpleHTTPRequestHandler`?
A simple example would be a textfield and a button. When the button is pressed, the content of the field is sent to the server, which then sends back a corresponding answer. I am aware that there are many powerful solutions for this in Python, but I would like to keep this very simple.
I already found some nice examples for such a server (e.g. [here](http://msdl.cs.mcgill.ca/people/julien/04Ajax)), but so far I could not come up with a truly minimal one.
In case you wonder why I want to implement the GUI in such a way: My focus for this application is to display lots of data in a nice layout with only minimal interaction - so using HTML+CSS seems most convenient (and I have been already using it for non-interactive data display). | Use the [WSGI reference implementation](https://docs.python.org/2/library/wsgiref.html#module-wsgiref.simple_server). In the long run, you'll be happier.
```
from wsgiref.simple_server import make_server, demo_app
httpd = make_server('', 8000, demo_app)
print "Serving HTTP on port 8000..."
# Respond to requests until process is killed
httpd.serve_forever()
```
The demo\_app is relatively easy to write; it handles your Ajax requests. |
How to implement a minimal server for AJAX in Python? | 336,866 | 35 | 2008-12-03T11:37:28Z | 338,519 | 45 | 2008-12-03T20:01:15Z | [
"python",
"ajax",
"user-interface"
] | I want to create a very simple HTML/AJAX based GUI for a Python program. So the frontend is a HTML page which communicates with the program via AJAX. Can you give me a minimal implementation for the server-side using the python `SimpleHTTPServer.SimpleHTTPRequestHandler`?
A simple example would be a textfield and a button. When the button is pressed, the content of the field is sent to the server, which then sends back a corresponding answer. I am aware that there are many powerful solutions for this in Python, but I would like to keep this very simple.
I already found some nice examples for such a server (e.g. [here](http://msdl.cs.mcgill.ca/people/julien/04Ajax)), but so far I could not come up with a truly minimal one.
In case you wonder why I want to implement the GUI in such a way: My focus for this application is to display lots of data in a nice layout with only minimal interaction - so using HTML+CSS seems most convenient (and I have been already using it for non-interactive data display). | O.K., I think I can now answer my own question. Here is an example implementation for calculating the square of a number on the server. Please let me know if there are any improvements or misconceptions.
the python server file:
```
import threading
import webbrowser
import BaseHTTPServer
import SimpleHTTPServer
FILE = 'frontend.html'
PORT = 8080
class TestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
"""The test example handler."""
def do_POST(self):
"""Handle a post request by returning the square of the number."""
length = int(self.headers.getheader('content-length'))
data_string = self.rfile.read(length)
try:
result = int(data_string) ** 2
except:
result = 'error'
self.wfile.write(result)
def open_browser():
"""Start a browser after waiting for half a second."""
def _open_browser():
webbrowser.open('http://localhost:%s/%s' % (PORT, FILE))
thread = threading.Timer(0.5, _open_browser)
thread.start()
def start_server():
"""Start the server."""
server_address = ("", PORT)
server = BaseHTTPServer.HTTPServer(server_address, TestHandler)
server.serve_forever()
if __name__ == "__main__":
open_browser()
start_server()
```
...and the HTML file (I call it 'frontend.html', unfortunately the name has to appear in the JavaScript code as well):
```
<html>
<head>
<title>AJAX test</title>
</head>
<body>
<script type="text/javascript">
function xml_http_post(url, data, callback) {
var req = false;
try {
// Firefox, Opera 8.0+, Safari
req = new XMLHttpRequest();
}
catch (e) {
// Internet Explorer
try {
req = new ActiveXObject("Msxml2.XMLHTTP");
}
catch (e) {
try {
req = new ActiveXObject("Microsoft.XMLHTTP");
}
catch (e) {
alert("Your browser does not support AJAX!");
return false;
}
}
}
req.open("POST", url, true);
req.onreadystatechange = function() {
if (req.readyState == 4) {
callback(req);
}
}
req.send(data);
}
function test_button() {
var data = document.test_form.test_text.value;
xml_http_post("frontend.html", data, test_handle)
}
function test_handle(req) {
var elem = document.getElementById('test_result')
elem.innerHTML = req.responseText
}
</script>
<form name=test_form>
sqr(
<input type="text" name="test_text" value="0" size="4">
) =
<span id="test_result">0</span>
<input type=button onClick="test_button();" value="start" title="start">
</form>
</body>
</html>
```
Of course it would be much more convenient to use [jQuery](http://api.jquery.com/jQuery.ajax/) for the XML request, but in the interest of simplicity I'll leave it like that.
Finally an alternative implementation using WSGI (unfortunately I didn't see a way to fall back on the standard file-serving handler if the request is not a POST):
```
import threading
import webbrowser
from wsgiref.simple_server import make_server
FILE = 'frontend.html'
PORT = 8080
def test_app(environ, start_response):
if environ['REQUEST_METHOD'] == 'POST':
try:
request_body_size = int(environ['CONTENT_LENGTH'])
request_body = environ['wsgi.input'].read(request_body_size)
except (TypeError, ValueError):
request_body = "0"
try:
response_body = str(int(request_body) ** 2)
except:
response_body = "error"
status = '200 OK'
headers = [('Content-type', 'text/plain')]
start_response(status, headers)
return [response_body]
else:
response_body = open(FILE).read()
status = '200 OK'
headers = [('Content-type', 'text/html'),
('Content-Length', str(len(response_body)))]
start_response(status, headers)
return [response_body]
def open_browser():
"""Start a browser after waiting for half a second."""
def _open_browser():
webbrowser.open('http://localhost:%s/%s' % (PORT, FILE))
thread = threading.Timer(0.5, _open_browser)
thread.start()
def start_server():
"""Start the server."""
httpd = make_server("", PORT, test_app)
httpd.serve_forever()
if __name__ == "__main__":
open_browser()
start_server()
``` |
Python optparse metavar | 336,963 | 21 | 2008-12-03T12:36:12Z | 336,992 | 25 | 2008-12-03T12:48:58Z | [
"python",
"optparse"
] | I am not sure what `optparse`'s `metavar` parameter is used for. I see it is used all around, but I can't see its use.
Can someone make it clear to me? Thanks. | As @[Guillaume](#336975) says, it's used for generating help. If you want to have an option that takes an argument, such as a filename, you can add the `metavar` parameter to the `add_option` call so your preferred argument name/descriptor is output in the help message. From [the current module documentation](http://docs.python.org/library/optparse.html#generating-help):
```
usage = "usage: %prog [options] arg1 arg2"
parser = OptionParser(usage=usage)
parser.add_option("-f", "--filename",
                  metavar="FILE", help="write output to FILE")
```
would produce help like this:
```
usage: <yourscript> [options] arg1 arg2
options:
-f FILE, --filename=FILE
```
The "FILE" after the "-f" and the "--filename" comes from the metavar. |
Python: item for item until stopterm in item? | 337,223 | 3 | 2008-12-03T14:11:57Z | 337,285 | 10 | 2008-12-03T14:28:58Z | [
"python"
] | Disclaimer: I'm fairly new to python!
If I want all the lines of a file until (edit: and including) the line containing some string `stopterm`, is there a way of using the list syntax for it? I was hoping there would be something like:
```
usefullines = [line for line in file until stopterm in line]
```
For now, I've got
```
usefullines = []
for line in file:
usefullines.append(line)
if stopterm in line:
break
```
It's not the end of the world, but since the rest of Python's syntax is so straightforward, I was hoping for a 1 thought->1 Python line mapping. | ```
import re
from itertools import takewhile

# regex version:
usefullines = takewhile(lambda x: not re.search(stopterm, x), lines)
# or, with a plain substring test (no re needed):
usefullines = takewhile(lambda x: stopterm not in x, lines)
```
Here's a way that keeps the stopterm line:
```
def useful_lines(lines, stopterm):
for line in lines:
if stopterm in line:
yield line
break
yield line
usefullines = useful_lines(lines, stopterm)
# or...
for line in useful_lines(lines, stopterm):
# ... do stuff
pass
``` |
How to get a row-by-row MySQL ResultSet in python | 337,479 | 41 | 2008-12-03T15:23:49Z | 337,706 | 45 | 2008-12-03T16:17:12Z | [
"python",
"mysql"
] | MySQL ResultSets are by default retrieved completely from the server before any work can be done. In cases of huge result sets this becomes unusable. I would like instead to actually retrieve the rows one by one from the server.
In Java, following the instructions [here](http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-implementation-notes.html) (under "ResultSet"), I create a statement like this:
```
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
```
This works nicely in Java. My question is: is there a way to do the same in python?
One thing I tried is to limit the query to a 1000 rows at a time, like this:
```
start_row = 0
while True:
cursor = conn.cursor()
cursor.execute("SELECT item FROM items LIMIT %d,1000" % start_row)
rows = cursor.fetchall()
if not rows:
break
start_row += 1000
# Do something with rows...
```
However, this seems to get slower the higher start\_row is.
And no, using `fetchone()` instead of `fetchall()` doesn't change anything.
## Clarification:
The naive code I use to reproduce this problem looks like this:
```
import MySQLdb
conn = MySQLdb.connect(user="user", passwd="password", db="mydb")
cur = conn.cursor()
print "Executing query"
cur.execute("SELECT * FROM bigtable");
print "Starting loop"
row = cur.fetchone()
while row is not None:
print ", ".join([str(c) for c in row])
row = cur.fetchone()
cur.close()
conn.close()
```
On a ~700,000 rows table, this code runs quickly. But on a ~9,000,000 rows table it prints "Executing Query" and then hangs for a long long time. That is why it makes no difference if I use `fetchone()` or `fetchall()`. | I think you have to connect passing `cursorclass = MySQLdb.cursors.SSCursor`:
```
MySQLdb.connect(user="user",
passwd="password",
db="mydb",
cursorclass = MySQLdb.cursors.SSCursor
)
```
The default cursor fetches all the data at once, even if you don't use `fetchall`.
Edit: `SSCursor` or any other cursor class that supports server side resultsets - check the module docs on `MySQLdb.cursors`. |
How to get a row-by-row MySQL ResultSet in python | 337,479 | 41 | 2008-12-03T15:23:49Z | 337,922 | 16 | 2008-12-03T17:09:52Z | [
"python",
"mysql"
] | MySQL ResultSets are by default retrieved completely from the server before any work can be done. In cases of huge result sets this becomes unusable. I would like instead to actually retrieve the rows one by one from the server.
In Java, following the instructions [here](http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-implementation-notes.html) (under "ResultSet"), I create a statement like this:
```
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
```
This works nicely in Java. My question is: is there a way to do the same in python?
One thing I tried is to limit the query to a 1000 rows at a time, like this:
```
start_row = 0
while True:
cursor = conn.cursor()
cursor.execute("SELECT item FROM items LIMIT %d,1000" % start_row)
rows = cursor.fetchall()
if not rows:
break
start_row += 1000
# Do something with rows...
```
However, this seems to get slower the higher start\_row is.
And no, using `fetchone()` instead of `fetchall()` doesn't change anything.
## Clarification:
The naive code I use to reproduce this problem looks like this:
```
import MySQLdb
conn = MySQLdb.connect(user="user", passwd="password", db="mydb")
cur = conn.cursor()
print "Executing query"
cur.execute("SELECT * FROM bigtable");
print "Starting loop"
row = cur.fetchone()
while row is not None:
print ", ".join([str(c) for c in row])
row = cur.fetchone()
cur.close()
conn.close()
```
On a ~700,000 rows table, this code runs quickly. But on a ~9,000,000 rows table it prints "Executing Query" and then hangs for a long long time. That is why it makes no difference if I use `fetchone()` or `fetchall()`. | The limit/offset solution runs in quadratic time because mysql has to rescan the rows to find the offset. As you suspected, the default cursor stores the entire result set on the client, which may consume a lot of memory.
Instead you can use a server side cursor, which keeps the query running and fetches results as necessary. The cursor class can be customized by supplying a default to the connection call itself, or by supplying a class to the cursor method each time.
```
from MySQLdb import cursors
cursor = conn.cursor(cursors.SSCursor)
```
But that's not the whole story. In addition to storing the mysql result, the default client-side cursor actually fetches every row regardless. This behavior is undocumented, and very unfortunate. It means full python objects are created for all rows, which consumes far more memory than the original mysql result.
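A generator around `fetchmany` is one way to keep memory bounded while still iterating row by row. Below is a sketch; `FakeCursor` is a stand-in for a real DB-API cursor (purely illustrative), since anything with a `fetchmany` method works:

```python
def iter_rows(cursor, size=1000):
    """Yield rows one at a time, fetching them from the cursor in batches of `size`."""
    while True:
        batch = cursor.fetchmany(size)
        if not batch:
            return
        for row in batch:
            yield row

# FakeCursor mimics just enough of a DB-API cursor for the demo.
class FakeCursor(object):
    def __init__(self, rows):
        self._rows = list(rows)
    def fetchmany(self, size):
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch

cur = FakeCursor([(i,) for i in range(5)])
print(list(iter_rows(cur, size=2)))  # [(0,), (1,), (2,), (3,), (4,)]
```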
In most cases, a result stored on the client wrapped as an iterator would yield the best speed with reasonable memory usage. But you'll have to roll your own if you want that. |
Dynamic Keyword Arguments in Python? | 337,688 | 25 | 2008-12-03T16:12:22Z | 337,714 | 30 | 2008-12-03T16:19:03Z | [
"python"
] | Does python have the ability to create dynamic keywords?
For example:
```
qset.filter(min_price__usd__range=(min_price, max_price))
```
I want to be able to change the **usd** part based on a selected currency. | Yes, It does. Use `**kwargs` in a function definition.
Example:
```
def f(**kwargs):
print kwargs.keys()
f(a=2, b="b") # -> ['a', 'b']
f(**{'d'+'e': 1}) # -> ['de']
```
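Applied to the original question, the keyword name can be built at runtime and unpacked with `**`; in this sketch, `fake_filter` is a hypothetical stand-in for the queryset's `filter` method:

```python
def fake_filter(**kwargs):
    # Stand-in for qset.filter: just echo back what was received.
    return kwargs

selected_currency = 'usd'
kwargs = {'min_price_%s_range' % selected_currency: (10, 100)}
result = fake_filter(**kwargs)
print(result)  # {'min_price_usd_range': (10, 100)}
```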
But why do you need that? |
Dynamic Keyword Arguments in Python? | 337,688 | 25 | 2008-12-03T16:12:22Z | 337,733 | 10 | 2008-12-03T16:22:38Z | [
"python"
] | Does python have the ability to create dynamic keywords?
For example:
```
qset.filter(min_price__usd__range=(min_price, max_price))
```
I want to be able to change the **usd** part based on a selected currency. | You can easily do this by declaring your function like this:
```
def filter(**kwargs):
```
your function will now be passed a dictionary called kwargs that contains the keywords and values passed to your function. Note that, syntactically, the word `kwargs` is meaningless; the `**` is what causes the dynamic keyword behavior.
You can also do the reverse. If you are calling a function, and you have a dictionary that corresponds to the arguments, you can do
```
someFunction(**theDictionary)
```
There is also the lesser-used `*foo` variant, which causes you to receive a tuple of positional arguments. This is similar to normal C vararg arrays.
Dynamic Keyword Arguments in Python? | 337,688 | 25 | 2008-12-03T16:12:22Z | 337,735 | 19 | 2008-12-03T16:22:51Z | [
"python"
] | Does python have the ability to create dynamic keywords?
For example:
```
qset.filter(min_price__usd__range=(min_price, max_price))
```
I want to be able to change the **usd** part based on a selected currency. | If I understand what you're asking correctly,
```
qset.filter(**{
'min_price_' + selected_currency + '_range' :
(min_price, max_price)})
```
does what you need. |
Python, Popen and select - waiting for a process to terminate or a timeout | 337,863 | 21 | 2008-12-03T16:53:17Z | 337,912 | 15 | 2008-12-03T17:06:30Z | [
"python",
"select",
"timeout",
"subprocess",
"popen"
] | I run a subprocess using:
```
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
```
This subprocess could either exit immediately with an error on stderr, or keep running. I want to detect either of these conditions - the latter by waiting for several seconds.
I tried this:
```
SECONDS_TO_WAIT = 10
select.select([],
[p.stdout, p.stderr],
[p.stdout, p.stderr],
SECONDS_TO_WAIT)
```
but it just returns:
```
([],[],[])
```
on either condition. What can I do? | Have you tried using the Popen.Poll() method. You could just do this:
```
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
time.sleep(SECONDS_TO_WAIT)
retcode = p.poll()
if retcode is not None:
# process has terminated
```
This will cause you to always wait 10 seconds, but if the failure case is rare this would be amortized over all the success cases.
---
Edit:
How about:
```
t_nought = time.time()
seconds_passed = 0
while p.poll() is None and seconds_passed < 10:
seconds_passed = time.time() - t_nought
if seconds_passed >= 10:
#TIMED OUT
```
This has the ugliness of being a busy wait, but I think it accomplishes what you want.
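A self-contained version of that loop (a sketch, using `sys.executable` to spawn a short-lived child so the example is runnable anywhere):

```python
import subprocess
import sys
import time

# Spawn a child that exits immediately with code 3.
p = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(3)"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

deadline = time.time() + 10  # SECONDS_TO_WAIT
retcode = None
while time.time() < deadline:
    retcode = p.poll()  # None while the child is still running
    if retcode is not None:
        break
    time.sleep(0.05)  # avoid spinning flat-out

if retcode is None:
    print("timed out")
else:
    print("exited with", retcode)  # exited with 3
```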
Additionally looking at the select call documentation again I think you may want to change it as follows:
```
SECONDS_TO_WAIT = 10
select.select([p.stderr],
[],
[p.stdout, p.stderr],
SECONDS_TO_WAIT)
```
Since you would typically want to read from stderr, you want to know when it has something available to read (ie the failure case).
I hope this helps. |
Python, Popen and select - waiting for a process to terminate or a timeout | 337,863 | 21 | 2008-12-03T16:53:17Z | 1,035,488 | 8 | 2009-06-23T21:58:58Z | [
"python",
"select",
"timeout",
"subprocess",
"popen"
] | I run a subprocess using:
```
p = subprocess.Popen("subprocess",
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
```
This subprocess could either exit immediately with an error on stderr, or keep running. I want to detect either of these conditions - the latter by waiting for several seconds.
I tried this:
```
SECONDS_TO_WAIT = 10
select.select([],
[p.stdout, p.stderr],
[p.stdout, p.stderr],
SECONDS_TO_WAIT)
```
but it just returns:
```
([],[],[])
```
on either condition. What can I do? | This is what i came up with. Works when you need and don't need to timeout on thep process, but with a semi-busy loop.
```
def runCmd(cmd, timeout=None):
'''
Will execute a command, read the output and return it back.
@param cmd: command to execute
@param timeout: process timeout in seconds
@return: a tuple of three: first stdout, then stderr, then exit code
@raise OSError: on missing command or if a timeout was reached
'''
ph_out = None # process output
ph_err = None # stderr
ph_ret = None # return code
p = subprocess.Popen(cmd, shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
# if timeout is not set wait for process to complete
if not timeout:
ph_ret = p.wait()
else:
fin_time = time.time() + timeout
while p.poll() is None and fin_time > time.time():
time.sleep(1)
# if timeout reached, raise an exception
if fin_time < time.time():
# starting 2.6 subprocess has a kill() method which is preferable
# p.kill()
os.kill(p.pid, signal.SIGKILL)
raise OSError("Process timeout has been reached")
ph_ret = p.returncode
ph_out, ph_err = p.communicate()
return (ph_out, ph_err, ph_ret)
``` |
Python subprocess.call() fails when using pythonw.exe | 337,870 | 5 | 2008-12-03T16:54:17Z | 337,990 | 7 | 2008-12-03T17:26:53Z | [
"python",
"multithreading",
"subprocess"
] | I have some Python code that works correctly when I use python.exe to run it, but fails if I use pythonw.exe.
```
def runStuff(commandLine):
outputFileName = 'somefile.txt'
outputFile = open(outputFileName, "w")
try:
result = subprocess.call(commandLine, shell=True, stdout=outputFile)
except:
print 'Exception thrown:', str(sys.exc_info()[1])
myThread = threading.Thread(None, target=runStuff, commandLine=['whatever...'])
myThread.start()
```
The message I get is:
```
Exception thrown: [Error 6] The handle is invalid
```
However, if I don't specify the 'stdout' parameter, subprocess.call() starts okay.
I can see that pythonw.exe might be redirecting output itself, but I can't see why I'm blocked from specifying stdout for a new thread. | `sys.stdin` and `sys.stdout` handles are invalid because pythonw does not provide console support as it runs as a deamon, so default arguments of `subprocess.call()` are failing.
Daemon programs close stdin/stdout/stderr purposely and use logging instead, so you have to manage this yourself: I would suggest using subprocess.PIPE.
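For instance, supplying a pipe for every stream keeps the child from inheriting the parent's (invalid, under pythonw.exe) console handles. A sketch, using `sys.executable` as a stand-in for the real command:

```python
import subprocess
import sys

# PIPE for every stream means the child never inherits the parent's
# console handles, which do not exist under pythonw.exe.
p = subprocess.Popen([sys.executable, "-c", "print('ok')"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate()
print(out.strip())  # b'ok' on Python 3
```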
If you *really* don't care about what the subprocess says for errors and all, you could use `os.devnull` (I'm not really sure how portable it is?) but I wouldn't recommend that. |
TimedRotatingFileHandler Changing File Name? | 338,450 | 5 | 2008-12-03T19:37:51Z | 338,566 | 15 | 2008-12-03T20:21:41Z | [
"python",
"logging"
] | I am trying to implement the python logging handler called TimedRotatingFileHandler.
When it rolls over to midnight it appends the current day in the form: "YYYY-MM-DD".
```
LOGGING_MSG_FORMAT = '%(name)-14s > [%(levelname)s] [%(asctime)s] : %(message)s'
LOGGING_DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(
level=logging.DEBUG,
format=LOGGING_MSG_FORMAT,
datefmt=LOGGING_DATE_FORMAT
)
root_logger = logging.getLogger('')
logger = logging.handlers.TimedRotatingFileHandler("C:\\logs\\Rotate_Test",'midnight',1)
root_logger.addHandler(logger)
while True:
daemon_logger = logging.getLogger('TEST')
daemon_logger.info("SDFKLDSKLFFJKLSDD")
time.sleep(60)
```
The first log file created is called just "Rotate\_Test"; then once it rolls over to the next day it changes the file name to "Rotate\_Test.YYYY-MM-DD", where YYYY-MM-DD is the current day.
How can I change how it alters the filename? I googled and looked at the API and found pretty much nothing. | "How can I change how it alters the filename?"
Since it isn't documented, I elected to read the source. This is what I concluded from reading the source of `logging/handlers.py`
```
handler = logging.handlers.TimedRotatingFileHandler("C:\\isis_ops\\logs\\Rotate_Test",'midnight',1)
handler.suffix = "%Y-%m-%d" # or anything else that strftime will allow
root_logger.addHandler(handler)
```
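To sanity-check the suffix without waiting for midnight, you can build a handler against a throwaway file (a sketch; the paths are illustrative):

```python
import logging.handlers
import os
import tempfile

# Create the handler against a temp file and customize its suffix.
logdir = tempfile.mkdtemp()
handler = logging.handlers.TimedRotatingFileHandler(
    os.path.join(logdir, "Rotate_Test"), when="midnight", interval=1)
handler.suffix = "%Y%m%d"  # rotated files will be named Rotate_Test.YYYYMMDD

print(handler.suffix)  # %Y%m%d
handler.close()
```

Note that newer Python versions also keep an `extMatch` regex on the handler, used when deleting old backups; if you change `suffix` and rely on `backupCount`, `extMatch` may need updating to match the new suffix too.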
The suffix is the formatting string. |
python ImportError No module named | 338,768 | 186 | 2008-12-03T21:26:28Z | 338,790 | 11 | 2008-12-03T21:31:53Z | [
"python",
"importerror",
"python-import"
] | I am very new at Python and I am getting this error:
```
Traceback (most recent call last):
File "mountain.py", line 28, in ?
from toolkit.interface import interface
ImportError: No module named toolkit.interface
```
Python is installed in a local directory:
My directory tree is like this:
```
(local directory)/site-packages/toolkit/interface.py
```
My code is in here
```
(local directory)/site-packages/toolkit/examples/mountain.py
```
To run the example I do `python mountain.py`, and in the code I have:
```
from toolkit.interface import interface
```
And I get the error that I wrote above. I have already checked `sys.path`, and the directory `/site-packages` is there; I also have the file `__init__.py.bin` in the toolkit folder to indicate to Python that this is a package, and another `__init__.py.bin` in the examples directory.
I do not know why Python cannot find the file when it is in `sys.path`. Any ideas? Could it be a permissions problem? Do I need execution permission? | To mark a directory as a package you need a file named `__init__.py`; does this help? |
python ImportError No module named | 338,768 | 186 | 2008-12-03T21:26:28Z | 338,858 | 33 | 2008-12-03T21:50:15Z | [
"python",
"importerror",
"python-import"
] | I am very new at Python and I am getting this error:
```
Traceback (most recent call last):
File "mountain.py", line 28, in ?
from toolkit.interface import interface
ImportError: No module named toolkit.interface
```
Python is installed in a local directory:
My directory tree is like this:
```
(local directory)/site-packages/toolkit/interface.py
```
My code is in here
```
(local directory)/site-packages/toolkit/examples/mountain.py
```
To run the example I do `python mountain.py`, and in the code I have:
```
from toolkit.interface import interface
```
And I get the error that I wrote above. I have already checked `sys.path`, and the directory `/site-packages` is there; I also have the file `__init__.py.bin` in the toolkit folder to indicate to Python that this is a package, and another `__init__.py.bin` in the examples directory.
I do not know why Python cannot find the file when it is in `sys.path`. Any ideas? Could it be a permissions problem? Do I need execution permission? | Does
```
(local directory)/site-packages/toolkit
```
have a `__init__.py`?
To make import *walk* through your directories every directory must have a `__init__.py` file. |
python ImportError No module named | 338,768 | 186 | 2008-12-03T21:26:28Z | 339,220 | 159 | 2008-12-04T00:17:40Z | [
"python",
"importerror",
"python-import"
] | I am very new at Python and I am getting this error:
```
Traceback (most recent call last):
File "mountain.py", line 28, in ?
from toolkit.interface import interface
ImportError: No module named toolkit.interface
```
Python is installed in a local directory:
My directory tree is like this:
```
(local directory)/site-packages/toolkit/interface.py
```
My code is in here
```
(local directory)/site-packages/toolkit/examples/mountain.py
```
To run the example I do `python mountain.py`, and in the code I have:
```
from toolkit.interface import interface
```
And I get the error that I wrote above. I have already checked `sys.path`, and the directory `/site-packages` is there; I also have the file `__init__.py.bin` in the toolkit folder to indicate to Python that this is a package, and another `__init__.py.bin` in the examples directory.
I do not know why Python cannot find the file when it is in `sys.path`. Any ideas? Could it be a permissions problem? Do I need execution permission? | Based on your comments to orip's post, I guess this is what happened:
1. You edited `__init__.py` on windows.
2. The windows editor added something non-printing, perhaps a carriage-return (end-of-line in Windows is CR/LF; in unix it is LF only), or perhaps a CTRL-Z (windows end-of-file).
3. You used WinSCP to copy the file to your unix box.
4. WinSCP thought: "This has something that's not basic text; I'll put a .bin extension to indicate binary data."
5. The missing `__init__.py` (now called `__init__.py.bin`) means python doesn't understand toolkit as a package.
6. You create `__init__.py` in the appropriate directory and everything works... ? |
python ImportError No module named | 338,768 | 186 | 2008-12-03T21:26:28Z | 339,333 | 17 | 2008-12-04T01:11:08Z | [
"python",
"importerror",
"python-import"
] | I am very new at Python and I am getting this error:
```
Traceback (most recent call last):
File "mountain.py", line 28, in ?
from toolkit.interface import interface
ImportError: No module named toolkit.interface
```
Python is installed in a local directory:
My directory tree is like this:
```
(local directory)/site-packages/toolkit/interface.py
```
My code is in here
```
(local directory)/site-packages/toolkit/examples/mountain.py
```
To run the example I do `python mountain.py`, and in the code I have:
```
from toolkit.interface import interface
```
And I get the error that I wrote above. I have already checked `sys.path`, and the directory `/site-packages` is there; I also have the file `__init__.py.bin` in the toolkit folder to indicate to Python that this is a package, and another `__init__.py.bin` in the examples directory.
I do not know why Python cannot find the file when it is in `sys.path`. Any ideas? Could it be a permissions problem? Do I need execution permission? | I solved my own problem; I will write a summary of the things that were wrong and the solution:
The file needs to be called exactly `__init__.py`. If the extension is different, as in my case (`.py.bin`), then Python cannot walk through the directories and so cannot find the modules. To edit the files you need to use a Linux editor, such as vi or nano; if you use a Windows editor it will write some hidden characters.
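This is easy to verify with a throwaway package built at runtime (a sketch; the `toolkit` name and the `interface` value are just illustrative):

```python
import os
import sys
import tempfile

# Build a tiny package on disk: <tmp>/toolkit/__init__.py + interface.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "toolkit")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # must be named exactly this
with open(os.path.join(pkg, "interface.py"), "w") as f:
    f.write("interface = 'hello'\n")

sys.path.insert(0, root)
from toolkit.interface import interface
print(interface)  # hello
```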
Another problem was that I had another Python version installed by root. So if you are working with a local installation of Python, be sure that the Python running your programs is the local one. To check this, just do `which python` and see if the executable is the one in your local directory. If not, change the path, but be sure that the local Python directory comes before the other Python. |
python ImportError No module named | 338,768 | 186 | 2008-12-03T21:26:28Z | 5,199,346 | 22 | 2011-03-04T21:14:56Z | [
"python",
"importerror",
"python-import"
] | I am very new at Python and I am getting this error:
```
Traceback (most recent call last):
File "mountain.py", line 28, in ?
from toolkit.interface import interface
ImportError: No module named toolkit.interface
```
Python is installed in a local directory:
My directory tree is like this:
```
(local directory)/site-packages/toolkit/interface.py
```
My code is in here
```
(local directory)/site-packages/toolkit/examples/mountain.py
```
To run the example I do `python mountain.py`, and in the code I have:
```
from toolkit.interface import interface
```
And I get the error that I wrote above. I have already checked `sys.path`, and the directory `/site-packages` is there; I also have the file `__init__.py.bin` in the toolkit folder to indicate to Python that this is a package, and another `__init__.py.bin` in the examples directory.
I do not know why Python cannot find the file when it is in `sys.path`. Any ideas? Could it be a permissions problem? Do I need execution permission? | On \*nix, also make sure that PYTHONPATH is configured correctly, especially that it has the format
```
.:/usr/local/lib/python
```
(mind the `.:` at the beginning, so that it can search the current directory, too)
python ImportError No module named | 338,768 | 186 | 2008-12-03T21:26:28Z | 23,210,066 | 13 | 2014-04-22T03:52:48Z | [
"python",
"importerror",
"python-import"
] | I am very new at Python and I am getting this error:
```
Traceback (most recent call last):
File "mountain.py", line 28, in ?
from toolkit.interface import interface
ImportError: No module named toolkit.interface
```
Python is installed in a local directory:
My directory tree is like this:
```
(local directory)/site-packages/toolkit/interface.py
```
My code is in here
```
(local directory)/site-packages/toolkit/examples/mountain.py
```
To run the example I do `python mountain.py`, and in the code I have:
```
from toolkit.interface import interface
```
And I get the error that I wrote above. I have already checked `sys.path`, and the directory `/site-packages` is there; I also have the file `__init__.py.bin` in the toolkit folder to indicate to Python that this is a package, and another `__init__.py.bin` in the examples directory.
I do not know why Python cannot find the file when it is in `sys.path`. Any ideas? Could it be a permissions problem? Do I need execution permission? | I ran into something very similar when I did this exercise in LPTHW; I could never get Python to recognise that I had files in the directory I was calling from. But I was able to get it to work in the end. What I did, and what I recommend, is to try this:
(NOTE: From your initial post, I am assuming you are using an \*NIX-based machine and are running things from the command line, so this advice is tailored to that. Since I run Ubuntu, this is what I did)
1) Change directory (cd) to the directory *above* the directory where your files are. In this case, you're trying to run the `mountain.py` file and to call the `toolkit.interface.py` module, which are in separate directories. You would go to the directory that contains paths to both those files (or in other words, the closest directory that the paths of both those files share), which in this case is the `toolkit` directory.
2) When you are in the `toolkit` directory, enter this line of code on your command line:
`export PYTHONPATH=.`
This sets your PYTHONPATH to ".", which basically means that your PYTHONPATH will now look for any called files within the directory you are currently in, (and more to the point, in the *sub-directory branches* of the directory you are in. So it doesn't just look in your current directory, but in all the directories that are **in** your current directory).
3) After you've set your PYTHONPATH in the step above, run your module from your current directory (the `toolkit` directory). Python should now find and load the modules you specified.
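The `export PYTHONPATH=.` step can also be done from inside a script (a sketch; this affects only the current process, not child shells):

```python
import sys

# Prepending '.' mirrors `export PYTHONPATH=.`: imports now also
# resolve against the current working directory.
if '.' not in sys.path:
    sys.path.insert(0, '.')

print('.' in sys.path)  # True
```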
Hope this helps. I was quite frustrated with this myself. |
Nicest way to pad zeroes to string | 339,007 | 647 | 2008-12-03T22:39:51Z | 339,012 | 76 | 2008-12-03T22:41:49Z | [
"python",
"string"
] | What is the nicest/shortest way to pad a string with zeroes to the left, so the string has a specific length? | For numbers:
```
print "%05d" % number
```
See also: [Python: String formatting](http://docs.python.org/library/string.html#string-formatting).
**EDIT**: It's worth noting that as of yesterday December 3rd, 2008, this method of formatting is deprecated in favour of the `format` string method:
```
print("{0:05d}".format(number)) # or
print(format(number, "05d"))
```
See [PEP 3101](http://www.python.org/dev/peps/pep-3101/) for details. |
Nicest way to pad zeroes to string | 339,007 | 647 | 2008-12-03T22:39:51Z | 339,013 | 984 | 2008-12-03T22:42:04Z | [
"python",
"string"
] | What is the nicest/shortest way to pad a string with zeroes to the left, so the string has a specific length? | Strings:
```
>>> n = '4'
>>> print n.zfill(3)
004
```
And for numbers:
```
>>> n = 4
>>> print '%03d' % n
004
>>> print format(4, '03') # python >= 2.6
004
>>> print '{0:03d}'.format(4) # python >= 2.6
004
>>> print '{foo:03d}'.format(foo=4) # python >= 2.6
004
>>> print('{:03d}'.format(4)) # python >= 2.7 + python3
004
>>> print('{0:03d}'.format(4)) # python 3
004
```
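All of those spellings agree, and on Python 3.6+ an f-string with the same format spec works too; a quick consolidated check (a sketch):

```python
n = 4
# Four equivalent ways to zero-pad to width 3:
results = [str(n).zfill(3), '%03d' % n, '{:03d}'.format(n), f'{n:03d}']
print(results)  # ['004', '004', '004', '004']
```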
[String formatting documentation](https://docs.python.org/2/library/string.html#formatexamples). |
Nicest way to pad zeroes to string | 339,007 | 647 | 2008-12-03T22:39:51Z | 339,019 | 9 | 2008-12-03T22:44:58Z | [
"python",
"string"
] | What is the nicest/shortest way to pad a string with zeroes to the left, so the string has a specific length? | ```
width = 10
x = 5
print "%0*d" % (width, x)
> 0000000005
```
See the print documentation for all the exciting details!
**Update for Python 3.x (7.5 years later)**
That last line should now be:
```
print("%0*d" % (width, x))
```
I.e. `print()` is now a function, not a statement. Note that I still prefer the Old School `printf()` style because, IMNSHO, it reads better, and because, um, I've been using that notation since January, 1980. Something ... old dogs .. something something ... new tricks. |
Nicest way to pad zeroes to string | 339,007 | 647 | 2008-12-03T22:39:51Z | 339,024 | 200 | 2008-12-03T22:47:13Z | [
"python",
"string"
] | What is the nicest/shortest way to pad a string with zeroes to the left, so the string has a specific length? | Just use the [rjust](http://docs.python.org/2/library/stdtypes.html#string-methods) method of the string object.
This example will make a string of 10 characters long, padding as necessary.
```
>>> t = 'test'
>>> t.rjust(10, '0')
>>> '000000test'
``` |
Nicest way to pad zeroes to string | 339,007 | 647 | 2008-12-03T22:39:51Z | 6,196,350 | 26 | 2011-06-01T04:40:02Z | [
"python",
"string"
] | What is the nicest/shortest way to pad a string with zeroes to the left, so the string has a specific length? | `str(n).zfill(width)` will work with `string`s, `int`s, `float`s... and is Python 2.*x* and 3.*x* compatible:
```
>>> n = 3
>>> str(n).zfill(5)
'00003'
>>> n = '3'
>>> str(n).zfill(5)
'00003'
>>> n = '3.0'
>>> str(n).zfill(5)
'003.0'
``` |