How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
312,466
8
2008-11-23T12:40:39Z
[ "python", "list", "split", "chunks" ]
I have a list of arbitrary length, and I need to split it up into equal size chunks and operate on it. There are some obvious ways to do this, like keeping a counter and two lists, and when the second list fills up, add it to the first list and empty the second list for the next round of data, but this is potentially extremely expensive.

I was wondering if anyone had a good solution to this for lists of any length, e.g. using generators. This should work:

```
l = range(1, 1000)
print chunks(l, 10)
-> [ [ 1..10 ], [ 11..20 ], .., [ 991..999 ] ]
```

I was looking for something useful in `itertools` but I couldn't find anything obviously useful. Might've missed it, though.

Related question: [What is the most “pythonic” way to iterate over a list in chunks?](http://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks)
If you know the list size:

```
def SplitList(lst, chunk_size):
    # slice the list at every chunk_size offset
    return [lst[offs:offs + chunk_size]
            for offs in range(0, len(lst), chunk_size)]
```

If you don't (e.g. an arbitrary iterator):

```
def IterChunks(sequence, chunk_size):
    res = []
    for item in sequence:
        res.append(item)
        if len(res) >= chunk_size:
            yield res
            res = []
    if res:
        yield res  # yield the last, incomplete, portion
```

In the latter case, it can be rephrased in a more beautiful way if you can be sure that the sequence always contains a whole number of chunks of the given size (i.e. there is no incomplete last chunk).
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
312,467
56
2008-11-23T12:41:37Z
[ "python", "list", "split", "chunks" ]
Here is a generator that works on arbitrary iterables:

```
import itertools

def split_seq(iterable, size):
    it = iter(iterable)
    item = list(itertools.islice(it, size))
    while item:
        yield item
        item = list(itertools.islice(it, size))
```

Example:

```
>>> import pprint
>>> pprint.pprint(list(split_seq(xrange(75), 10)))
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
 [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
 [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
 [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
 [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
 [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
 [70, 71, 72, 73, 74]]
```
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
312,644
204
2008-11-23T15:48:53Z
[ "python", "list", "split", "chunks" ]
Directly from the (old) Python documentation (recipes for itertools):

```
from itertools import izip, chain, repeat

def grouper(n, iterable, padvalue=None):
    "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return izip(*[chain(iterable, repeat(padvalue, n-1))]*n)
```

The current version, as suggested by J.F.Sebastian:

```
from itertools import izip_longest  # for Python 2.x
#from itertools import zip_longest  # for Python 3.x
#from six.moves import zip_longest  # for both (uses the six compat library)

def grouper(n, iterable, padvalue=None):
    "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return izip_longest(*[iter(iterable)]*n, fillvalue=padvalue)
```

I guess Guido's time machine works—worked—will work—will have worked—was working again.

These solutions work because `[iter(iterable)]*n` (or the equivalent in the earlier version) creates *one* iterator, repeated `n` times in the list. `izip_longest` then effectively performs a round-robin of "each" iterator; because this is the same iterator, it is advanced by each such call, resulting in each such zip-roundrobin generating one tuple of `n` items.
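For reference, a Python 3 sketch of the shared-iterator mechanism described above (`izip_longest` became `itertools.zip_longest`):

```python
from itertools import zip_longest

def grouper(n, iterable, padvalue=None):
    # [iter(iterable)] * n repeats the SAME iterator object n times, so
    # zip_longest's round-robin pulls n consecutive items per output tuple
    return zip_longest(*[iter(iterable)] * n, fillvalue=padvalue)

print(list(grouper(3, 'abcdefg', 'x')))
# [('a', 'b', 'c'), ('d', 'e', 'f'), ('g', 'x', 'x')]
```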
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
1,751,478
409
2009-11-17T20:17:16Z
[ "python", "list", "split", "chunks" ]
If you want something super simple:

```
def chunks(l, n):
    n = max(1, n)
    return (l[i:i+n] for i in xrange(0, len(l), n))
```
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
3,125,186
38
2010-06-26T19:10:07Z
[ "python", "list", "split", "chunks" ]
```
def chunk(input, size):
    return map(None, *([iter(input)] * size))
```
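Note that `map(None, ...)` only exists in Python 2, where it padded short groups with `None`; a Python 3 sketch of the equivalent behavior uses `zip_longest`:

```python
from itertools import zip_longest

def chunk(seq, size):
    # Python 2's map(None, ...) padded the last short group with None;
    # zip_longest over copies of one shared iterator reproduces that
    return list(zip_longest(*([iter(seq)] * size)))

print(chunk(range(7), 3))
# [(0, 1, 2), (3, 4, 5), (6, None, None)]
```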
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
3,226,719
25
2010-07-12T07:58:43Z
[ "python", "list", "split", "chunks" ]
Simple yet elegant:

```
l = range(1, 1000)
print [l[x:x+10] for x in xrange(0, len(l), 10)]
```

or, if you prefer:

```
chunks = lambda l, n: [l[x:x+n] for x in xrange(0, len(l), n)]
chunks(l, 10)
```
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
5,711,993
11
2011-04-19T05:27:19Z
[ "python", "list", "split", "chunks" ]
If you had a chunk size of 3, for example, you could do:

```
zip(*[iterable[i::3] for i in range(3)])
```

source: <http://code.activestate.com/recipes/303060-group-a-list-into-sequential-n-tuples/>

I would use this when my chunk size is a fixed number I can type, e.g. '3', and would never change.
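One caveat worth noting: because `zip` stops at the shortest slice, this recipe silently drops a trailing partial chunk. A quick Python 3 sketch:

```python
def chunk3(seq):
    # the three stride-3 slices have lengths 3, 2, 2 for a 7-item input,
    # so zip stops after two tuples and the final element is dropped
    return list(zip(*[seq[i::3] for i in range(3)]))

print(chunk3(list(range(7))))
# [(0, 1, 2), (3, 4, 5)] -- note the 6 is gone
```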
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
16,004,505
8
2013-04-14T21:26:06Z
[ "python", "list", "split", "chunks" ]
A generator expression:

```
def chunks(seq, n):
    return (seq[i:i+n] for i in xrange(0, len(seq), n))
```

e.g.

```
print list(chunks(range(1, 1000), 10))
```
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
16,315,158
14
2013-05-01T08:42:21Z
[ "python", "list", "split", "chunks" ]
[more-itertools has a chunks iterator.](http://pythonhosted.org/more-itertools/api.html#more_itertools.chunked) It also has a lot more things, including all the recipes in the itertools documentation.
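For a sense of what `chunked` does, here is a rough stdlib approximation (the real library version is more featureful; this is only a sketch):

```python
from itertools import islice

def chunked(iterable, n):
    # repeatedly slice n items off one shared iterator until exhausted;
    # the final chunk may be shorter than n
    it = iter(iterable)
    while True:
        piece = list(islice(it, n))
        if not piece:
            return
        yield piece

print(list(chunked(range(7), 3)))
# [[0, 1, 2], [3, 4, 5], [6]]
```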
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
16,935,535
35
2013-06-05T08:54:26Z
[ "python", "list", "split", "chunks" ]
I know this is kind of old, but I don't know why nobody mentioned `numpy.array_split`:

```
import numpy as np

lst = range(50)

In [26]: np.array_split(lst, 5)
Out[26]:
[array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
 array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]),
 array([20, 21, 22, 23, 24, 25, 26, 27, 28, 29]),
 array([30, 31, 32, 33, 34, 35, 36, 37, 38, 39]),
 array([40, 41, 42, 43, 44, 45, 46, 47, 48, 49])]
```
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
19,264,525
11
2013-10-09T06:17:29Z
[ "python", "list", "split", "chunks" ]
I like the Python doc's version proposed by tzot and J.F.Sebastian a lot, but it has two shortcomings:

* it is not very explicit
* I usually don't want a fill value in the last chunk

I'm using this one a lot in my code:

```
from itertools import islice

def chunks(n, iterable):
    iterable = iter(iterable)
    while True:
        yield tuple(islice(iterable, n)) or iterable.next()
```

UPDATE: A lazy chunks version:

```
from itertools import chain, islice

def chunks(n, iterable):
    iterable = iter(iterable)
    while True:
        yield chain([next(iterable)], islice(iterable, n-1))
```
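The first version above is Python 2 (`iterable.next()`); a Python 3 adaptation needs `next()`, and since PEP 479 a generator must `return` rather than let `StopIteration` escape. A sketch:

```python
from itertools import islice

def chunks(n, iterable):
    it = iter(iterable)
    while True:
        piece = tuple(islice(it, n))
        if not piece:
            return  # PEP 479: don't let StopIteration leak out of a generator
        yield piece

print(list(chunks(3, range(8))))
# [(0, 1, 2), (3, 4, 5), (6, 7)]
```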
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
21,767,522
13
2014-02-13T23:07:17Z
[ "python", "list", "split", "chunks" ]
## Critique of other answers here:

None of these answers produce evenly sized chunks; they all leave a runt chunk at the end, so they're not completely balanced. If you were using these functions to distribute work, you've built in the prospect of one worker likely finishing well before the others, so it would sit around doing nothing while the others continued working hard.

For example, the current top answer ends with:

```
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 71, 72, 73, 74]]
```

I just hate that runt at the end!

Others, like `list(grouper(3, xrange(7)))` and `chunk(xrange(7), 3)`, both return `[(0, 1, 2), (3, 4, 5), (6, None, None)]`. The `None`'s are just padding, and rather inelegant in my opinion. They are NOT evenly chunking the iterables. Why can't we divide these better?

## My Solution(s)

Here's a balanced solution, adapted from a function I've used in production (note: in Python 3 replace `xrange` with `range`):

```
def baskets_from(items, maxbaskets=25):
    baskets = [[] for _ in xrange(maxbaskets)]  # in Python 3 use range
    for i, item in enumerate(items):
        baskets[i % maxbaskets].append(item)
    return filter(None, baskets)
```

And I created a generator that does the same if you put it into a list:

```
def iter_baskets_from(items, maxbaskets=3):
    '''generates evenly balanced baskets from indexable iterable'''
    item_count = len(items)
    baskets = min(item_count, maxbaskets)
    for x_i in xrange(baskets):
        yield [items[y_i] for y_i in xrange(x_i, item_count, baskets)]
```

And finally, since I see that all of the above functions return elements in a contiguous order (as they were given):

```
def iter_baskets_contiguous(items, maxbaskets=3, item_count=None):
    '''
    generates balanced baskets from iterable, contiguous contents
    provide item_count if providing an iterator that doesn't support len()
    '''
    item_count = item_count or len(items)
    baskets = min(item_count, maxbaskets)
    items = iter(items)
    floor = item_count // baskets
    ceiling = floor + 1
    stepdown = item_count % baskets
    for x_i in xrange(baskets):
        length = ceiling if x_i < stepdown else floor
        yield [items.next() for _ in xrange(length)]
```

## Output

To test them out:

```
print(baskets_from(xrange(6), 8))
print(list(iter_baskets_from(xrange(6), 8)))
print(list(iter_baskets_contiguous(xrange(6), 8)))
print(baskets_from(xrange(22), 8))
print(list(iter_baskets_from(xrange(22), 8)))
print(list(iter_baskets_contiguous(xrange(22), 8)))
print(baskets_from('ABCDEFG', 3))
print(list(iter_baskets_from('ABCDEFG', 3)))
print(list(iter_baskets_contiguous('ABCDEFG', 3)))
print(baskets_from(xrange(26), 5))
print(list(iter_baskets_from(xrange(26), 5)))
print(list(iter_baskets_contiguous(xrange(26), 5)))
```

Which prints out:

```
[[0], [1], [2], [3], [4], [5]]
[[0], [1], [2], [3], [4], [5]]
[[0], [1], [2], [3], [4], [5]]
[[0, 8, 16], [1, 9, 17], [2, 10, 18], [3, 11, 19], [4, 12, 20], [5, 13, 21], [6, 14], [7, 15]]
[[0, 8, 16], [1, 9, 17], [2, 10, 18], [3, 11, 19], [4, 12, 20], [5, 13, 21], [6, 14], [7, 15]]
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14], [15, 16, 17], [18, 19], [20, 21]]
[['A', 'D', 'G'], ['B', 'E'], ['C', 'F']]
[['A', 'D', 'G'], ['B', 'E'], ['C', 'F']]
[['A', 'B', 'C'], ['D', 'E'], ['F', 'G']]
[[0, 5, 10, 15, 20, 25], [1, 6, 11, 16, 21], [2, 7, 12, 17, 22], [3, 8, 13, 18, 23], [4, 9, 14, 19, 24]]
[[0, 5, 10, 15, 20, 25], [1, 6, 11, 16, 21], [2, 7, 12, 17, 22], [3, 8, 13, 18, 23], [4, 9, 14, 19, 24]]
[[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20], [21, 22, 23, 24, 25]]
```

Notice that the contiguous generator provides chunks in the same length patterns as the other two, but the items are all in order, and they are as evenly divided as one may divide a list of discrete elements.
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
22,045,226
29
2014-02-26T15:02:00Z
[ "python", "list", "split", "chunks" ]
I'm surprised nobody has thought of using `iter`'s [two-argument form](http://docs.python.org/2/library/functions.html#iter):

```
from itertools import islice

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())
```

Demo:

```
>>> list(chunk(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13)]
```

This works with any iterable and produces output lazily. It returns tuples rather than iterators, but I think it has a certain elegance nonetheless. It also doesn't pad; if you want padding, a simple variation on the above will suffice:

```
from itertools import islice, chain, repeat

def chunk_pad(it, size, padval=None):
    it = chain(iter(it), repeat(padval))
    return iter(lambda: tuple(islice(it, size)), (padval,) * size)
```

Demo:

```
>>> list(chunk_pad(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, None)]
>>> list(chunk_pad(range(14), 3, 'a'))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 'a')]
```

Like the `izip_longest`-based solutions, the above *always* pads. As far as I know, there's no one- or two-line itertools recipe for a function that *optionally* pads. By combining the above two approaches, this one comes pretty close:

```
_no_padding = object()

def chunk(it, size, padval=_no_padding):
    if padval == _no_padding:
        it = iter(it)
        sentinel = ()
    else:
        it = chain(iter(it), repeat(padval))
        sentinel = (padval,) * size
    return iter(lambda: tuple(islice(it, size)), sentinel)
```

Demo:

```
>>> list(chunk(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13)]
>>> list(chunk(range(14), 3, None))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, None)]
>>> list(chunk(range(14), 3, 'a'))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 'a')]
```

I believe this is the shortest chunker proposed that offers optional padding.
How do you split a list into evenly sized chunks?
312,443
959
2008-11-23T12:15:52Z
29,009,933
18
2015-03-12T12:36:10Z
[ "python", "list", "split", "chunks" ]
I saw the most awesome Python-ish answer in a [duplicate](http://stackoverflow.com/questions/23286254/convert-list-to-a-list-of-tuples-python) of this question:

```
l = range(1,15)
i = iter(l)
print zip(i,i,i)
```

You can create an n-tuple for any n.
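In Python 3, `zip` is lazy and `print` is a function, so the same trick reads as below; note that trailing items that don't fill a whole tuple are silently dropped:

```python
l = range(1, 15)
i = iter(l)
# all three zip arguments are the SAME iterator, so each output tuple
# consumes three consecutive items; the leftover 13 and 14 are dropped
print(list(zip(i, i, i)))
# [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)]
```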
No Module named django.core
312,549
62
2008-11-23T14:09:24Z
325,295
34
2008-11-28T08:40:35Z
[ "python", "django" ]
I have updated to the latest Django version, 1.0.2, after uninstalling my old Django version. But now when I run django-admin.py I get the following error. How can I resolve this?

```
Traceback (most recent call last):
  File "C:\Python25\Lib\site-packages\django\bin\django-admin.py", line 2, in <module>
    from django.core import management
ImportError: No module named django.core
```
You must make sure that django is in your PYTHONPATH. To test, just do an `import django` from a python shell. There should be no output:

```
ActivePython 2.5.1.1 (ActiveState Software Inc.) based on
Python 2.5.1 (r251:54863, May  1 2007, 17:47:05) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>>
```

If you installed django via `setuptools` (`easy_install`, or with the `setup.py` included with django), then check in your `site-packages` whether the `.pth` file (`easy-install.pth`, `django.pth`, ...) points to the correct folder.

HIH.
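A quick way to see where Python is actually looking for packages (not Django-specific, just a diagnostic sketch):

```python
import sys

# each entry is a directory Python searches when importing packages;
# for `import django` to succeed, the folder containing the django
# package must appear somewhere in this list
for path in sys.path:
    print(path)
```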
No Module named django.core
312,549
62
2008-11-23T14:09:24Z
2,081,421
8
2010-01-17T15:21:48Z
[ "python", "django" ]
I'm sure it's related to something incorrect in my setup, but I am having the same problem, and it works properly if I call it thusly:

```
c:\somedir>python c:\Python26\scripts\django-admin.py startproject mysite
```
No Module named django.core
312,549
62
2008-11-23T14:09:24Z
2,172,329
18
2010-01-31T16:08:18Z
[ "python", "django" ]
You can get around this problem by providing the full path to your django-admin.py file:

```
python c:\python25\scripts\django-admin.py startproject mysite
```
No Module named django.core
312,549
62
2008-11-23T14:09:24Z
2,352,728
16
2010-02-28T21:21:39Z
[ "python", "django" ]
I encountered this problem today, it turned out that I had C:\Python26 in my path and .py files were associated to Python 3.1. Repairing the proper version of Python, either through Programs and Features or by running the .msi, will fix the associations.
No Module named django.core
312,549
62
2008-11-23T14:09:24Z
8,186,132
9
2011-11-18T17:07:18Z
[ "python", "django" ]
It was a PYTHONPATH environment variable issue for me, as others mentioned above, but no one has really shown how to set it for people who could use the extra instruction.

**Linux (bash)**

I set this variable in my bashrc file in my home folder (.bashrc is the file for me since my shell is /bin/bash).

```
vim ~/.bashrc
export PYTHONPATH=/usr/local/python-2.7.2/lib/python2.7/site-packages:$PYTHONPATH
source ~/.bashrc
```

The path should be wherever your django source is. Mine is located at /usr/local/python-2.7.2/lib/python2.7/site-packages/django, so I just specified /usr/local/python-2.7.2/lib/python2.7/site-packages without the django portion.

**OSX**

On OSX the path is `/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages` and you can add it to `/etc/profile`:

```
sudo echo "PYTHONPATH=/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages:$PYTHONPATH" >> /etc/profile
source /etc/profile
```

(Note that the `>>` redirection is performed by your own shell, not by `sudo`, so this only works if you can already write to `/etc/profile`; otherwise use something like `sudo sh -c 'echo ... >> /etc/profile'`.)
No Module named django.core
312,549
62
2008-11-23T14:09:24Z
10,070,104
39
2012-04-09T07:13:06Z
[ "python", "django" ]
I have the same problem on Windows and it seems I've found the cause. I have both 2.7 and 3.x installed, and it has something to do with the program associated with .py files.

In the command line, type:

> assoc .py

and the result is:

> .py=Python.File

which means .py is associated with Python.File. Then I tried this:

> ftype Python.File

I got:

> Python.File="C:\Python32\python.exe" "%1" %\*

which means on the command line .py is associated with my Python 3.2 installation -- and that's why I can't just type "django-admin.py blah blah" to use django.

All you need to do is change the association:

> ftype Python.File="C:\Python27\python.exe" "%1" %\*

Then everything's okay!
Python, optparse and file mask
312,673
3
2008-11-23T16:12:42Z
312,677
8
2008-11-23T16:20:25Z
[ "python", "optparse" ]
```
if __name__ == '__main__':
    parser = OptionParser()
    parser.add_option("-i", "--input_file",
                      dest="input_filename",
                      help="Read input from FILE", metavar="FILE")
    (options, args) = parser.parse_args()
    print options
```

The result is:

```
$ python convert.py -i video_*
{'input_filename': 'video_1.wmv'}
```

There are video\_[1-6].wmv files in the current folder. The question is: why does video\_\* become video\_1.wmv? What am I doing wrong?
Python has nothing to do with this -- it's the shell. Call

```
$ python convert.py -i 'video_*'
```

and it will pass in that wildcard literally.

The other five values were passed in as `args`, not attached to the `-i`, exactly as if you'd run `python convert.py -i video_1 video_2 video_3 video_4 video_5 video_6`; the `-i` only attaches to the immediately following parameter.

That said, your best bet might be to just read your input filenames from `args`, rather than using `options.input`.
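If you pass the quoted pattern through, the script can expand it itself with the stdlib `glob` module; a sketch (the `expand` helper name is illustrative):

```python
import glob

def expand(pattern):
    # let Python expand the shell-style wildcard, so quoting the
    # argument on the command line ('video_*') is enough
    return sorted(glob.glob(pattern))

# e.g. expand('video_*') would return the six .wmv filenames
# if run in the question's directory
```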
Python: Problem with overloaded constructors
312,695
7
2008-11-23T16:41:56Z
312,727
36
2008-11-23T17:09:12Z
[ "python", "exception", "constructor-overloading" ]
WARNING: I have been learning Python for all of 10 minutes so apologies for any stupid questions!

I have written the following code, however I get the following exception:

> Message File Name Line Position
> Traceback Node 31
> exceptions.TypeError: this constructor takes no arguments

```
class Computer:

    name = "Computer1"
    ip = "0.0.0.0"
    screenSize = 17

    def Computer(compName, compIp, compScreenSize):
        name = compName
        ip = compIp
        screenSize = compScreenSize
        printStats()
        return

    def Computer():
        printStats()
        return

    def printStats():
        print "Computer Statistics: --------------------------------"
        print "Name:" + name
        print "IP:" + ip
        print "ScreenSize:", screenSize  # cannot concatenate 'str' and 'tuple' objects
        print "-----------------------------------------------------"
        return

comp1 = Computer()
comp2 = Computer("The best computer in the world", "27.1.0.128", 22)
```

Any thoughts?
I'm going to assume you're coming from a Java-ish background, so there are a few key differences to point out.

```
class Computer(object):
    """Docstrings are used kind of like Javadoc to document classes and
    members. They are the first thing inside a class or method.

    You probably want to extend object, to make it a "new-style" class.
    There are reasons for this that are a bit complex to explain."""

    # everything down here is a static variable, unlike in Java or C# where
    # declarations here are for what members a class has. All instance
    # variables in Python are dynamic, unless you specifically tell Python
    # otherwise.
    defaultName = "belinda"
    defaultRes = (1024, 768)
    defaultIP = "192.168.5.307"

    def __init__(self, name=defaultName, resolution=defaultRes, ip=defaultIP):
        """Constructors in Python are called __init__. Methods with names
        like __something__ often have special significance to the Python
        interpreter.

        The first argument to any class method is a reference to the current
        object, called "self" by convention.

        You can use default function arguments instead of function
        overloading."""
        self.name = name
        self.resolution = resolution
        self.ip = ip
        # and so on

    def printStats(self):
        """You could instead use a __str__(self, ...) function to return this
        string. Then you could simply do "print(str(computer))" if you wanted
        to."""
        print "Computer Statistics: --------------------------------"
        print "Name:" + self.name
        print "IP:" + self.ip
        print "ScreenSize:", self.resolution  # cannot concatenate 'str' and 'tuple' objects
        print "-----------------------------------------------------"
```
Django authentication and Ajax - URLs that require login
312,925
45
2008-11-23T20:35:16Z
523,196
50
2009-02-07T05:20:02Z
[ "javascript", "python", "django", "authentication" ]
I want to add some [Ajax](http://en.wikipedia.org/wiki/Ajax%5F%28programming%29)-niceness to my Django-coded website. In my Django code, I use the `@login_required` decorator from `django.contrib.auth.decorators` to mark which view requires authentication. The default behavior when an unauthenticated user clicks it is to redirect him/her to the login page, and then on to the target page. What I saw on some sites, and really liked, is that when a user clicks a link leading to a place restricted to logged-in users only, instead of getting redirected to a login page, he/she gets a popup window (via JavaScript) asking him/her to log in or register. There's no redirection part, so no need for a user to use the "back" key if he/she decides he/she really doesn't like the website enough to waste the time registering. So, the question is: how would you manage the task of automatically marking some links as "restricted" so JavaScript can handle their `onclick` event and display a "please log in" popup?
I am facing the same issue, and, like you, I would like a simple decorator to wrap around a Django ajax view in order to handle authentication in the same way that I have other views. One approach that seems promising to me is to use such a decorator in conjunction with JavaScript that looks for a certain value in the response. Here is ~~first~~ revised draft of the decorator: ``` from functools import wraps def ajax_login_required(view_func): @wraps(view_func) def wrapper(request, *args, **kwargs): if request.user.is_authenticated(): return view_func(request, *args, **kwargs) json = simplejson.dumps({ 'not_authenticated': True }) return HttpResponse(json, mimetype='application/json') return wrapper ``` Here is the view: ``` @ajax_login_required def ajax_update_module(request, module_slug, action): # Etc ... return HttpResponse(json, mimetype='application/json') ``` And here is the JavaScript (jQuery): ``` $.post('/restricted-url/', data, function(json) { if (json.not_authenticated) { alert('Not authorized.'); // Or something in a message DIV return; } // Etc ... }); ``` --- **EDIT**: I've attempted to use `functools.wraps`, as suggested. I have not actually used this decorator in working code, so beware of possible bugs.
Comparing multiple dictionaries in Python
314,583
4
2008-11-24T16:02:37Z
314,633
7
2008-11-24T16:16:20Z
[ "python", "data-mining" ]
I'm new to Python and am running into a problem I can't google my way out of. I've built a GUI using wxPython and ObjectiveListView. In its very center, the GUI has a list control displaying data in X rows (the data is loaded by the user) and in five columns. When the user selects multiple entries from the list control (pressing CTRL or shift while clicking), the ObjectiveListView module gives me a list of dictionaries, the dictionaries containing the data in the rows of the list control. This is exactly what I want, good! The returned list looks something like this:

```
print MyList
[{'id':1023, 'type':'Purchase', 'date':'23.8.2008', 'sum':'-21,90', 'target':'Apple Store'},
 {'id':1024, 'type':'Purchase', 'date':'24.8.2008', 'sum':'-21,90', 'target':'Apple Store'},
 {'id':23, 'type':'Purchase', 'date':'2.8.2008', 'sum':'-21,90', 'target':'Apple Store'}]
```

All the dictionaries have the same keys, but the values change. The 'id' value is unique. Here the problems start. I want to get the common values for all the items the user selected. In the above list they would be 'sum':'-21,90' and 'target':'Apple Store'. I don't know how to properly compare the dicts in the list. One big problem is that I don't know beforehand how many dicts the list contains, since it's decided by the user. I have a vague idea that list comprehensions would be the way to go, but I only know how to compare two lists with list comprehensions, not n lists. Any help would be appreciated.
```
>>> mysets = (set(x.items()) for x in MyList)
>>> reduce(lambda a,b: a.intersection(b), mysets)
set([('sum', '-21,90'), ('type', 'Purchase'), ('target', 'Apple Store')])
```

First, I've created a generator that will convert the list of dicts into an iterable sequence of sets of key,value pairs. You could use a list comprehension here, but this way doesn't convert your entire list into yet another list, which is useful if you don't know how big it will be.

Then I've used reduce to apply a function that finds the common values between each set. It finds the intersection of set 1 & set 2, which is itself a set, then the intersection of that set & set 3, etc. The mysets generator will happily feed each set on demand to the reduce function until it's done.

Reduce has been removed as a built-in in Python 3.0, but it is still available in functools.

You could of course make it a one-liner by replacing mysets in the reduce with the generator expression, but that reduces the readability IMO. In practice I'd probably even go one step further and break the lambda out into its own line as well:

```
>>> mysets = (set(x.items()) for x in MyList)
>>> find_common = lambda a,b: a.intersection(b)
>>> reduce(find_common, mysets)
set([('sum', '-21,90'), ('type', 'Purchase'), ('target', 'Apple Store')])
```

And if you need the end result to be a dict, just wrap it like so (note that a generator is single-use, so it has to be recreated first):

```
>>> mysets = (set(x.items()) for x in MyList)
>>> dict(reduce(find_common, mysets))
{'sum': '-21,90', 'type': 'Purchase', 'target': 'Apple Store'}
```

dict can accept any iterator of key,value pairs, such as the set of tuples returned at the end.
How to debug Web2py applications?
315,165
16
2008-11-24T19:27:05Z
315,318
8
2008-11-24T20:22:22Z
[ "python", "debugging", "web2py" ]
Is it possible? By debug I mean setting breakpoints, inspect values and advance step by step.
I haven't used web2py, but if it runs in a terminal window, you can use standard pdb stuff. Add this line somewhere in your code: ``` import pdb; pdb.set_trace() ``` This will invoke the debugger and break. Then you can use [PDB](http://docs.python.org/lib/module-pdb.html) commands: n to step to the next line, l to list code, s to step into a function, p to print values, etc.
How to debug Web2py applications?
315,165
16
2008-11-24T19:27:05Z
318,501
9
2008-11-25T19:03:51Z
[ "python", "debugging", "web2py" ]
Is it possible? By debug I mean setting breakpoints, inspect values and advance step by step.
You can do remote debugging of python web apps over TCP/IP with [winpdb](http://winpdb.org/).
How to debug Web2py applications?
315,165
16
2008-11-24T19:27:05Z
806,233
8
2009-04-30T10:04:10Z
[ "python", "debugging", "web2py" ]
Is it possible? By debug I mean setting breakpoints, inspect values and advance step by step.
One can debug applications built on Web2py using the following set-up:

1. Eclipse IDE
2. Install PyDev into Eclipse
3. Set breakpoints in your code as needed
4. Within Eclipse, right-click the file web2py.py and select Debug As -> Python Run
5. When a breakpoint is hit, Eclipse will jump to it, and you can inspect variables and step through the code
Properly formatted example for Python iMAP email access?
315,362
9
2008-11-24T20:31:14Z
315,710
10
2008-11-24T22:18:52Z
[ "python", "email", "imap" ]
tldr: Can someone show me how to properly format this Python iMAP example so it works? from <https://docs.python.org/2.4/lib/imap4-example.html> > ``` > import getpass, imaplib > > M = imaplib.IMAP4() > M.login(getpass.getuser(), getpass.getpass()) > M.select() > typ, data = M.search(None, 'ALL') > for num in data[0].split(): > typ, data = M.fetch(num, '(RFC822)') > print 'Message %s\n%s\n' % (num, data[0][1]) > M.close() > M.logout() > ``` Assuming my email is "[email protected]" and the password is "password," how should this look? I tried `M.login(getpass.getuser([email protected]), getpass.getpass(password))` and it timed out. Complete newb here, so it's very likely I missed something obvious (like creating an iMAP object first? Not sure).
```
import imaplib

# you want to connect to a server; specify which server
server = imaplib.IMAP4_SSL('imap.googlemail.com')

# after connecting, tell the server who you are
server.login('[email protected]', 'password')

# this will show you a list of available folders
# possibly your Inbox is called INBOX, but check the list of mailboxes
code, mailboxen = server.list()
print mailboxen

# if it's called INBOX, then…
server.select("INBOX")
```

The rest of your code seems correct.
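Once a message has been fetched (the `data[0][1]` string in the question's loop), the stdlib `email` package can parse it. A rough sketch — the helper name and choice of headers are mine, not part of imaplib:

```python
import email

def summarize_message(raw_message):
    """Parse one raw RFC 822 message, as returned by FETCH '(RFC822)',
    and return a few interesting pieces of it."""
    msg = email.message_from_string(raw_message)
    if msg.is_multipart():
        # take the first text/plain part as "the" body
        body = ""
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                body = part.get_payload()
                break
    else:
        body = msg.get_payload()
    return {"from": msg["From"], "subject": msg["Subject"], "body": body}
```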
Properly formatted example for Python iMAP email access?
315,362
9
2008-11-24T20:31:14Z
316,457
10
2008-11-25T05:42:27Z
[ "python", "email", "imap" ]
tldr: Can someone show me how to properly format this Python iMAP example so it works? from <https://docs.python.org/2.4/lib/imap4-example.html> > ``` > import getpass, imaplib > > M = imaplib.IMAP4() > M.login(getpass.getuser(), getpass.getpass()) > M.select() > typ, data = M.search(None, 'ALL') > for num in data[0].split(): > typ, data = M.fetch(num, '(RFC822)') > print 'Message %s\n%s\n' % (num, data[0][1]) > M.close() > M.logout() > ``` Assuming my email is "[email protected]" and the password is "password," how should this look? I tried `M.login(getpass.getuser([email protected]), getpass.getpass(password))` and it timed out. Complete newb here, so it's very likely I missed something obvious (like creating an iMAP object first? Not sure).
Here is a script I used to use to grab logwatch info from my mailbox. [Presented at LFNW 2008](http://brianlane.com/articles/lfnw2008/) - ``` #!/usr/bin/env python ''' Utility to scan my mailbox for new mesages from Logwatch on systems and then grab useful info from the message and output a summary page. by Brian C. Lane <[email protected]> ''' import os, sys, imaplib, rfc822, re, StringIO server ='mail.brianlane.com' username='yourusername' password='yourpassword' M = imaplib.IMAP4_SSL(server) M.login(username, password) M.select() typ, data = M.search(None, '(UNSEEN SUBJECT "Logwatch")') for num in data[0].split(): typ, data = M.fetch(num, '(RFC822)') # print 'Message %s\n%s\n' % (num, data[0][1]) match = re.search( "^(Users logging in.*?)^\w", data[0][1], re.MULTILINE|re.DOTALL ) if match: file = StringIO.StringIO(data[0][1]) message = rfc822.Message(file) print message['from'] print match.group(1).strip() print '----' M.close() M.logout() ```
How do I search through a folder for the filename that matches a regular expression using Python?
315,381
2
2008-11-24T20:38:45Z
315,430
8
2008-11-24T20:51:04Z
[ "python", "regex" ]
I am having some difficulty writing a function that will search through a directory for a file that matches a specific regular expression (which I have compiled using 're.compile'). So my question is: How do I search through a directory (I plan to use os.walk) for a file that matches a specific regular expression? An example would be very much appreciated. Thanks in advance.
This will find all files starting with two digits and ending in gif, you can add the files into a global list, if you wish: ``` import re import os r = re.compile(r'\d{2}.+gif$') for root, dirs, files in os.walk('/home/vinko'): l = [os.path.join(root,x) for x in files if r.match(x)] if l: print l #Or append to a global list, whatever ```
Automagically expanding a Python list with formatted output
315,672
6
2008-11-24T22:02:29Z
315,684
16
2008-11-24T22:06:59Z
[ "python", "list", "mysql" ]
Does anyone know if there's a way to automatically expand a list in Python, separated by commas? I'm writing some Python code that uses the MySQLdb library, and I'm trying to dynamically update a list of rows in a MySQL database with certain key values. For instance, in the code below, I'd like to have the numeric values in the record\_ids list expand into a SQL "`IN`" clause.

```
import MySQLdb

record_ids = [ 23, 43, 71, 102, 121, 241 ]

mysql = MySQLdb.connect(user="username", passwd="secret", db="apps")
mysql_cursor = mysql.cursor()

sqlStmt="UPDATE apps.sometable SET lastmod=SYSDATE() where rec_id in ( %s )"

mysql_cursor.execute( sqlStmt, record_ids )
mysql.commit()
```

Any help would be appreciated!
Try:

```
",".join( map(str, record_ids) )
```

`",".join( list_of_strings )` joins a list of strings by separating them with commas.

If you have a list of numbers, `map( str, list )` will convert it to a list of strings.
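Note that splicing the joined values straight into the SQL string only stays safe while the list really contains numbers. An alternative sketch that builds one placeholder per value and lets the driver do the quoting (assuming MySQLdb's `format` paramstyle):

```python
record_ids = [23, 43, 71, 102, 121, 241]

# one "%s" placeholder per id: "%s, %s, %s, %s, %s, %s"
placeholders = ", ".join(["%s"] * len(record_ids))
sql = ("UPDATE apps.sometable SET lastmod=SYSDATE() "
       "WHERE rec_id IN (%s)" % placeholders)

# mysql_cursor.execute(sql, record_ids)  # the driver escapes each value
```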
Running a function periodically in twisted protocol
315,716
23
2008-11-24T22:21:17Z
316,559
36
2008-11-25T07:12:05Z
[ "python", "tcp", "twisted", "protocols" ]
I am looking for a way to periodically send some data over all clients connected to a TCP port. I am looking at twisted python and I am aware of reactor.callLater. But how do I use it to send some data to all connected clients periodically ? The data sending logic is in Protocol class and it is instantiated by the reactor as needed. I don't know how to tie it from reactor to all protocol instances...
You would probably want to do this in the Factory for the connections. The Factory is not automatically notified when a connection is made or lost, so you can notify it from the Protocol. Here is a complete example of how to use twisted.internet.task.LoopingCall in conjunction with a customised basic Factory and Protocol to announce that '10 seconds has passed' to every connection every 10 seconds.

```
from twisted.internet import reactor, protocol, task

class MyProtocol(protocol.Protocol):
    def connectionMade(self):
        self.factory.clientConnectionMade(self)
    def connectionLost(self, reason):
        self.factory.clientConnectionLost(self)

class MyFactory(protocol.Factory):
    protocol = MyProtocol
    def __init__(self):
        self.clients = []
        self.lc = task.LoopingCall(self.announce)
        self.lc.start(10)

    def announce(self):
        for client in self.clients:
            client.transport.write("10 seconds has passed\n")

    def clientConnectionMade(self, client):
        self.clients.append(client)

    def clientConnectionLost(self, client):
        self.clients.remove(client)

myfactory = MyFactory()
reactor.listenTCP(9000, myfactory)
reactor.run()
```
Python float to Decimal conversion
316,238
26
2008-11-25T02:53:44Z
316,253
25
2008-11-25T03:07:03Z
[ "python", "decimal" ]
Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first. This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as `Decimal("%.15f" % my_float)`, which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal. Can someone suggest a good way to convert from float to Decimal preserving the value as the user has entered it, perhaps limiting the number of significant digits that can be supported?
### Python <2.7

```
"%.15g" % f
```

Or in Python 3.0:

```
format(f, ".15g")
```

### Python 2.7+, 3.2+

Just pass the float to the `Decimal` constructor directly.
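The two formatting calls above can be wrapped into one small helper that keeps significant digits rather than decimal places. A sketch (the function name is mine); note that on Python 2.7+/3.2+ `Decimal(f)` also works, but it keeps the float's full binary expansion:

```python
from decimal import Decimal

def float_to_decimal(f, sig_digits=15):
    """Convert a float to Decimal, rounded to `sig_digits`
    significant digits (not decimal places)."""
    return Decimal(format(f, ".%dg" % sig_digits))
```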
Python float to Decimal conversion
316,238
26
2008-11-25T02:53:44Z
316,308
21
2008-11-25T03:40:02Z
[ "python", "decimal" ]
Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first. This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as `Decimal("%.15f" % my_float)`, which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal. Can someone suggest a good way to convert from float to Decimal preserving the value as the user has entered it, perhaps limiting the number of significant digits that can be supported?
You said in your question: > Can someone suggest a good way to > convert from float to Decimal > **preserving value as the user has > entered** But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
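To see why the float detour loses information, compare converting the user's string directly against routing it through `float` first (a small illustration; the input value is made up):

```python
from decimal import Decimal

user_input = "0.12345678901234567890"   # 20 significant digits, as typed

exact = Decimal(user_input)                  # keeps every digit
via_float = Decimal(str(float(user_input)))  # rounded to float precision

# `exact` still compares equal to what the user typed; `via_float` doesn't.
```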
Problem compiling MySQLdb for Python 2.6 on Win32
316,484
9
2008-11-25T06:14:20Z
319,007
9
2008-11-25T21:51:20Z
[ "python", "mysql", "winapi" ]
I'm using Django and Python 2.6, and I want to grow my application using a MySQL backend. Problem is that there isn't a win32 package for MySQLdb on Python 2.6. Now I'm no hacker, but I thought I might compile it myself using MSVC++9 Express. But I run into a problem that the compiler quickly can't find `config_win.h`, which I assume is a header file for MySQL so that the MySQLdb package can know what calls it can make into MySQL. Am I right? And if so, where do I get the header files for MySQL?
Thanks all! I found that I hadn't installed the developer components in MySQL. Once that was done the problem was solved and I easily compiled the MySQLdb for Python 2.6. I've made the package available at [my site](http://www.technicalbard.com/files/MySQL-python-1.2.2.win32-py2.6.exe).
Ping a site in Python?
316,866
64
2008-11-25T09:58:37Z
316,974
35
2008-11-25T10:49:58Z
[ "python", "network-programming", "ping" ]
The basic code is:

```
from Tkinter import *
import os,sys

ana= Tk()

def ping1():
    os.system('ping')

a=Button(pen)
ip=("192.168.0.1")
a.config(text="PING",bg="white",fg="blue")
a=ping1.ip ???
a.pack()

ana.mainloop()
```

How could I ping a site or address?
Depending on what you want to achieve, the easiest approach is probably to call the system ping command. Using the subprocess module is the best way of doing this, although you have to remember the ping command is different on different operating systems!

```
import subprocess

host = "www.google.com"

ping = subprocess.Popen(
    ["ping", "-c", "4", host],
    stdout = subprocess.PIPE,
    stderr = subprocess.PIPE
)

out, error = ping.communicate()
print out
```

You don't need to worry about shell-escape characters. For example...

```
host = "google.com; `echo test`"
```

...will **not** execute the echo command.

Now, to actually get the ping results, you could parse the `out` variable. Example output:

```
round-trip min/avg/max/stddev = 248.139/249.474/250.530/0.896 ms
```

Example regex:

```
import re
matcher = re.compile("round-trip min/avg/max/stddev = (\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)")
print matcher.search(out).groups() # ('248.139', '249.474', '250.530', '0.896')
```

Again, remember the output will vary depending on the operating system (and even the version of `ping`). This isn't ideal, but it will work fine in many situations (where you know the machines the script will be running on).
Ping a site in Python?
316,866
64
2008-11-25T09:58:37Z
317,021
9
2008-11-25T11:12:56Z
[ "python", "network-programming", "ping" ]
The basic code is:

```
from Tkinter import *
import os,sys

ana= Tk()

def ping1():
    os.system('ping')

a=Button(pen)
ip=("192.168.0.1")
a.config(text="PING",bg="white",fg="blue")
a=ping1.ip ???
a.pack()

ana.mainloop()
```

How could I ping a site or address?
It's hard to say what your question is, but there are some alternatives.

If you mean to literally execute a request using the ICMP ping protocol, you can get an ICMP library and execute the ping request directly. Google "Python ICMP" to find things like this [icmplib](http://code.activestate.com/recipes/409689/). You might want to look at [scapy](http://www.secdev.org/projects/scapy/), also. This will be much faster than using `os.system("ping " + ip )`.

If you mean to generically "ping" a box to see if it's up, you can use the echo protocol on port 7. For echo, you use the [socket](http://www.python.org/doc/2.5.2/lib/module-socket.html) library to open the IP address and port 7. You write something on that port, send a carriage return (`"\r\n"`) and then read the reply.

If you mean to "ping" a web site to see if the site is running, you have to use the http protocol on port 80. For properly checking a web server, you use [urllib2](http://www.python.org/doc/2.5.2/lib/module-urllib2.html) to open a specific URL (`/index.html` is always popular) and read the response.

There are still more potential meanings of "ping", including "traceroute" and "finger".
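For the "see if it's up" sense, here is a rough sketch of a TCP reachability probe with the stdlib `socket` module — not real ICMP ping, just "is something listening on that port?" (the function name and default timeout are my choices):

```python
import socket

def tcp_ping(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        conn = socket.create_connection((host, port), timeout=timeout)
    except OSError:          # refused, unreachable, or timed out
        return False
    conn.close()
    return True
```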
Ping a site in Python?
316,866
64
2008-11-25T09:58:37Z
317,172
37
2008-11-25T12:28:42Z
[ "python", "network-programming", "ping" ]
The basic code is:

```
from Tkinter import *
import os,sys

ana= Tk()

def ping1():
    os.system('ping')

a=Button(pen)
ip=("192.168.0.1")
a.config(text="PING",bg="white",fg="blue")
a=ping1.ip ???
a.pack()

ana.mainloop()
```

How could I ping a site or address?
You may find [Noah Gift's](http://noahgift.com/) presentation [Creating Agile Commandline Tools With Python](http://www.slideshare.net/noahgift/pycon2008-cli-noahgift) useful. In it he combines subprocess, Queue and threading to develop a solution that is capable of pinging hosts concurrently, speeding up the process. Below is a basic version before he adds command line parsing and some other features. The code to this version and others can be found [here](http://code.noahgift.com/pycon2008/pycon2008_cli_noahgift.zip)

```
#!/usr/bin/env python2.5
from threading import Thread
import subprocess
from Queue import Queue

num_threads = 4
queue = Queue()
ips = ["10.0.1.1", "10.0.1.3", "10.0.1.11", "10.0.1.51"]

#wraps system ping command
def pinger(i, q):
    """Pings subnet"""
    while True:
        ip = q.get()
        print "Thread %s: Pinging %s" % (i, ip)
        ret = subprocess.call("ping -c 1 %s" % ip,
            shell=True,
            stdout=open('/dev/null', 'w'),
            stderr=subprocess.STDOUT)
        if ret == 0:
            print "%s: is alive" % ip
        else:
            print "%s: did not respond" % ip
        q.task_done()

#Spawn thread pool
for i in range(num_threads):
    worker = Thread(target=pinger, args=(i, queue))
    worker.setDaemon(True)
    worker.start()

#Place work in queue
for ip in ips:
    queue.put(ip)

#Wait until worker threads are done to exit
queue.join()
```

He is also the author of: [Python for Unix and Linux System Administration](http://rads.stackoverflow.com/amzn/click/0596515820)

[![](http://ecx.images-amazon.com/images/I/515qmR%2B4sjL._SL500_AA240_.jpg)](http://rads.stackoverflow.com/amzn/click/0596515820)
Ping a site in Python?
316,866
64
2008-11-25T09:58:37Z
317,206
76
2008-11-25T12:39:12Z
[ "python", "network-programming", "ping" ]
The basic code is:

```
from Tkinter import *
import os,sys

ana= Tk()

def ping1():
    os.system('ping')

a=Button(pen)
ip=("192.168.0.1")
a.config(text="PING",bg="white",fg="blue")
a=ping1.ip ???
a.pack()

ana.mainloop()
```

How could I ping a site or address?
See this [pure Python ping](https://pypi.python.org/pypi/python-ping/2011.10.17.376a019) by [Matthew Dixon Cowles](http://www.mondoinfo.com/) and [Jens Diemer](http://www.jensdiemer.de/). Also, remember that Python requires root to spawn ICMP (i.e. ping) sockets in linux. ``` import ping, socket try: ping.verbose_ping('www.google.com', count=3) delay = ping.Ping('www.wikipedia.org', timeout=2000).do() except socket.error, e: print "Ping Error:", e ``` The source code itself is easy to read, see the implementations of `verbose_ping` and of `Ping.do` for inspiration.
Ping a site in Python?
316,866
64
2008-11-25T09:58:37Z
1,165,094
7
2009-07-22T12:59:52Z
[ "python", "network-programming", "ping" ]
The basic code is:

```
from Tkinter import *
import os,sys

ana= Tk()

def ping1():
    os.system('ping')

a=Button(pen)
ip=("192.168.0.1")
a.config(text="PING",bg="white",fg="blue")
a=ping1.ip ???
a.pack()

ana.mainloop()
```

How could I ping a site or address?
Here is something similar I did, as inspiration (note that `urllib.urlopen` needs a full URL, including the scheme):

```
import urllib
import threading
import time

def pinger_urllib(host):
    """
    helper function timing the retrieval of index.html
    TODO: should there be a 1MB bogus file?
    """
    t1 = time.time()
    urllib.urlopen('http://' + host + '/index.html').read()
    return (time.time() - t1) * 1000.0

def task(m):
    """
    the actual task
    """
    delay = float(pinger_urllib(m))
    print '%-30s %5.0f [ms]' % (m, delay)

# parallelization
tasks = []
URLs = ['google.com', 'wikipedia.org']
for m in URLs:
    t = threading.Thread(target=task, args=(m,))
    t.start()
    tasks.append(t)

# synchronization point
for t in tasks:
    t.join()
```
Get Element value with minidom with Python
317,413
78
2008-11-25T13:57:02Z
317,421
100
2008-11-25T13:59:13Z
[ "python", "dom", "minidom" ]
I am creating a GUI frontend for the Eve Online API in Python. I have successfully pulled the XML data from their server. I am trying to grab the value from a node called "name":

```
from xml.dom.minidom import parse

dom = parse("C:\\eve.xml")
name = dom.getElementsByTagName('name')
print name
```

This seems to find the node, but the output is below:

```
[<DOM Element: name at 0x11e6d28>]
```

How could I get it to print the value of the node?
It should just be ``` name[0].firstChild.nodeValue ```
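Put together as a runnable fragment (using an inline XML string as a stand-in for the Eve API response):

```python
from xml.dom.minidom import parseString

# a stand-in for the downloaded document; the real one has more fields
xml = "<result><rowset><name>Sample Pilot</name></rowset></result>"

dom = parseString(xml)
name = dom.getElementsByTagName('name')
value = name[0].firstChild.nodeValue   # the text inside <name>...</name>
```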
Get Element value with minidom with Python
317,413
78
2008-11-25T13:57:02Z
317,494
50
2008-11-25T14:21:08Z
[ "python", "dom", "minidom" ]
I am creating a GUI frontend for the Eve Online API in Python. I have successfully pulled the XML data from their server. I am trying to grab the value from a node called "name":

```
from xml.dom.minidom import parse

dom = parse("C:\\eve.xml")
name = dom.getElementsByTagName('name')
print name
```

This seems to find the node, but the output is below:

```
[<DOM Element: name at 0x11e6d28>]
```

How could I get it to print the value of the node?
Probably something like this if it's the text part you want... ``` from xml.dom.minidom import parse dom = parse("C:\\eve.xml") name = dom.getElementsByTagName('name') print " ".join(t.nodeValue for t in name[0].childNodes if t.nodeType == t.TEXT_NODE) ``` The text part of a node is considered a node in itself placed as a child-node of the one you asked for. Thus you will want to go through all its children and find all child nodes that are text nodes. A node can have several text nodes; eg. ``` <name> blabla <somestuff>asdf</somestuff> znylpx </name> ``` You want both 'blabla' and 'znylpx'; hence the " ".join(). You might want to replace the space with a newline or so, or perhaps by nothing.
Get Element value with minidom with Python
317,413
78
2008-11-25T13:57:02Z
4,835,703
10
2011-01-29T07:28:23Z
[ "python", "dom", "minidom" ]
I am creating a GUI frontend for the Eve Online API in Python. I have successfully pulled the XML data from their server. I am trying to grab the value from a node called "name":

```
from xml.dom.minidom import parse

dom = parse("C:\\eve.xml")
name = dom.getElementsByTagName('name')
print name
```

This seems to find the node, but the output is below:

```
[<DOM Element: name at 0x11e6d28>]
```

How could I get it to print the value of the node?
You can use something like this. It worked for me:

```
from xml.dom.minidom import parse

doc = parse('C:\\eve.xml')
my_node_list = doc.getElementsByTagName("name")
my_n_node = my_node_list[0]
my_child = my_n_node.firstChild
my_text = my_child.data

print my_text
```
Standard C or Python libraries to compute standard deviation of normal distribution
317,963
4
2008-11-25T16:25:34Z
318,986
7
2008-11-25T21:44:54Z
[ "python", "c", "algorithm", "math", "probability" ]
Say we have a normal distribution n(x) with mean 0, and \int\_{-a}^{a} n(x) dx = P. What is the easiest way to compute the standard deviation of such a distribution? Maybe there are standard libraries for Python or C that are suitable for this task?
If X is normal with mean 0 and standard deviation sigma, it must hold that

```
P = Prob[ -a <= X <= a ] = Prob[ -a/sigma <= N <= a/sigma ]
  = 2 Prob[ 0 <= N <= a/sigma ]
  = 2 ( Prob[ N <= a/sigma ] - 1/2 )
```

where N is normal with mean 0 and standard deviation 1. Hence

```
P/2 + 1/2 = Prob[ N <= a/sigma ] = Phi(a/sigma)
```

where Phi is the cumulative distribution function (cdf) of a normal variable with mean 0 and stddev 1. Now we need the *inverse* normal cdf (or the "percent point function"), which in Python is scipy.stats.norm.ppf(). Sample code:

```
from scipy.stats import norm

P = 0.3456
a = 3.0

a_sigma = float(norm.ppf(P/2 + 0.5))   # a/sigma
sigma = a/a_sigma                      # Here is the standard deviation
```

For example, we know that the probability of a N(0,1) variable falling in the interval [-1, 1] is ~ 0.682 (the dark blue area in [this figure](http://en.wikipedia.org/wiki/Image:Standard_deviation_diagram.svg)). If you set P = 0.682 and a = 1.0 you obtain sigma ~ 1.0, which is indeed the standard deviation.
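On Python 3.8+ the same inverse-CDF computation is available in the standard library via `statistics.NormalDist`, so SciPy isn't strictly required. A sketch following the derivation above (the function name is mine):

```python
from statistics import NormalDist

def sigma_from_interval(a, p):
    """Standard deviation of a zero-mean normal with
    P[-a <= X <= a] = p, using a/sigma = Phi^-1(p/2 + 1/2)."""
    return a / NormalDist().inv_cdf(p / 2.0 + 0.5)
```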
Python: converting strings for use with ctypes.c_void_p()
318,067
7
2008-11-25T16:49:18Z
318,140
10
2008-11-25T17:12:04Z
[ "python", "dll", "types", "ctypes" ]
Given a string:

```
msg="hello world"
```

**How can I define this as a `ctypes.c_void_p()` data type?**

The following code yields a "cannot be converted to pointer" exception:

```
data=ctypes.c_void_p(msg)
```

`data` is required to be a `void*` type in C, because it is being passed to a DLL. I'm assuming there is a way to pack/unpack the string using the `struct` package, but unfortunately I am very unfamiliar with this process.
Something like this? Using `ctypes.cast`? ``` >>> import ctypes >>> p1= ctypes.c_char_p("hi mom") >>> ctypes.cast( p1, ctypes.c_void_p ) c_void_p(11133300) ```
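A slightly fuller sketch: `create_string_buffer` makes a NUL-terminated, mutable copy whose memory stays valid while you hold a reference to it, which is usually what a DLL expecting a `void*` wants:

```python
import ctypes

msg = b"hello world"   # bytes on Python 3; a plain str on Python 2

buf = ctypes.create_string_buffer(msg)      # NUL-terminated copy of msg
data = ctypes.cast(buf, ctypes.c_void_p)    # a real c_void_p for the DLL

# round-trip to show the pointer still sees the original bytes
back = ctypes.cast(data, ctypes.c_char_p).value   # -> b"hello world"
```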
How do I do a case insensitive string comparison in Python?
319,426
270
2008-11-26T01:06:44Z
319,435
288
2008-11-26T01:09:22Z
[ "python", "comparison", "case-insensitive" ]
What's the best way to do case-insensitive string comparison in Python? I would like to encapsulate comparison of regular strings to a repository string in a very simple and Pythonic way. I would also like to have the ability to look up values in a dict hashed by strings, using regular Python strings. Much obliged for advice.
Assuming ASCII strings: ``` string1 = 'Hello' string2 = 'hello' if string1.lower() == string2.lower(): print "The strings are the same (case insensitive)" else: print "The strings are not the same (case insensitive)" ```
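For the dict-lookup half of the question, one simple approach (again assuming ASCII-ish keys) is a small dict subclass that lowercases keys on the way in and out — a sketch, with the caveat that the plain `dict(...)` constructor bypasses `__setitem__`:

```python
class CaseInsensitiveDict(dict):
    """Lowercases string keys on every set/get, so lookups ignore case.
    Populate it with item assignment, not the dict() constructor."""

    def __setitem__(self, key, value):
        dict.__setitem__(self, key.lower(), value)

    def __getitem__(self, key):
        return dict.__getitem__(self, key.lower())

    def __contains__(self, key):
        return dict.__contains__(self, key.lower())
```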
How do I do a case insensitive string comparison in Python?
319,426
270
2008-11-26T01:06:44Z
11,573,384
43
2012-07-20T05:27:24Z
[ "python", "comparison", "case-insensitive" ]
What's the best way to do case-insensitive string comparison in Python? I would like to encapsulate comparison of regular strings to a repository string in a very simple and Pythonic way. I would also like to have the ability to look up values in a dict hashed by strings, using regular Python strings. Much obliged for advice.
Using Python 2, calling `.lower()` on each string or Unicode object... ``` string1.lower() == string2.lower() ``` ...will work most of the time, but indeed doesn't work in the situations @tchrist has described. Assume we have a file called `unicode.txt` containing the two strings `Σίσυφος` and `ΣΊΣΥΦΟΣ`. With Python 2: ``` >>> utf8_bytes = open("unicode.txt", 'r').read() >>> print repr(utf8_bytes) '\xce\xa3\xce\xaf\xcf\x83\xcf\x85\xcf\x86\xce\xbf\xcf\x82\n\xce\xa3\xce\x8a\xce\xa3\xce\xa5\xce\xa6\xce\x9f\xce\xa3\n' >>> u = utf8_bytes.decode('utf8') >>> print u Σίσυφος ΣΊΣΥΦΟΣ >>> first, second = u.splitlines() >>> print first.lower() σίσυφος >>> print second.lower() σίσυφοσ >>> first.lower() == second.lower() False >>> first.upper() == second.upper() True ``` The Σ character has two lowercase forms, ς and σ, and `.lower()` won't help compare them case-insensitively. However, as of Python 3, all three forms will resolve to ς, and calling lower() on both strings will work correctly: ``` >>> s = open('unicode.txt', encoding='utf8').read() >>> print(s) Σίσυφος ΣΊΣΥΦΟΣ >>> first, second = s.splitlines() >>> print(first.lower()) σίσυφος >>> print(second.lower()) σίσυφος >>> first.lower() == second.lower() True >>> first.upper() == second.upper() True ``` So if you care about edge-cases like the three sigmas in Greek, use Python 3. (For reference, Python 2.7.3 and Python 3.3.0b1 are shown in the interpreter printouts above.)
How do I do a case insensitive string comparison in Python?
319,426
270
2008-11-26T01:06:44Z
29,247,821
164
2015-03-25T05:01:44Z
[ "python", "comparison", "case-insensitive" ]
What's the best way to do case insensitive string comparison in Python? I would like to encapsulate comparison of regular strings to a repository string in a very simple and pythonic way. I also would like to have the ability to look up values in a dict hashed by strings using regular python strings. Much obliged for advice.
Comparing strings in a case insensitive way seems like something that's trivial, but it's not. I will be using Python 3, since Python 2 is underdeveloped here. The first thing to note is that case-removing conversions in unicode aren't trivial. There is text for which `text.lower() != text.upper().lower()`, such as `"ß"`: ``` "ß".lower() #>>> 'ß' "ß".upper().lower() #>>> 'ss' ``` But let's say you wanted to caselessly compare `"BUSSE"` and `"Buße"`. Heck, you probably also want to compare `"BUSSE"` and `"BUẞE"` equal - that's the newer capital form. The recommended way is to use `casefold`: ``` help(str.casefold) #>>> Help on method_descriptor: #>>> #>>> casefold(...) #>>> S.casefold() -> str #>>> #>>> Return a version of S suitable for caseless comparisons. #>>> ``` Do not just use `lower`. If `casefold` is not available, doing `.upper().lower()` helps (but only somewhat). Then you should consider accents. If your font renderer is good, you probably think `"ê" == "ê"` - but it doesn't: ``` "ê" == "ê" #>>> False ``` This is because they are actually ``` import unicodedata [unicodedata.name(char) for char in "ê"] #>>> ['LATIN SMALL LETTER E WITH CIRCUMFLEX'] [unicodedata.name(char) for char in "ê"] #>>> ['LATIN SMALL LETTER E', 'COMBINING CIRCUMFLEX ACCENT'] ``` The simplest way to deal with this is `unicodedata.normalize`. You probably want to use **NFKD** normalization, but feel free to check the documentation. Then one does ``` unicodedata.normalize("NFKD", "ê") == unicodedata.normalize("NFKD", "ê") #>>> True ``` To finish up, here it is expressed in functions: ``` import unicodedata def normalize_caseless(text): return unicodedata.normalize("NFKD", text.casefold()) def caseless_equal(left, right): return normalize_caseless(left) == normalize_caseless(right) ```
How to upload a file with django (python) and s3?
319,923
11
2008-11-26T06:39:33Z
319,943
18
2008-11-26T06:57:11Z
[ "python", "django", "file-upload", "amazon-s3" ]
I'm looking for a way to upload a file to s3. I am using django. I am currently using amazon's python library for uploading along with the following code: View: ``` def submitpicture(request): fuser = request.session["login"] copied_data = request.POST.copy() copied_data.update(request.FILES) content_type = copied_data['file'].get('content-type') ffile = copied_data['file']['content'] key = '%s-%s' % (fuser, ''.join(copied_data['file']['filename'].split(' '))) site_s3.save_s3_data(key, ffile, content_type) ``` Template: ``` <form action="/submitpicture/" method="POST"> <input type="file" id="file" name="file" /> <input type="submit" value="submit" /> </form> ``` However, when I actually try to run it I get the following error: ``` "Key 'file' not found in <QueryDict: {}>" #MultiValueDictKeyError ``` I really don't see what I'm doing wrong. Can someone point me in the right direction? Edit: Just in case someone wonders, I am planning on adding some validation after I get the actual upload working.
You will have to provide the enctype attribute to the FORM element (I've been bitten by this before). For example, your FORM tag should look like: ``` <form action="/submitpicture/" method="POST" enctype="multipart/form-data" > ``` Without the enctype, you will find yourself with an empty request.FILES.
Ensuring subprocesses are dead on exiting Python program
320,232
36
2008-11-26T10:21:21Z
320,290
13
2008-11-26T10:56:21Z
[ "python", "subprocess", "kill", "zombie-process" ]
Is there a way to ensure all created subprocesses are dead at exit time of a Python program? By subprocess I mean those created with subprocess.Popen(). If not, should I iterate over all of them issuing kills and then kills -9? Anything cleaner?
The `subprocess.Popen.wait()` is the only way to assure that they're dead. Indeed, POSIX OS's require that you wait on your children. Many \*nix's will create a "zombie" process: a dead child for which the parent didn't wait. If the child is reasonably well-written, it terminates. Often, children read from PIPE's. Closing the input is a big hint to the child that it should close up shop and exit. If the child has bugs and doesn't terminate, you may have to kill it. You should fix this bug. If the child is a "serve-forever" loop, and is not designed to terminate, you should either kill it or provide some input or message which will force it to terminate. --- Edit. In standard OS's, you have `os.kill( PID, 9 )`. Kill -9 is harsh, BTW. If you can kill them with SIGABRT (6?) or SIGTERM (15) that's more polite. In Windows OS, you don't have an `os.kill` that works. Look at this [ActiveState Recipe](http://code.activestate.com/recipes/347462/) for terminating a process in Windows. We have child processes that are WSGI servers. To terminate them we do a GET on a special URL; this causes the child to clean up and exit.
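The wait-then-kill pattern described above can be sketched like this, using the Python 3 `subprocess` API (`wait(timeout=...)` was added in 3.3; the 5-second timeout and the terminate-before-kill escalation are illustrative choices, not anything mandated by the answer):

```python
import subprocess
import sys

def reap(proc, timeout_sec=5):
    """Wait for the child; if it won't die, escalate: terminate, then kill."""
    try:
        proc.wait(timeout=timeout_sec)   # always wait, so no zombie is left
    except subprocess.TimeoutExpired:
        proc.terminate()                 # polite SIGTERM (TerminateProcess on Windows)
        try:
            proc.wait(timeout=timeout_sec)
        except subprocess.TimeoutExpired:
            proc.kill()                  # harsh SIGKILL, last resort
            proc.wait()
    return proc.returncode

# Spawn a short-lived child and make sure it is reaped.
p = subprocess.Popen([sys.executable, "-c", "pass"])
print(reap(p))  # 0 for a clean exit
```

For a child that serves forever, closing its stdin pipe first (as the answer suggests) gives it a chance to exit cleanly before `terminate()` is ever reached.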
Ensuring subprocesses are dead on exiting Python program
320,232
36
2008-11-26T10:21:21Z
320,712
29
2008-11-26T13:36:39Z
[ "python", "subprocess", "kill", "zombie-process" ]
Is there a way to ensure all created subprocesses are dead at exit time of a Python program? By subprocess I mean those created with subprocess.Popen(). If not, should I iterate over all of them issuing kills and then kills -9? Anything cleaner?
You can use [**atexit**](http://docs.python.org/library/atexit.html) for this, and register any clean up tasks to be run when your program exits. **atexit.register(func[, \*args[, \*\*kargs]])** In your cleanup process, you can also implement your own wait, and kill the process when your desired timeout occurs. ``` >>> import atexit >>> import sys >>> import time >>> >>> >>> >>> def cleanup(): ... timeout_sec = 5 ... for p in all_processes: # list of your processes ... p_sec = 0 ... for second in range(timeout_sec): ... if p.poll() == None: ... time.sleep(1) ... p_sec += 1 ... if p_sec >= timeout_sec: ... p.kill() # supported from python 2.6 ... print 'cleaned up!' ... >>> >>> atexit.register(cleanup) >>> >>> sys.exit() cleaned up! ``` **Note** -- Registered functions won't be run if this process (parent process) is killed. **The following windows method is no longer needed for python >= 2.6** Here's a way to kill a process in windows. Your Popen object has a pid attribute, so you can just call it by **success = win\_kill(p.pid)** (Needs [pywin32](http://sourceforge.net/projects/pywin32/) installed): ``` def win_kill(pid): '''kill a process by specified PID in windows''' import win32api import win32con hProc = None try: hProc = win32api.OpenProcess(win32con.PROCESS_TERMINATE, 0, pid) win32api.TerminateProcess(hProc, 0) except Exception: return False finally: if hProc != None: hProc.Close() return True ```
Ensuring subprocesses are dead on exiting Python program
320,232
36
2008-11-26T10:21:21Z
322,317
21
2008-11-26T22:02:49Z
[ "python", "subprocess", "kill", "zombie-process" ]
Is there a way to ensure all created subprocesses are dead at exit time of a Python program? By subprocess I mean those created with subprocess.Popen(). If not, should I iterate over all of them issuing kills and then kills -9? Anything cleaner?
On \*nix's, maybe using process groups can help you out - you can catch subprocesses spawned by your subprocesses as well. ``` if __name__ == "__main__": os.setpgrp() # create new process group, become its leader try: # some code finally: os.killpg(0, signal.SIGKILL) # kill all processes in my group ``` Another consideration is to escalate the signals: from SIGTERM (default signal for `kill`) to SIGKILL (a.k.a `kill -9`). Wait a short while between the signals to give the process a chance to exit cleanly before you `kill -9` it.
Currency formatting in Python
320,929
99
2008-11-26T14:43:33Z
320,951
133
2008-11-26T14:50:59Z
[ "python", "formatting", "currency" ]
I am looking to format a number like 188518982.18 to £188,518,982.18 using Python. How can I do this?
See the [locale](https://docs.python.org/2/library/locale.html) module. This does currency (and date) formatting. ``` >>> import locale >>> locale.setlocale( locale.LC_ALL, '' ) 'English_United States.1252' >>> locale.currency( 188518982.18 ) '$188518982.18' >>> locale.currency( 188518982.18, grouping=True ) '$188,518,982.18' ```
Currency formatting in Python
320,929
99
2008-11-26T14:43:33Z
1,082,547
15
2009-07-04T16:41:31Z
[ "python", "formatting", "currency" ]
I am looking to format a number like 188518982.18 to £188,518,982.18 using Python. How can I do this?
My locale settings seemed incomplete, so I had to look beyond this SO answer and found the recipes section of the decimal module docs: <http://docs.python.org/library/decimal.html#recipes>. It's OS-independent. Just wanted to share it here.
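The recipe linked above is the `moneyfmt` function; a trimmed-down sketch of the same idea follows (group digits manually with `Decimal`, no locale needed). The function name, the default `£` symbol, and the separator parameters are my own illustrative choices, not part of the recipe:

```python
from decimal import Decimal

def money(value, places=2, symbol='£', sep=',', dp='.'):
    """Format a Decimal as currency without relying on the locale module."""
    q = Decimal(10) ** -places                    # e.g. places=2 -> Decimal('0.01')
    sign, digits, _ = value.quantize(q).as_tuple()
    digits = ''.join(map(str, digits)).rjust(places + 1, '0')
    whole, frac = digits[:-places], digits[-places:]
    # Insert the thousands separator every three digits, right to left.
    groups = []
    while whole:
        groups.append(whole[-3:])
        whole = whole[:-3]
    body = sep.join(reversed(groups))
    return ('-' if sign else '') + symbol + body + dp + frac

print(money(Decimal('188518982.18')))  # £188,518,982.18
```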
Currency formatting in Python
320,929
99
2008-11-26T14:43:33Z
3,393,776
63
2010-08-03T05:20:21Z
[ "python", "formatting", "currency" ]
I am looking to format a number like 188518982.18 to £188,518,982.18 using Python. How can I do this?
## New in 2.7 ``` >>> '{:20,.2f}'.format(18446744073709551616.0) '18,446,744,073,709,551,616.00' ``` <http://docs.python.org/dev/whatsnew/2.7.html#pep-0378>
Currency formatting in Python
320,929
99
2008-11-26T14:43:33Z
3,866,014
8
2010-10-05T17:08:55Z
[ "python", "formatting", "currency" ]
I am looking to format a number like 188518982.18 to £188,518,982.18 using Python. How can I do this?
If you are using OS X and have yet to set your locale module setting, the first answer will not work; you will receive the following error: ``` Traceback (most recent call last):File "<stdin>", line 1, in <module> File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/locale.py", line 221, in currency raise ValueError("Currency formatting is not possible using "ValueError: Currency formatting is not possible using the 'C' locale. ``` To remedy this, you will have to use the following: ``` locale.setlocale(locale.LC_ALL, 'en_US') ```
Currency formatting in Python
320,929
99
2008-11-26T14:43:33Z
8,851,191
28
2012-01-13T13:17:00Z
[ "python", "formatting", "currency" ]
I am looking to format a number like 188518982.18 to £188,518,982.18 using Python. How can I do this?
Not quite sure why it's not mentioned more online (or on this thread), but the [Babel](http://babel.pocoo.org/) package (and Django utilities) from the Edgewall guys is awesome for currency formatting (and lots of other i18n tasks). It's nice because it doesn't suffer from the need to do everything globally like the core Python locale module. The example the OP gave would simply be: ``` >>> import babel.numbers >>> import decimal >>> babel.numbers.format_currency( decimal.Decimal( "188518982.18" ), "GBP" ) £188,518,982.18 ```
Making functions non override-able
321,024
5
2008-11-26T15:08:29Z
321,119
9
2008-11-26T15:34:24Z
[ "python" ]
I know python functions are virtual by default. Let's say I have this: ``` class Foo: def __init__(self, args): do some stuff def goo(): print "You can overload me" def roo(): print "You cannot overload me" ``` I don't want them to be able to do this: ``` class Aoo(Foo): def roo(): print "I don't want you to be able to do this" ``` Is there a way to prevent users from overloading roo()?
Since Python has monkey patching, you can't make anything "private". Even if you could, someone could still monkeypatch in a new version of the method function. You can use this kind of name as a "don't go near" warning. ``` class Foo( object ): def _roo( self ): """Change this at your own risk.""" ``` That's the usual approach. Everyone can read your source. They were warned. If they boldly go where they were warned not to go, they get what they deserve. It doesn't work and you can't help them. You can try to make this intentionally obscure with inner classes and "hidden" implementation modules that are called by the "private" methods. But... everyone has your source. You can't *prevent* anything. You can only advise people of the consequences of their actions.
Making functions non override-able
321,024
5
2008-11-26T15:08:29Z
321,240
26
2008-11-26T16:07:04Z
[ "python" ]
I know python functions are virtual by default. Let's say I have this: ``` class Foo: def __init__(self, args): do some stuff def goo(): print "You can overload me" def roo(): print "You cannot overload me" ``` I don't want them to be able to do this: ``` class Aoo(Foo): def roo(): print "I don't want you to be able to do this" ``` Is there a way to prevent users from overloading roo()?
You can use a metaclass: ``` class NonOverridable(type): def __new__(self, name, bases, dct): if bases and "roo" in dct: raise SyntaxError, "Overriding roo is not allowed" return type.__new__(self, name, bases, dct) class foo: __metaclass__=NonOverridable ... ``` The metatype's `__new__` is called whenever a subclass is created; this will cause an error in the case you present. It will accept a definition of roo only if there are no base classes. You can make the approach more fancy by using annotations to declare which methods are final; you then need to inspect all bases and compute all final methods, to see whether any of them is overridden. This still doesn't prevent somebody monkey-patching a method into a class after it is defined; you can try to catch these by using a custom dictionary as the classes' dictionary (which might not work in all Python versions, as classes might require the class dictionary to be of the exact dict type).
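The metaclass idea above can be checked quickly. The sketch below rewrites it in Python 3 syntax (metaclass as a keyword argument, `TypeError` instead of the Python 2 `raise SyntaxError, ...` form); the class names are arbitrary:

```python
class NonOverridable(type):
    def __new__(mcs, name, bases, dct):
        # Allow roo only in the base class definition itself.
        if bases and "roo" in dct:
            raise TypeError("Overriding roo is not allowed")
        return super().__new__(mcs, name, bases, dct)

class Foo(metaclass=NonOverridable):
    def roo(self):
        return "base roo"

# Subclassing without touching roo is fine...
class Goo(Foo):
    pass

# ...but redefining roo is rejected at class-creation time.
try:
    class Aoo(Foo):
        def roo(self):
            return "overridden"
except TypeError as e:
    print(e)  # Overriding roo is not allowed
```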
Comparing XML in a unit test in Python
321,795
30
2008-11-26T19:09:08Z
321,893
13
2008-11-26T19:35:15Z
[ "python", "xml", "elementtree" ]
I have an object that can build itself from an XML string, and write itself out to an XML string. I'd like to write a unit test to test round tripping through XML, but I'm having trouble comparing the two XML versions. Whitespace and attribute order seem to be the issues. Any suggestions for how to do this? This is in Python, and I'm using ElementTree (not that that really matters here since I'm just dealing with XML in strings at this level).
First normalize the two XML documents, then you can compare them. I've used the following with lxml: ``` obj1 = objectify.fromstring(expect) expect = etree.tostring(obj1) obj2 = objectify.fromstring(xml) result = etree.tostring(obj2) self.assertEquals(expect, result) ```
Comparing XML in a unit test in Python
321,795
30
2008-11-26T19:09:08Z
321,941
7
2008-11-26T19:56:19Z
[ "python", "xml", "elementtree" ]
I have an object that can build itself from an XML string, and write itself out to an XML string. I'd like to write a unit test to test round tripping through XML, but I'm having trouble comparing the two XML versions. Whitespace and attribute order seem to be the issues. Any suggestions for how to do this? This is in Python, and I'm using ElementTree (not that that really matters here since I'm just dealing with XML in strings at this level).
If the problem is really just the whitespace and attribute order, and you have no other constructs than text and elements to worry about, you can parse the strings using a standard XML parser and compare the nodes manually. Here's an example using minidom, but you could write the same in etree pretty simply: ``` def isEqualXML(a, b): da, db= minidom.parseString(a), minidom.parseString(b) return isEqualElement(da.documentElement, db.documentElement) def isEqualElement(a, b): if a.tagName!=b.tagName: return False if sorted(a.attributes.items())!=sorted(b.attributes.items()): return False if len(a.childNodes)!=len(b.childNodes): return False for ac, bc in zip(a.childNodes, b.childNodes): if ac.nodeType!=bc.nodeType: return False if ac.nodeType==ac.TEXT_NODE and ac.data!=bc.data: return False if ac.nodeType==ac.ELEMENT_NODE and not isEqualElement(ac, bc): return False return True ``` If you need a more thorough equivalence comparison, covering the possibilities of other types of nodes including CDATA, PIs, entity references, comments, doctypes, namespaces and so on, you could use the DOM Level 3 Core method isEqualNode. Neither minidom nor etree have that, but pxdom is one implementation that supports it: ``` def isEqualXML(a, b): da, db= pxdom.parseString(a), pxdom.parseString(b) return da.isEqualNode(db) ``` (You may want to change some of the DOMConfiguration options on the parse if you need to specify whether entity references and CDATA sections match their replaced equivalents.) A slightly more roundabout way of doing it would be to parse, then re-serialise to canonical form and do a string comparison. Again pxdom supports the DOM Level 3 LS option ‘canonical-form’ which you could use to do this; an alternative way using the stdlib's minidom implementation is to use c14n. However, you must have the PyXML extensions installed for this, so you still can't quite do it within the stdlib: ``` from xml.dom.ext import c14n def isEqualXML(a, b): da, db= minidom.parseString(a), minidom.parseString(b) a, b= c14n.Canonicalize(da), c14n.Canonicalize(db) return a==b ```
Comparing XML in a unit test in Python
321,795
30
2008-11-26T19:09:08Z
7,060,342
15
2011-08-14T23:05:34Z
[ "python", "xml", "elementtree" ]
I have an object that can build itself from an XML string, and write itself out to an XML string. I'd like to write a unit test to test round tripping through XML, but I'm having trouble comparing the two XML versions. Whitespace and attribute order seem to be the issues. Any suggestions for how to do this? This is in Python, and I'm using ElementTree (not that that really matters here since I'm just dealing with XML in strings at this level).
This is an old question, but the accepted Kozyarchuk's answer doesn't work for me because of attributes order, and the minidom solution doesn't work as-is either (no idea why, I haven't debugged it). This is what I finally came up with: ``` from doctest import Example from lxml.doctestcompare import LXMLOutputChecker class XmlTest(TestCase): def assertXmlEqual(self, got, want): checker = LXMLOutputChecker() if not checker.check_output(want, got, 0): message = checker.output_difference(Example("", want), got, 0) raise AssertionError(message) ``` This also produces a diff that can be helpful in case of large xml files.
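If pulling in lxml is not an option, a whitespace- and attribute-order-insensitive comparison can also be sketched with the stdlib's ElementTree, recursing over tags, sorted attributes, and stripped text. This is my own sketch, not part of the answer above, and it ignores comments, PIs, and namespace prefixes:

```python
import xml.etree.ElementTree as ET

def xml_equal(a, b):
    """Compare two XML strings, ignoring attribute order and surrounding whitespace."""
    return _elements_equal(ET.fromstring(a), ET.fromstring(b))

def _elements_equal(e1, e2):
    if e1.tag != e2.tag or sorted(e1.attrib.items()) != sorted(e2.attrib.items()):
        return False
    # Treat None and pure-whitespace text as equivalent.
    if (e1.text or '').strip() != (e2.text or '').strip():
        return False
    if (e1.tail or '').strip() != (e2.tail or '').strip():
        return False
    c1, c2 = list(e1), list(e2)
    return len(c1) == len(c2) and all(_elements_equal(x, y) for x, y in zip(c1, c2))

print(xml_equal('<a x="1" y="2"><b>hi</b></a>',
                '<a y="2" x="1">\n  <b>hi</b>\n</a>'))  # True
```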
Does c# have anything comparable to Python's list comprehensions
323,032
12
2008-11-27T05:55:24Z
323,039
16
2008-11-27T05:59:36Z
[ "c#", "python", "list" ]
I want to generate a list in C#. I am missing python's list comprehensions. Is there a c# way to create collections on the fly like list comprehensions or generator statements do in python?
If you are using C# 3.0 (VS2008) then LINQ to Objects can do very similar things: ``` List<Foo> fooList = new List<Foo>(); IEnumerable<string> extract = from foo in fooList where foo.Bar > 10 select foo.Name.ToUpper(); ```
Does c# have anything comparable to Python's list comprehensions
323,032
12
2008-11-27T05:55:24Z
323,058
13
2008-11-27T06:24:33Z
[ "c#", "python", "list" ]
I want to generate a list in C#. I am missing python's list comprehensions. Is there a c# way to create collections on the fly like list comprehensions or generator statements do in python?
Matt has mentioned query expressions. These are available for LINQ in general, by the way - not just LINQ to Objects. (For example, the same query applied to a LINQ to SQL datacontext would execute the filter and projection on the database.) The query expressions in C# 3 are simply syntactic sugar over writing normal C# code - although query expressions usually end up calling [extension methods](http://msdn.microsoft.com/en-us/library/bb383977.aspx). (They don't have to, and the compiler doesn't care, but they usually do.) There are various things you can do with collections which aren't available in C# query expressions, but which are supported by method calls, so it's worth being aware of both kinds of syntax. For instance, Matt's query expression of: ``` List<Foo> fooList = new List<Foo>(); IEnumerable<string> extract = from foo in fooList where foo.Bar > 10 select foo.Name.ToUpper(); ``` is "pre-processed" into: ``` List<Foo> fooList = new List<Foo>(); IEnumerable<string> extract = fooList.Where(foo => foo.Bar > 10) .Select(foo => foo.Name.ToUpper()); ``` If you want to (say) filter based on the index of the value in the original collection, you can use an [appropriate overload of Where](http://msdn.microsoft.com/en-us/library/bb549418.aspx) which is unavailable via query expressions: ``` List<Foo> fooList = new List<Foo>(); IEnumerable<string> extract = fooList.Where((foo, index) => foo.Bar > 10 + index) .Select(foo => foo.Name.ToUpper()); ``` Or you could find the length of the longest name matching the criteria: ``` List<Foo> fooList = new List<Foo>(); int longestName = fooList.Where((foo, index) => foo.Bar > 10 + index) .Select(foo => foo.Name.Length) .Max(); ``` (You don't *have* to do the projection and max in separate methods - there's a `Max` overload which takes a projection as well.) My point is that using extension methods you can very easily build up sophisticated queries. You mention Python generators as well - C# has this in the form of [iterator blocks](http://stackoverflow.com/questions/317462/some-help-understanding-yield). Indeed, these are incredibly useful when implementing LINQ-like operators. (Because most of LINQ to Objects is based on extension methods, you can add your own operators which look "native" to LINQ - although you can't change the query expression syntax yourself.)
py2exe fails to generate an executable
323,424
48
2008-11-27T10:31:51Z
325,456
35
2008-11-28T10:36:56Z
[ "python", "wxpython", "py2exe" ]
I am using python 2.6 on XP. I have just installed py2exe, and I can successfully create a simple hello.exe from a hello.py. However, when I try using py2exe on my real program, py2exe produces a few information messages but fails to generate anything in the dist folder. My setup.py looks like this: ``` from distutils.core import setup import py2exe setup(console=['ServerManager.py']) ``` and the py2exe output looks like this: ``` python setup.py py2exe running py2exe creating C:\DevSource\Scripts\ServerManager\build creating C:\DevSource\Scripts\ServerManager\build\bdist.win32 ... ... creating C:\DevSource\Scripts\ServerManager\dist *** searching for required modules *** *** parsing results *** creating python loader for extension 'wx._misc_' (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\_misc_.pyd -> wx._misc_.pyd) creating python loader for extension 'lxml.etree' (C:\Python26\lib\site-packages\lxml\etree.pyd -> lxml.etree.pyd) ... ... creating python loader for extension 'bz2' (C:\Python26\DLLs\bz2.pyd -> bz2.pyd) *** finding dlls needed *** ``` py2exe seems to have found all my imports (though I was a bit surprised to see win32 mentioned, as I am not explicitly importing it). Also, my program starts up quite happily with this command: ``` python ServerManager.py ``` Clearly I am doing something fundamentally wrong, but in the absence of any error messages from py2exe I have no idea what.
I've discovered that py2exe works just fine if I comment out the part of my program that uses wxPython. Also, when I use py2exe on the 'simple' sample that comes with its download (i.e. in Python26\Lib\site-packages\py2exe\samples\simple), I get this error message: ``` *** finding dlls needed *** error: MSVCP90.dll: No such file or directory ``` So something about wxPython makes py2exe think I need a Visual Studio 2008 DLL. I don't have VS2008, and yet my program works perfectly well as a directory of Python modules. I found a copy of MSVCP90.DLL on the web, installed it in Python26/DLLs, and py2exe now works fine. I still don't understand where this dependency has come from, since I can run my code perfectly okay without py2exe. It's also annoying that py2exe didn't give me an error message like it did with the test\_wx.py sample. Further update: When I tried to run the output from py2exe on another PC, I discovered that it needed to have MSVCR90.DLL installed; so if your target PC hasn't got Visual C++ 2008 already installed, I recommend you download and install the [Microsoft Visual C++ 2008 Redistributable Package](http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=29).
py2exe fails to generate an executable
323,424
48
2008-11-27T10:31:51Z
774,715
37
2009-04-21T21:29:58Z
[ "python", "wxpython", "py2exe" ]
I am using python 2.6 on XP. I have just installed py2exe, and I can successfully create a simple hello.exe from a hello.py. However, when I try using py2exe on my real program, py2exe produces a few information messages but fails to generate anything in the dist folder. My setup.py looks like this: ``` from distutils.core import setup import py2exe setup(console=['ServerManager.py']) ``` and the py2exe output looks like this: ``` python setup.py py2exe running py2exe creating C:\DevSource\Scripts\ServerManager\build creating C:\DevSource\Scripts\ServerManager\build\bdist.win32 ... ... creating C:\DevSource\Scripts\ServerManager\dist *** searching for required modules *** *** parsing results *** creating python loader for extension 'wx._misc_' (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\_misc_.pyd -> wx._misc_.pyd) creating python loader for extension 'lxml.etree' (C:\Python26\lib\site-packages\lxml\etree.pyd -> lxml.etree.pyd) ... ... creating python loader for extension 'bz2' (C:\Python26\DLLs\bz2.pyd -> bz2.pyd) *** finding dlls needed *** ``` py2exe seems to have found all my imports (though I was a bit surprised to see win32 mentioned, as I am not explicitly importing it). Also, my program starts up quite happily with this command: ``` python ServerManager.py ``` Clearly I am doing something fundamentally wrong, but in the absence of any error messages from py2exe I have no idea what.
I put this in all my setup.py scripts: ``` distutils.core.setup( options = { "py2exe": { "dll_excludes": ["MSVCP90.dll"] } }, ... ) ``` This keeps py2exe quiet, but you still need to make sure that dll is on the user's machine.
py2exe fails to generate an executable
323,424
48
2008-11-27T10:31:51Z
4,216,212
10
2010-11-18T15:07:21Z
[ "python", "wxpython", "py2exe" ]
I am using python 2.6 on XP. I have just installed py2exe, and I can successfully create a simple hello.exe from a hello.py. However, when I try using py2exe on my real program, py2exe produces a few information messages but fails to generate anything in the dist folder. My setup.py looks like this: ``` from distutils.core import setup import py2exe setup(console=['ServerManager.py']) ``` and the py2exe output looks like this: ``` python setup.py py2exe running py2exe creating C:\DevSource\Scripts\ServerManager\build creating C:\DevSource\Scripts\ServerManager\build\bdist.win32 ... ... creating C:\DevSource\Scripts\ServerManager\dist *** searching for required modules *** *** parsing results *** creating python loader for extension 'wx._misc_' (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\_misc_.pyd -> wx._misc_.pyd) creating python loader for extension 'lxml.etree' (C:\Python26\lib\site-packages\lxml\etree.pyd -> lxml.etree.pyd) ... ... creating python loader for extension 'bz2' (C:\Python26\DLLs\bz2.pyd -> bz2.pyd) *** finding dlls needed *** ``` py2exe seems to have found all my imports (though I was a bit surprised to see win32 mentioned, as I am not explicitly importing it). Also, my program starts up quite happily with this command: ``` python ServerManager.py ``` Clearly I am doing something fundamentally wrong, but in the absence of any error messages from py2exe I have no idea what.
wxPython has nothing to do with it. Before Python 2.6, Python used Visual Studio 2003 as its Windows compiler. Beginning with 2.6, it switched to Visual Studio 2008, which requires a manifest file in some situations. This has been well documented. See the following links: <http://wiki.wxpython.org/py2exe> <http://py2exe.org/index.cgi/Tutorial#Step52> Also, if you're creating a wxPython application with py2exe, then you want to set the windows parameter, NOT the console one. Maybe my tutorial will help you: <http://www.blog.pythonlibrary.org/2010/07/31/a-py2exe-tutorial-build-a-binary-series/>
How to get the name of an open file?
323,515
13
2008-11-27T11:28:25Z
323,522
33
2008-11-27T11:30:31Z
[ "python", "file" ]
I'm trying to store in a variable the name of the current file that I've opened from a folder... How can I do that? I've tried cwd = os.getcwd() but this only gives me the path of the folder, and I need to store the name of the opened file... Can you please help me? Thanks.
``` Python 2.5.1 (r251:54863, Jul 31 2008, 22:53:39) [GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> f = open('generic.png','r') >>> f.name 'generic.png' ```
How to get the name of an open file?
323,515
13
2008-11-27T11:28:25Z
24,144,992
13
2014-06-10T15:30:11Z
[ "python", "file" ]
I'm trying to store in a variable the name of the current file that I've opened from a folder... How can I do that? I've tried cwd = os.getcwd() but this only gives me the path of the folder, and I need to store the name of the opened file... Can you please help me? Thanks.
One more useful trick to add. I agree with original correct answer, however if you're like me came to this page wanting the filename only without the rest of the path, this works well. ``` >>> f = open('/tmp/generic.png','r') >>> f.name '/tmp/generic.png' >>> import os >>> os.path.basename(f.name) 'generic.png' ```
Python list slice syntax used for no obvious reason
323,689
31
2008-11-27T12:57:05Z
323,698
48
2008-11-27T13:00:31Z
[ "python", "list", "shallow-copy" ]
I occasionally see the list slice syntax used in Python code like this: ``` newList = oldList[:] ``` Surely this is just the same as: ``` newList = oldList ``` Or am I missing something?
`[:]` [Shallow copies](http://en.wikipedia.org/wiki/Deep_copy#Shallow_copy) the list, making a copy of the list structure containing references to the original list members. This means that operations on the copy do not affect the structure of the original. However, if you do something to the list members, both lists still refer to them, so the updates will show up if the members are accessed through the original. A [Deep Copy](http://en.wikipedia.org/wiki/Deep_copy#Deep_copy) would make copies of all the list members as well. The code snippet below shows a shallow copy in action. ``` # ================================================================ # === ShallowCopy.py ============================================= # ================================================================ # class Foo: def __init__(self, data): self._data = data aa = Foo ('aaa') bb = Foo ('bbb') # The initial list has two elements containing 'aaa' and 'bbb' OldList = [aa,bb] print OldList[0]._data # The shallow copy makes a new list pointing to the old elements NewList = OldList[:] print NewList[0]._data # Updating one of the elements through the new list sees the # change reflected when you access that element through the # old list. NewList[0]._data = 'xxx' print OldList[0]._data # Updating the new list to point to something new is not reflected # in the old list. NewList[0] = Foo ('ccc') print NewList[0]._data print OldList[0]._data ``` Running it in a python shell gives the following transcript. We can see the list being made with copies of the old objects. One of the objects can have its state updated by reference through the old list, and the updates can be seen when the object is accessed through the old list. Finally, changing a reference in the new list can be seen to not reflect in the old list, as the new list is now referring to a different object. ``` >>> # ================================================================ ... # === ShallowCopy.py ============================================= ... # ================================================================ ... # ... class Foo: ... def __init__(self, data): ... self._data = data ... >>> aa = Foo ('aaa') >>> bb = Foo ('bbb') >>> >>> # The initial list has two elements containing 'aaa' and 'bbb' ... OldList = [aa,bb] >>> print OldList[0]._data aaa >>> >>> # The shallow copy makes a new list pointing to the old elements ... NewList = OldList[:] >>> print NewList[0]._data aaa >>> >>> # Updating one of the elements through the new list sees the ... # change reflected when you access that element through the ... # old list. ... NewList[0]._data = 'xxx' >>> print OldList[0]._data xxx >>> >>> # Updating the new list to point to something new is not reflected ... # in the old list. ... NewList[0] = Foo ('ccc') >>> print NewList[0]._data ccc >>> print OldList[0]._data xxx ```
Python list slice syntax used for no obvious reason
323,689
31
2008-11-27T12:57:05Z
323,733
44
2008-11-27T13:13:58Z
[ "python", "list", "shallow-copy" ]
I occasionally see the list slice syntax used in Python code like this: ``` newList = oldList[:] ``` Surely this is just the same as: ``` newList = oldList ``` Or am I missing something?
Like NXC said, Python variable names actually point to an object, and not a specific spot in memory. `newList = oldList` would create two different variables that point to the same object, therefore, changing `oldList` would also change `newList`. However, when you do `newList = oldList[:]`, it "slices" the list, and creates a new list. The default values for `[:]` are 0 and the end of the list, so it copies everything. Therefore, it creates a new list with all the data contained in the first one, but both can be altered without changing the other.
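To make the aliasing-versus-copying distinction concrete, here is a small demonstration (plain Python, nothing assumed beyond the language itself):

```python
old = [1, 2, 3]
alias = old      # both names now point at the same list object
copy = old[:]    # a brand-new list object holding the same elements

alias.append(4)
print(old)           # [1, 2, 3, 4] -- the alias and the original are one object
print(copy)          # [1, 2, 3]    -- the slice copy is unaffected
print(alias is old)  # True
print(copy is old)   # False
```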
Python list slice syntax used for no obvious reason
323,689
31
2008-11-27T12:57:05Z
323,800
10
2008-11-27T13:40:50Z
[ "python", "list", "shallow-copy" ]
I occasionally see the list slice syntax used in Python code like this: ``` newList = oldList[:] ``` Surely this is just the same as: ``` newList = oldList ``` Or am I missing something?
As it has already been answered, I'll simply add a simple demonstration: ``` >>> a = [1, 2, 3, 4] >>> b = a >>> c = a[:] >>> b[2] = 10 >>> c[3] = 20 >>> a [1, 2, 10, 4] >>> b [1, 2, 10, 4] >>> c [1, 2, 3, 20] ```
How to access previous/next element while for looping?
323,750
31
2008-11-27T13:22:19Z
323,910
52
2008-11-27T14:28:23Z
[ "python" ]
Is there a way to access a list's (or tuple's, or other iterable's) next or previous element while looping through it with a for loop? ``` l=[1,2,3] for item in l: if item==2: get_previous(l,item) ```
Expressed as a generator function: ``` def neighborhood(iterable): iterator = iter(iterable) prev = None item = iterator.next() # throws StopIteration if empty. for next in iterator: yield (prev,item,next) prev = item item = next yield (prev,item,None) ``` Usage: ``` for prev,item,next in neighborhood(l): print prev, item, next ``` **Edit:** I thought it would reduce the readability, but this way seems to look better.
How to access previous/next element while for looping?
323,750
31
2008-11-27T13:22:19Z
324,273
7
2008-11-27T17:15:38Z
[ "python" ]
Is there a way to access a list's (or tuple's, or other iterable's) next or previous element while looping through it with a for loop? ``` l=[1,2,3] for item in l: if item==2: get_previous(l,item) ```
``` l=[1,2,3] for i,item in enumerate(l): if item==2: get_previous=l[i-1] print get_previous >>>1 ```
How to access previous/next element while for looping?
323,750
31
2008-11-27T13:22:19Z
23,531,068
11
2014-05-08T01:10:28Z
[ "python" ]
Is there a way to access a list's (or tuple's, or other iterable's) next or previous element while looping through it with a for loop? ``` l=[1,2,3] for item in l: if item==2: get_previous(l,item) ```
One simple way. ``` l=[1,2,3] for i,j in zip(l, l[1:]): print i, j ```
Foreign key from one app into another in Django
323,763
49
2008-11-27T13:24:45Z
323,905
79
2008-11-27T14:27:45Z
[ "python", "django", "django-models" ]
I'm wondering if it's possible to define a foreign key in a models.py file in Django that is a reference to a table in another app? In other words, I have two apps, called cf and profiles, and in cf/models.py I have (amongst other things): ``` class Movie(models.Model): title = models.CharField(max_length=255) ``` and in profiles/models.py I want to have: ``` class MovieProperty(models.Model): movie = models.ForeignKey(Movie) ``` But I can't get it to work. I've tried: ``` movie = models.ForeignKey(cf.Movie) ``` and I've tried importing cf.Movie at the beginning of models.py, but I always get errors, such as: ``` NameError: name 'User' is not defined ``` Am I breaking the rules by trying to tie two apps together in this way, or have I just got the syntax wrong?
According to the docs, your second attempt should work: > To refer to models defined in another application, you must instead explicitly specify the application label. For example, if the Manufacturer model above is defined in another application called production, you'd need to use: ``` class Car(models.Model): manufacturer = models.ForeignKey('production.Manufacturer') ``` Have you tried putting it into quotes?
Foreign key from one app into another in Django
323,763
49
2008-11-27T13:24:45Z
32,869,639
12
2015-09-30T15:09:36Z
[ "python", "django", "django-models" ]
I'm wondering if it's possible to define a foreign key in a models.py file in Django that is a reference to a table in another app? In other words, I have two apps, called cf and profiles, and in cf/models.py I have (amongst other things): ``` class Movie(models.Model): title = models.CharField(max_length=255) ``` and in profiles/models.py I want to have: ``` class MovieProperty(models.Model): movie = models.ForeignKey(Movie) ``` But I can't get it to work. I've tried: ``` movie = models.ForeignKey(cf.Movie) ``` and I've tried importing cf.Movie at the beginning of models.py, but I always get errors, such as: ``` NameError: name 'User' is not defined ``` Am I breaking the rules by trying to tie two apps together in this way, or have I just got the syntax wrong?
It is also possible to pass the class itself: ``` from django.db import models from production import models as production_models class Car(models.Model): manufacturer = models.ForeignKey(production_models.Manufacturer) ```
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
323,981
25
2008-11-27T14:58:49Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
You should never forcibly kill a thread without cooperating with it. Killing a thread removes any guarantees that try/finally blocks set up so you might leave locks locked, files open, etc. The only time you can argue that forcibly killing threads is a good idea is to kill a program fast, but never single threads.
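A minimal sketch of the cooperative alternative this answer argues for, using a shared `threading.Event` that the worker checks between units of work (the names are illustrative):

```python
import threading
import time

stop = threading.Event()

def worker():
    # do one small unit of work per iteration, then re-check the flag
    while not stop.is_set():
        time.sleep(0.05)
    # any try/finally cleanup runs normally here -- nothing is torn down forcibly

t = threading.Thread(target=worker)
t.start()

stop.set()   # politely ask the thread to finish...
t.join()     # ...and wait until it actually has
```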
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
323,993
89
2008-11-27T15:08:07Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
There is no official API to do that, no. You need to use a platform API to kill the thread, e.g. pthread\_kill, or TerminateThread. You can access such an API e.g. through pythonwin, or through ctypes. Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
324,002
9
2008-11-27T15:12:07Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is better if you don't kill a thread. A way could be to introduce a "try" block into the thread's cycle and to throw an exception when you want to stop the thread (for example, a break/return/... that stops your for/while/...). I've used this in my app and it works...
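One way to read this suggestion: the loop raises an exception itself when it notices a stop flag, and a surrounding `try` turns that into a clean exit. A hedged sketch (the flag and exception names below are illustrative, not from the answer):

```python
import threading
import time

class StopWorker(Exception):
    """Raised inside the worker loop to unwind out of it cleanly."""

stop_requested = False

def worker():
    try:
        while True:
            if stop_requested:    # checked once per cycle of the loop
                raise StopWorker
            time.sleep(0.05)      # one unit of work
    except StopWorker:
        pass                      # cleanup would go here

t = threading.Thread(target=worker)
t.start()
stop_requested = True   # ask the loop to raise on its next cycle
t.join()
```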
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
325,027
8
2008-11-28T03:47:26Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
You can kill a thread by installing a trace function into it that exits the thread. See the linked recipe for one possible implementation. [Kill a thread in Python](https://web.archive.org/web/20130503082442/http://mail.python.org/pipermail/python-list/2004-May/281943.html)
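The linked recipe boils down to installing a trace function in the target thread that raises `SystemExit` once a kill flag is set. Below is a sketch of that idea (an illustration of the technique, not the recipe's exact code); note that the trace hook slows the thread considerably and cannot interrupt a blocking C call mid-call:

```python
import sys
import threading
import time

class KillableThread(threading.Thread):
    """A thread that can be killed by raising SystemExit from a trace hook."""

    def __init__(self, *args, **kwargs):
        super(KillableThread, self).__init__(*args, **kwargs)
        self.killed = False

    def start(self):
        self._run_backup = self.run
        self.run = self._traced_run   # wrap run() so the trace hook gets installed
        threading.Thread.start(self)

    def _traced_run(self):
        sys.settrace(self._globaltrace)   # applies to frames created from here on
        self._run_backup()

    def _globaltrace(self, frame, event, arg):
        return self._localtrace if event == 'call' else None

    def _localtrace(self, frame, event, arg):
        if self.killed and event == 'line':
            raise SystemExit()            # unwinds the thread at the next line
        return self._localtrace

    def kill(self):
        self.killed = True

def spin():
    while True:
        time.sleep(0.01)   # pure-Python loop, so line events keep firing

t = KillableThread(target=spin)
t.start()
time.sleep(0.1)
t.kill()
t.join(timeout=2)
print(t.is_alive())   # False
```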
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
325,528
385
2008-11-28T11:19:54Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python and in any language. Think of the following cases: * the thread is holding a critical resource that must be closed properly * the thread has created several other threads that must be killed as well. The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit\_request flag that each thread checks at a regular interval to see if it is time for it to exit. **For example:** ``` import threading class StoppableThread(threading.Thread): """Thread class with a stop() method. The thread itself has to check regularly for the stopped() condition.""" def __init__(self): super(StoppableThread, self).__init__() self._stop = threading.Event() def stop(self): self._stop.set() def stopped(self): return self._stop.isSet() ``` In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals. There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy with long calls and you want to interrupt it. The following code allows you (with some restrictions) to raise an exception in a Python thread: ``` def _async_raise(tid, exctype): '''Raises an exception in the thread with id tid''' if not inspect.isclass(exctype): raise TypeError("Only types can be raised (not instances)") res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exctype)) if res == 0: raise ValueError("invalid thread id") elif res != 1: # "if it returns a number greater than one, you're in trouble, # and you should call it again with exc=NULL to revert the effect" ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0) raise SystemError("PyThreadState_SetAsyncExc failed") class ThreadWithExc(threading.Thread): '''A thread class that supports raising an exception in the thread from another thread. 
''' def _get_my_tid(self): """determines this (self's) thread id CAREFUL : this function is executed in the context of the caller thread, to get the identity of the thread represented by this instance. """ if not self.isAlive(): raise threading.ThreadError("the thread is not active") # do we have it cached? if hasattr(self, "_thread_id"): return self._thread_id # no, look for it in the _active dict for tid, tobj in threading._active.items(): if tobj is self: self._thread_id = tid return tid # TODO: in python 2.6, there's a simpler way to do : self.ident raise AssertionError("could not determine the thread's id") def raiseExc(self, exctype): """Raises the given exception type in the context of this thread. If the thread is busy in a system call (time.sleep(), socket.accept(), ...), the exception is simply ignored. If you are sure that your exception should terminate the thread, one way to ensure that it works is: t = ThreadWithExc( ... ) ... t.raiseExc( SomeException ) while t.isAlive(): time.sleep( 0.1 ) t.raiseExc( SomeException ) If the exception is to be caught by the thread, you need a way to check that your thread has caught it. CAREFUL : this function is executed in the context of the caller thread, to raise an excpetion in the context of the thread represented by this instance. """ _async_raise( self._get_my_tid(), exctype ) ``` As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption. A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
1,667,825
44
2009-11-03T14:53:11Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
If you are trying to terminate the whole program, you can set the thread as a "daemon". See [Thread.daemon](http://docs.python.org/library/threading.html#threading.Thread.daemon).
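For illustration, flagging a thread as a daemon looks like this. The flag must be set before `start()`, and a daemon thread is torn down abruptly at interpreter exit, so don't rely on its cleanup code running:

```python
import threading
import time

def background_poll():
    while True:
        time.sleep(0.1)   # stands in for some never-ending housekeeping task

t = threading.Thread(target=background_poll)
t.daemon = True           # must be set before start()
t.start()

print(t.daemon)           # True
# the main thread can now simply return; the daemon dies with the process
```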
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
7,752,174
42
2011-10-13T09:38:20Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
A `multiprocessing.Process` can `p.terminate()`. In the cases where I want to kill a thread, but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full-blown processes. For code that makes use of just a few threads the overhead is not that bad. E.g. this comes in handy to easily terminate helper "threads" which execute blocking I/O. The conversion is trivial: in related code, replace all `threading.Thread` with `multiprocessing.Process` and all `queue.Queue` with `multiprocessing.Queue`, and add the required calls of `p.terminate()` to the parent process that wants to kill its child `p`. [Python doc](http://docs.python.org/release/3.1.3/library/multiprocessing.html)
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
15,185,771
9
2013-03-03T12:42:30Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
In Python, you simply cannot kill a Thread directly. If you do NOT really need to have a Thread (!), what you can do, instead of using the [*threading* package](http://docs.python.org/2/library/threading.html), is to use the [*multiprocessing* package](http://docs.python.org/2/library/multiprocessing.html). Here, to kill a process, you can simply call the method: ``` yourProcess.terminate() # kill the process! ``` Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call). Be careful when using it alongside a Queue or a Pipe! (it may corrupt the data in the Queue/Pipe) Note that *multiprocessing.Event* and *multiprocessing.Semaphore* work in exactly the same way as *threading.Event* and *threading.Semaphore*, respectively. In fact, the first ones are clones of the latter. If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a *"daemon thread"*. In fact, in Python, a Thread can be flagged as a *daemon*: ``` yourThread.daemon = True # set the Thread as a "daemon thread" ``` The main program will exit when no alive non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if there are still some daemon threads working. Note that it is necessary to set a Thread as a *daemon* before the *start()* method is called! Of course you can, and should, use *daemon* even with *multiprocessing*. Here, when the main process exits, it attempts to terminate all of its daemonic child processes. Finally, please note that *sys.exit()* and *os.kill()* are not viable choices.
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
15,274,929
14
2013-03-07T15:23:22Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
This is based on [thread2 -- killable threads (Python recipe)](http://code.activestate.com/recipes/496960-thread2-killable-threads/). You need to call PyThreadState\_SetAsyncExc(), which is only available through ctypes. This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. ``` import ctypes def terminate_thread(thread): """Terminates a python thread from another thread. :param thread: a threading.Thread instance """ if not thread.isAlive(): return exc = ctypes.py_object(SystemExit) res = ctypes.pythonapi.PyThreadState_SetAsyncExc( ctypes.c_long(thread.ident), exc) if res == 0: raise ValueError("nonexistent thread id") elif res > 1: # """if it returns a number greater than one, you're in trouble, # and you should call it again with exc=NULL to revert the effect""" ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None) raise SystemError("PyThreadState_SetAsyncExc failed") ```
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
16,146,048
7
2013-04-22T11:27:17Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
``` from ctypes import * pthread = cdll.LoadLibrary("libpthread-2.15.so") pthread.pthread_cancel(c_ulong(t.ident)) ``` **t** is your `Thread` object. If you read the Python source (`Modules/threadmodule.c` and `Python/thread_pthread.h`), you can see that `Thread.ident` is a `pthread_t` value, so you can do anything `pthread` can do in Python using `libpthread`.
Is there any way to kill a Thread in Python?
323,972
399
2008-11-27T14:55:53Z
27,261,365
10
2014-12-03T00:07:06Z
[ "python", "multithreading", "kill", "terminate" ]
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in `if stop()`.) ``` import threading import time def do_work(id, stop): print("I am thread", id) while True: print("I am thread {} doing something".format(id)) if stop(): print(" Exiting loop.") break print("Thread {}, signing off".format(id)) def main(): stop_threads = False workers = [] for id in range(0,3): tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads)) workers.append(tmp) tmp.start() time.sleep(3) print('main: done sleeping; time to stop the threads.') stop_threads = True for worker in workers: worker.join() print('Finis.') if __name__ == '__main__': main() ``` Replacing `print()` with a `pr()` function that always flushes (`sys.stdout.flush()`) may improve the precision of the shell output. (Only tested on Windows/Eclipse/Python3.3)
split twice in the same expression?
324,132
5
2008-11-27T16:12:04Z
324,141
20
2008-11-27T16:16:23Z
[ "python" ]
Imagine I have the following: ``` inFile = "/adda/adas/sdas/hello.txt" # that instruction give me hello.txt Name = inFile.name.split("/") [-1] # that one give me the name I want - just hello Name1 = Name.split(".") [0] ``` Is there any chance to simplify that doing the same job in just one expression?
You can get what you want platform-independently by using [os.path.basename](http://docs.python.org/library/os.path.html#os.path.basename) to get the last part of a path and then use [os.path.splitext](http://docs.python.org/library/os.path.html#os.path.splitext) to get the filename without extension. ``` from os.path import basename, splitext pathname = "/adda/adas/sdas/hello.txt" name, extension = splitext(basename(pathname)) print name # --> "hello" ``` Using [os.path.basename](http://docs.python.org/library/os.path.html#os.path.basename) and [os.path.splitext](http://docs.python.org/library/os.path.html#os.path.splitext) instead of str.split or re.split is more proper (and therefore received more points than any other answer) because it does not break down on other [platforms that use different path separators (you would be surprised how varied this can be)](http://en.wikipedia.org/wiki/Path_(computing)#Representations_of_paths_by_operating_system_and_shell). It also carries the most points because it answers your question for "one line" precisely and is aesthetically more pleasing than your example (even though that is debatable, as are all questions of taste).
What is the fastest way to parse large XML docs in Python?
324,214
38
2008-11-27T16:47:54Z
324,355
8
2008-11-27T17:53:36Z
[ "python", "xml", "performance", "parsing" ]
I am currently using the following code based on Chapter 12.5 of the Python Cookbook: ``` from xml.parsers import expat class Element(object): def __init__(self, name, attributes): self.name = name self.attributes = attributes self.cdata = '' self.children = [] def addChild(self, element): self.children.append(element) def getAttribute(self,key): return self.attributes.get(key) def getData(self): return self.cdata def getElements(self, name=''): if name: return [c for c in self.children if c.name == name] else: return list(self.children) class Xml2Obj(object): def __init__(self): self.root = None self.nodeStack = [] def StartElement(self, name, attributes): element = Element(name.encode(), attributes) if self.nodeStack: parent = self.nodeStack[-1] parent.addChild(element) else: self.root = element self.nodeStack.append(element) def EndElement(self, name): self.nodeStack.pop() def CharacterData(self,data): if data.strip(): data = data.encode() element = self.nodeStack[-1] element.cdata += data def Parse(self, filename): Parser = expat.ParserCreate() Parser.StartElementHandler = self.StartElement Parser.EndElementHandler = self.EndElement Parser.CharacterDataHandler = self.CharacterData ParserStatus = Parser.Parse(open(filename).read(),1) return self.root ``` I am working with XML docs about 1 GB in size. Does anyone know a faster way to parse these?
I recommend you use [lxml](http://lxml.de/); it's a Python binding for the libxml2 library, which is really fast. In my experience, libxml2 and expat have very similar performance. But I prefer libxml2 (and lxml for Python) because it seems to be more actively developed and tested. Also, libxml2 has more features. lxml is mostly API compatible with [xml.etree.ElementTree](http://docs.python.org/lib/module-xml.etree.ElementTree.html), and there is good documentation on its web site.
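Because the APIs largely match, code can be written against ElementTree and pick up lxml when it is available; the try/except import below is a common idiom rather than anything from this answer:

```python
try:
    from lxml import etree                   # fast libxml2-backed implementation
except ImportError:
    import xml.etree.ElementTree as etree    # stdlib fallback, same core API

root = etree.fromstring('<movies><movie title="Alien"/><movie title="Brazil"/></movies>')
titles = [m.get('title') for m in root.iter('movie')]
print(titles)   # ['Alien', 'Brazil']
```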
What is the fastest way to parse large XML docs in Python?
324,214
38
2008-11-27T16:47:54Z
324,483
13
2008-11-27T19:00:41Z
[ "python", "xml", "performance", "parsing" ]
I am currently using the following code based on Chapter 12.5 of the Python Cookbook: ``` from xml.parsers import expat class Element(object): def __init__(self, name, attributes): self.name = name self.attributes = attributes self.cdata = '' self.children = [] def addChild(self, element): self.children.append(element) def getAttribute(self,key): return self.attributes.get(key) def getData(self): return self.cdata def getElements(self, name=''): if name: return [c for c in self.children if c.name == name] else: return list(self.children) class Xml2Obj(object): def __init__(self): self.root = None self.nodeStack = [] def StartElement(self, name, attributes): element = Element(name.encode(), attributes) if self.nodeStack: parent = self.nodeStack[-1] parent.addChild(element) else: self.root = element self.nodeStack.append(element) def EndElement(self, name): self.nodeStack.pop() def CharacterData(self,data): if data.strip(): data = data.encode() element = self.nodeStack[-1] element.cdata += data def Parse(self, filename): Parser = expat.ParserCreate() Parser.StartElementHandler = self.StartElement Parser.EndElementHandler = self.EndElement Parser.CharacterDataHandler = self.CharacterData ParserStatus = Parser.Parse(open(filename).read(),1) return self.root ``` I am working with XML docs about 1 GB in size. Does anyone know a faster way to parse these?
Have you tried the cElementTree module? cElementTree is included with Python 2.5 and later, as xml.etree.cElementTree. Refer to the [benchmarks](http://effbot.org/zone/celementtree.htm).
What is the fastest way to parse large XML docs in Python?
324,214
38
2008-11-27T16:47:54Z
326,541
37
2008-11-28T20:03:02Z
[ "python", "xml", "performance", "parsing" ]
I am currently using the following code based on Chapter 12.5 of the Python Cookbook: ``` from xml.parsers import expat class Element(object): def __init__(self, name, attributes): self.name = name self.attributes = attributes self.cdata = '' self.children = [] def addChild(self, element): self.children.append(element) def getAttribute(self,key): return self.attributes.get(key) def getData(self): return self.cdata def getElements(self, name=''): if name: return [c for c in self.children if c.name == name] else: return list(self.children) class Xml2Obj(object): def __init__(self): self.root = None self.nodeStack = [] def StartElement(self, name, attributes): element = Element(name.encode(), attributes) if self.nodeStack: parent = self.nodeStack[-1] parent.addChild(element) else: self.root = element self.nodeStack.append(element) def EndElement(self, name): self.nodeStack.pop() def CharacterData(self,data): if data.strip(): data = data.encode() element = self.nodeStack[-1] element.cdata += data def Parse(self, filename): Parser = expat.ParserCreate() Parser.StartElementHandler = self.StartElement Parser.EndElementHandler = self.EndElement Parser.CharacterDataHandler = self.CharacterData ParserStatus = Parser.Parse(open(filename).read(),1) return self.root ``` I am working with XML docs about 1 GB in size. Does anyone know a faster way to parse these?
It looks to me as if you do not need any DOM capabilities from your program. I would second the use of the (c)ElementTree library. If you use the iterparse function of the cElementTree module, you can work your way through the XML and deal with the events as they occur. Note, however, Fredrik's advice on using the cElementTree [iterparse function](http://effbot.org/zone/element-iterparse.htm): > to parse large files, you can get rid of elements as soon as you’ve processed them: ``` for event, elem in iterparse(source): if elem.tag == "record": ... process record elements ... elem.clear() ``` > The above pattern has one drawback; it does not clear the root element, so you will end up with a single element with lots of empty child elements. If your files are huge, rather than just large, this might be a problem. To work around this, you need to get your hands on the root element. The easiest way to do this is to enable start events, and save a reference to the first element in a variable: ``` # get an iterable context = iterparse(source, events=("start", "end")) # turn it into an iterator context = iter(context) # get the root element event, root = context.next() for event, elem in context: if event == "end" and elem.tag == "record": ... process record elements ... root.clear() ``` The [lxml.iterparse()](http://codespeak.net/lxml/FAQ.html#why-can-t-i-just-delete-parents-or-clear-the-root-node-in-iterparse) does not allow this.
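For reference, here is a complete, runnable version of the first iterparse pattern, using the stdlib `ElementTree` on a small in-memory document (on modern Python the C accelerator is used automatically):

```python
import io
import xml.etree.ElementTree as ET

xml_data = b"<records><record id='1'>a</record><record id='2'>b</record></records>"

ids = []
for event, elem in ET.iterparse(io.BytesIO(xml_data)):
    if elem.tag == 'record':
        ids.append(elem.get('id'))
        elem.clear()   # drop the element's children/text as soon as it is processed
print(ids)   # ['1', '2']
```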
What is the equivalent of map<int, vector<int> > in Python?
324,643
9
2008-11-27T21:00:47Z
324,648
10
2008-11-27T21:06:08Z
[ "python", "dictionary" ]
In C++ I often do something like this: ``` typedef map<int, vector<int> > MyIndexType; ``` Where I then use it like this: ``` MyIndexType myIndex; for( ... some loop ...) { myIndex[someId].push_back(someVal); } ``` If there was no entry in the map the code will insert a new empty vector and then append to it. In Python it would look like this: ``` myIndex = {} for (someId,someVal) in collection: try: myIndex[someId].append(someVal) except KeyError: myIndex[someId] = [someVal] ``` The try/except is a bit ugly here. Is there a way to tell the dictionary an object type to insert when a KeyError is encountered at dictionary declaration time?
Something like this perhaps: ``` myIndex = {} for (someId,someVal) in collection: myIndex.setdefault(someId, []).append(someVal) ```
What is the equivalent of map<int, vector<int> > in Python?
324,643
9
2008-11-27T21:00:47Z
324,653
15
2008-11-27T21:10:23Z
[ "python", "dictionary" ]
In C++ I often do something like this: ``` typedef map<int, vector<int> > MyIndexType; ``` Where I then use it like this: ``` MyIndexType myIndex; for( ... some loop ...) { myIndex[someId].push_back(someVal); } ``` If there was no entry in the map the code will insert a new empty vector and then append to it. In Python it would look like this: ``` myIndex = {} for (someId,someVal) in collection: try: myIndex[someId].append(someVal) except KeyError: myIndex[someId] = [someVal] ``` The try/except is a bit ugly here. Is there a way to tell the dictionary an object type to insert when a KeyError is encountered at dictionary declaration time?
You want to use: ``` from collections import defaultdict myIndex = defaultdict(list) myIndex[someId].append(someVal) ``` Standard Library [`defaultdict` objects](http://docs.python.org/library/collections.html#id3). Example usage from the Python documentation: ``` >>> s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)] >>> d = defaultdict(list) >>> for k, v in s: d[k].append(v) >>> d.items() [('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])] ```
Strategies for speeding up batch ORM operations in Django
324,779
9
2008-11-27T23:01:18Z
325,066
15
2008-11-28T04:39:02Z
[ "python", "django", "batch-file", "orm" ]
One of my API calls can result in updates to a large number of objects (Django models). I'm running into performance issues with this since I'm updating each item individually, saving, and moving on to the next: ``` for item in Something.objects.filter(x='y'): item.a="something" item.save() ``` Sometimes my filter criterion looks like "where x in ('a','b','c',...)". It seems the [official answer to this is "won't fix"](http://code.djangoproject.com/ticket/661). I'm wondering what strategies people are using to improve performance in these scenarios.
The ticket you linked to is for bulk creation - if you're not relying on an overridden `save` method or pre/post save signals to do bits of work on save, [`QuerySet` has an `update` method](http://docs.djangoproject.com/en/dev/topics/db/queries/#updating-multiple-objects-at-once) which you can use to perform an `UPDATE` on the filtered rows: ``` Something.objects.filter(x__in=['a', 'b', 'c']).update(a='something') ```
Launch a shell command with in a python script, wait for the termination and return to the script
325,463
32
2008-11-28T10:42:03Z
325,467
7
2008-11-28T10:45:23Z
[ "python", "subprocess", "popen", "os.execl" ]
I have a Python script that has to launch a shell command for every file in a dir: ``` import os files = os.listdir(".") for f in files: os.execlp("myscript", "myscript", f) ``` This works fine for the first file, but after the "myscript" command has ended, execution stops and does not come back to the Python script. How can I fix this? Do I have to `fork()` before calling `os.execlp()`?
The `os.exec*()` functions *replace* the current program with the new one. When this program ends, so does your process. You probably want `os.system()`.
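Applied to the question's loop, that looks like the sketch below, where `echo` stands in for the asker's `myscript`:

```python
import os

for name in ['a.txt', 'b.txt']:
    # os.system() runs the command in a subshell, blocks until it finishes,
    # and returns its exit status -- control then comes back to the script
    status = os.system('echo processing %s' % name)
print(status)   # 0 on success
```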
Launch a shell command with in a python script, wait for the termination and return to the script
325,463
32
2008-11-28T10:42:03Z
325,474
55
2008-11-28T10:48:18Z
[ "python", "subprocess", "popen", "os.execl" ]
I have a Python script that has to launch a shell command for every file in a dir: ``` import os files = os.listdir(".") for f in files: os.execlp("myscript", "myscript", f) ``` This works fine for the first file, but after the "myscript" command has ended, execution stops and does not come back to the Python script. How can I fix this? Do I have to `fork()` before calling `os.execlp()`?
> subprocess: The `subprocess` module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.

<http://docs.python.org/library/subprocess.html>

Usage:

```
import subprocess

process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
process.wait()
print process.returncode
```
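One caveat worth knowing about this pattern: with `stdout=subprocess.PIPE`, calling `wait()` without reading the pipe can deadlock once the child fills the OS pipe buffer. `communicate()` drains the output and waits in one step; a sketch (using the Python interpreter as the child so it runs anywhere):

```python
import subprocess
import sys

# Hypothetical child command: the interpreter printing a line.
command = [sys.executable, "-c", "print('hello from the child')"]

process = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = process.communicate()  # reads all of stdout, then waits for exit

print(out.decode().strip())  # hello from the child
print(process.returncode)    # 0
```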
Launch a shell command within a python script, wait for its termination and return to the script
325,463
32
2008-11-28T10:42:03Z
325,495
39
2008-11-28T11:01:45Z
[ "python", "subprocess", "popen", "os.execl" ]
I have a python script that has to launch a shell command for every file in a directory:

```
import os

files = os.listdir(".")
for f in files:
    os.execlp("myscript", "myscript", f)
```

This works fine for the first file, but after the "myscript" command has ended, the execution stops and does not come back to the python script. How can I fix this? Do I have to `fork()` before calling `os.execlp()`?
You can use [`subprocess.Popen`](http://docs.python.org/library/subprocess.html). There are a few ways to do it:

```
import subprocess

cmd = ['/run/myscript', '--arg', 'value']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in p.stdout:
    print line
p.wait()
print p.returncode
```

Or, if you don't care what the external program actually does:

```
cmd = ['/run/myscript', '--arg', 'value']
subprocess.Popen(cmd).wait()
```
Launch a shell command within a python script, wait for its termination and return to the script
325,463
32
2008-11-28T10:42:03Z
5,184,921
9
2011-03-03T18:29:29Z
[ "python", "subprocess", "popen", "os.execl" ]
I have a python script that has to launch a shell command for every file in a directory:

```
import os

files = os.listdir(".")
for f in files:
    os.execlp("myscript", "myscript", f)
```

This works fine for the first file, but after the "myscript" command has ended, the execution stops and does not come back to the python script. How can I fix this? Do I have to `fork()` before calling `os.execlp()`?
The subprocess module has come a long way since 2008. In particular [`check_call`](http://docs.python.org/library/subprocess.html#subprocess.check_call) and [`check_output`](http://docs.python.org/library/subprocess.html#subprocess.check_output) make simple subprocess work even easier. The `check_*` family of functions is nice in that they raise an exception if something goes wrong.

```
import os
import subprocess

files = os.listdir('.')
for f in files:
    subprocess.check_call(['myscript', f])
```

Any output generated by `myscript` will display as though your process produced the output (technically `myscript` and your python script share the same stdout). There are a couple of ways to avoid this.

* `check_call(['myscript', f], stdout=subprocess.PIPE)`

  The stdout will be suppressed (but beware if `myscript` produces more than 4k of output: the unread pipe fills up and the child blocks). stderr will still be shown unless you add the option `stderr=subprocess.PIPE`.

* `check_output(['myscript', f])`

  `check_output` returns the stdout as a string, so it isn't shown. stderr is still shown unless you add the option `stderr=subprocess.STDOUT`.
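A small self-contained illustration of the two `check_*` behaviors described above, using the Python interpreter as the child process so it runs anywhere:

```python
import subprocess
import sys

# check_output captures the child's stdout and returns it.
out = subprocess.check_output([sys.executable, "-c", "print('captured')"])
print(out.decode().strip())  # captured

# check_call raises CalledProcessError when the child exits non-zero.
try:
    subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(3)"])
except subprocess.CalledProcessError as exc:
    print("child failed with code", exc.returncode)  # child failed with code 3
```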
How can I dynamically get the set of classes from the current python module?
326,770
5
2008-11-28T22:07:18Z
326,789
9
2008-11-28T22:16:19Z
[ "python", "reflection", "metaprogramming" ]
I have a python module that defines a number of classes:

```
class A(object):
    def __call__(self):
        print "ran a"

class B(object):
    def __call__(self):
        print "ran b"

class C(object):
    def __call__(self):
        print "ran c"
```

From within the module, how might I add an attribute that gives me all of the classes? `dir()` gives me the names of everything within my module, but I can't seem to figure out how to go from the name of a class to the class itself from *within* the module. From outside the module, I can simply use `getattr(mod, 'A')`, but I don't have a `self`-style reference to the module from within the module itself.

This seems pretty obvious. Can someone tell me what I'm missing?
```
import sys

getattr(sys.modules[__name__], 'A')
```
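Building on that, the full "set of classes" the question asks for can be collected with `inspect.getmembers`; a sketch (the filter on `__module__` keeps out classes that were merely imported into the module):

```python
import inspect
import sys

class A(object):
    pass

class B(object):
    pass

def local_classes():
    """Return the classes defined in the current module."""
    module = sys.modules[__name__]
    return [cls for name, cls in inspect.getmembers(module, inspect.isclass)
            if cls.__module__ == __name__]

print([cls.__name__ for cls in local_classes()])  # ['A', 'B']
```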