| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Python: can I have a list with named indices? | 178,199 | 13 | 2008-10-07T12:25:25Z | 178,668 | 11 | 2008-10-07T14:21:43Z | [
"python",
"arrays"
] | In PHP I can name my array indices so that I may have something like:
```
$shows = Array(0 => Array('id' => 1, 'name' => 'Sesame Street'),
               1 => Array('id' => 2, 'name' => 'Dora The Explorer'));
```
Is this possible in Python? | @Unkwntech,
What you want is available in the just-released Python 2.6 in the form of [named tuples](http://docs.python.org/whatsnew/2.6.html#new-improved-and-deprecated-modules). They allow you to do this:
```
import collections
person = collections.namedtuple('Person', 'id name age')
me = person(id=1, age=1e15, name='Dan')
you = person(2, 'Somebody', 31.4159)
assert me.age == me[2] # can access fields by either name or position
``` |
Python, unit-testing and mocking imports | 178,458 | 10 | 2008-10-07T13:34:42Z | 179,531 | 7 | 2008-10-07T17:30:27Z | [
"python",
"unit-testing",
"refactoring",
"python-import"
] | I am in a project where we are starting to refactor a massive code base. One problem that immediately sprang up is that each file imports a lot of other files. How do I mock these imports in an elegant way in my unit tests, without having to alter the actual code, so that I can start writing tests?
As an example: the file with the functions I want to test imports ten other files which are part of our software and not Python core libs.
I want to be able to run the unit tests as independently as possible, and for now I am only going to test functions that do not depend on things from the files being imported.
Thanks for all the answers.
I didn't really know what I wanted to do from the start but now I think I know.
The problem was that some imports were only possible when the whole application was running, because of some third-party auto-magic. So I had to make stubs for these modules in a directory which I added to sys.path.
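For illustration, the same stubbing idea can also be done in-process, without a separate stub directory, by registering a stub module in `sys.modules` before importing the code under test (a sketch; `heavy_dependency` and `do_magic` are invented names):

```python
import sys
import types

# Create and register a stub so that 'import heavy_dependency' succeeds
# even though no such module exists on disk.
stub = types.ModuleType('heavy_dependency')
stub.do_magic = lambda: 'stubbed'
sys.modules['heavy_dependency'] = stub

import heavy_dependency
assert heavy_dependency.do_magic() == 'stubbed'
```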
Now I can import the file which contains the functions I want to write tests for in my unit-test file without complaints about missing modules. | If you want to import a module while at the same time ensuring that it doesn't import anything, you can replace the `__import__` builtin function.
For example, use this class:
```
class ImportWrapper(object):
    def __init__(self, real_import):
        self.real_import = real_import

    def wrapper(self, wantedModules):
        def inner(moduleName, *args, **kwargs):
            if moduleName in wantedModules:
                print "IMPORTING MODULE", moduleName
                return self.real_import(moduleName, *args, **kwargs)
            else:
                print "NOT IMPORTING MODULE", moduleName
        return inner

    def mock_import(self, moduleName, wantedModules):
        __builtins__.__import__ = self.wrapper(wantedModules)
        try:
            __import__(moduleName, globals(), locals(), [], -1)
        finally:
            __builtins__.__import__ = self.real_import
```
And in your test code, instead of writing `import myModule`, write:
```
wrapper = ImportWrapper(__import__)
wrapper.mock_import('myModule', [])
```
The second argument to `mock_import` is a list of module names you *do* want to import in the inner module.
This example can be modified further to e.g. import a different module than the one requested instead of just skipping the import, or even to mock the module object with some custom object of your own. |
Average difference between dates in Python | 179,716 | 2 | 2008-10-07T18:18:48Z | 179,738 | 12 | 2008-10-07T18:26:46Z | [
"python",
"algorithm",
"datetime"
] | I have a series of datetime objects and would like to calculate the average delta between them.
For example, if the input was `(2008-10-01 12:15:00, 2008-10-01 12:25:00, 2008-10-01 12:35:00)`, then the average delta would be exactly 00:10:00, or 10 minutes.
Any suggestions on how to calculate this using Python? | As far as algorithms go, that's an easy one. Just find the max and min datetimes, take the difference, and divide by one less than the number of datetimes you looked at (the number of gaps between them).
If you have an array a of datetimes, you can do:
```
mx = max(a)
mn = min(a)
avg = (mx-mn)/(len(a)-1)
```
to get back the average difference.
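As a self-contained check of that formula, using the timestamps from the question (the variable names are illustrative):

```python
from datetime import datetime, timedelta

# the three timestamps from the question, spaced 10 minutes apart
a = [datetime(2008, 10, 1, 12, 15, 0),
     datetime(2008, 10, 1, 12, 25, 0),
     datetime(2008, 10, 1, 12, 35, 0)]

# total span divided by the number of gaps between datetimes
avg = (max(a) - min(a)) / (len(a) - 1)
assert avg == timedelta(minutes=10)
```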
EDIT: fixed the off-by-one error |
How to handle a broken pipe (SIGPIPE) in python? | 180,095 | 40 | 2008-10-07T19:53:53Z | 180,152 | 30 | 2008-10-07T20:07:27Z | [
"python",
"broken-pipe"
] | I've written a simple multi-threaded game server in python that creates a new thread for each client connection. I'm finding that every now and then, the server will crash because of a broken-pipe/SIGPIPE error. I'm pretty sure it is happening when the program tries to send a response back to a client that is no longer present.
What is a good way to deal with this? My preferred resolution would simply close the server-side connection to the client and move on, rather than exit the entire program.
PS: [This](http://stackoverflow.com/questions/108183/how-to-prevent-sigpipes-or-handle-them-properly) question/answer deals with the problem in a generic way; how specifically should I solve it? | Read up on the `try:` statement.
```
try:
    # do something
    pass
except socket.error, e:
    # A socket error
    pass
except IOError, e:
    if e.errno == errno.EPIPE:
        # EPIPE error
        pass
    else:
        # Other error
        pass
``` |
How to handle a broken pipe (SIGPIPE) in python? | 180,095 | 40 | 2008-10-07T19:53:53Z | 180,922 | 44 | 2008-10-08T00:14:18Z | [
"python",
"broken-pipe"
] | I've written a simple multi-threaded game server in python that creates a new thread for each client connection. I'm finding that every now and then, the server will crash because of a broken-pipe/SIGPIPE error. I'm pretty sure it is happening when the program tries to send a response back to a client that is no longer present.
What is a good way to deal with this? My preferred resolution would simply close the server-side connection to the client and move on, rather than exit the entire program.
PS: [This](http://stackoverflow.com/questions/108183/how-to-prevent-sigpipes-or-handle-them-properly) question/answer deals with the problem in a generic way; how specifically should I solve it? | Assuming that you are using the standard socket module, you should be catching the `socket.error: (32, 'Broken pipe')` exception (not IOError as others have suggested). This will be raised in the case that you've described, i.e. sending/writing to a socket for which the remote side has disconnected.
```
import socket, errno, time

# setup socket to listen for incoming connections
s = socket.socket()
s.bind(('localhost', 1234))
s.listen(1)

remote, address = s.accept()
print "Got connection from: ", address

while 1:
    try:
        remote.send("message to peer\n")
        time.sleep(1)
    except socket.error, e:
        if isinstance(e.args, tuple):
            print "errno is %d" % e[0]
            if e[0] == errno.EPIPE:
                # remote peer disconnected
                print "Detected remote disconnect"
            else:
                # determine and handle different error
                pass
        else:
            print "socket error ", e
        remote.close()
        break
    except IOError, e:
        # Hmmm, can IOError actually be raised by the socket module?
        print "Got IOError: ", e
        break
```
Note that this exception will not always be raised on the first write to a closed socket - more usually the second write (unless the number of bytes written in the first write is larger than the socket's buffer size). You need to keep this in mind in case your application thinks that the remote end received the data from the first write when it may have already disconnected.
You can reduce the incidence (but not entirely eliminate) of this by using `select.select()` (or `poll`). Check for data ready to read from the peer before attempting a write. If `select` reports that there is data available to read from the peer socket, read it using `socket.recv()`. If this returns an empty string, the remote peer has closed the connection. Because there is still a race condition here, you'll still need to catch and handle the exception.
Twisted is great for this sort of thing, however, it sounds like you've already written a fair bit of code. |
How do I convert a list of ascii values to a string in python? | 180,606 | 31 | 2008-10-07T21:51:01Z | 180,615 | 56 | 2008-10-07T21:54:33Z | [
"python",
"string",
"ascii"
] | I've got a list in a Python program that contains a series of numbers, which are themselves ASCII values. How do I convert this into a "regular" string that I can echo to the screen? | You are probably looking for 'chr()':
```
>>> L = [104, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100]
>>> ''.join(chr(i) for i in L)
'hello, world'
``` |
How do I convert a list of ascii values to a string in python? | 180,606 | 31 | 2008-10-07T21:51:01Z | 181,057 | 12 | 2008-10-08T01:22:05Z | [
"python",
"string",
"ascii"
] | I've got a list in a Python program that contains a series of numbers, which are themselves ASCII values. How do I convert this into a "regular" string that I can echo to the screen? | Same basic solution as others, but I personally prefer to use map instead of the list comprehension:
```
>>> L = [104, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100]
>>> ''.join(map(chr,L))
'hello, world'
``` |
How do I convert a list of ascii values to a string in python? | 180,606 | 31 | 2008-10-07T21:51:01Z | 184,708 | 8 | 2008-10-08T20:22:31Z | [
"python",
"string",
"ascii"
] | I've got a list in a Python program that contains a series of numbers, which are themselves ASCII values. How do I convert this into a "regular" string that I can echo to the screen? | ```
import array

def f7(list):
    return array.array('B', list).tostring()
```
from [Python Patterns - An Optimization Anecdote](http://www.python.org/doc/essays/list2str.html) |
In the Django admin interface, is there a way to duplicate an item? | 180,809 | 40 | 2008-10-07T23:23:11Z | 180,816 | 71 | 2008-10-07T23:26:15Z | [
"python",
"django",
"django-models",
"django-admin"
] | Just wondering if there is an easy way to add the functionality to duplicate an existing listing in the admin interface?
In data entry we have run into a situation where a lot of items share generic data with another item, and to save time it would be very nice to quickly duplicate an existing listing and only alter the changed data. Using a better model structure would be one way of reducing the duplication of the data, but there may be situations where the duplicated data needs to be changed on an individual basis in the future. | You can [save as](http://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.ModelAdmin.save_as) by just adding this to your ModelAdmin:
```
save_as = True
```
This replaces the "Save and add another" button with a "Save as" button. "Save as" means the object will be saved as a new object (with a new ID), rather than the old object. |
What is the difference between Python's re.search and re.match? | 180,986 | 189 | 2008-10-08T00:51:36Z | 180,993 | 209 | 2008-10-08T00:53:12Z | [
"python",
"regex",
"search",
"match"
] | What is the difference between the `search()` and `match()` functions in the [Python `re` module](http://docs.python.org/2/library/re.html)?
I've read the [documentation](http://www.python.org/doc/2.5.2/lib/matching-searching.html) ([current documentation](http://docs.python.org/2/library/re.html?highlight=matching%20searching#search-vs-match)), but I never seem to remember it. I keep having to look it up and re-learn it. I'm hoping that someone will answer it clearly with examples so that (perhaps) it will stick in my head. Or at least I'll have a better place to return with my question and it will take less time to re-learn it. | `re.match` is anchored at the beginning of the string. That has nothing to do with newlines, so it is not the same as using `^` in the pattern.
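A two-line demonstration of that distinction (illustrative strings):

```python
import re

# match() only succeeds at position 0; search() scans the whole string
assert re.match('world', 'hello world') is None
assert re.search('world', 'hello world') is not None
```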
As [re.match documentation](http://docs.python.org/2/library/re.html#re.match) says:
> If zero or more characters at the
> **beginning of string** match the regular expression pattern, return a
> corresponding `MatchObject` instance.
> Return `None` if the string does not
> match the pattern; note that this is
> different from a zero-length match.
>
> Note: If you want to locate a match
> anywhere in string, use `search()`
> instead.
`re.search` searches the entire string, as [the documentation says](http://docs.python.org/2/library/re.html#re.search):
> **Scan through string** looking for a
> location where the regular expression
> pattern produces a match, and return a
> corresponding `MatchObject` instance.
> Return `None` if no position in the
> string matches the pattern; note that
> this is different from finding a
> zero-length match at some point in the
> string.
So if you need to match at the beginning of the string, or to match the entire string use `match`. It is faster. Otherwise use `search`.
The documentation has a [specific section for `match` vs. `search`](http://docs.python.org/2/library/re.html#search-vs-match) that also covers multiline strings:
> Python offers two different primitive
> operations based on regular
> expressions: `match` checks for a match
> **only at the beginning** of the string,
> while `search` checks for a match
> **anywhere** in the string (this is what
> Perl does by default).
>
> Note that `match` may differ from `search`
> even when using a regular expression
> beginning with `'^'`: `'^'` matches only
> at the start of the string, or in
> `MULTILINE` mode also immediately
> following a newline. The `match`
> operation succeeds *only if the pattern
> matches at the **start** of the string*
> regardless of mode, or at the starting
> position given by the optional `pos`
> argument regardless of whether a
> newline precedes it.
Now, enough talk. Time to some example code:
```
# example code:
string_with_newlines = """something
someotherthing"""
import re
print re.match('some', string_with_newlines)  # matches
print re.match('someother', string_with_newlines)  # won't match
print re.match('^someother', string_with_newlines, re.MULTILINE)  # also won't match
print re.search('someother', string_with_newlines)  # finds something
print re.search('^someother', string_with_newlines, re.MULTILINE)  # also finds something

m = re.compile('thing$', re.MULTILINE)
print m.match(string_with_newlines)  # no match
print m.match(string_with_newlines, pos=4)  # matches
print m.search(string_with_newlines)  # also matches
``` |
What is the difference between Python's re.search and re.match? | 180,986 | 189 | 2008-10-08T00:51:36Z | 181,000 | 9 | 2008-10-08T00:54:57Z | [
"python",
"regex",
"search",
"match"
] | What is the difference between the `search()` and `match()` functions in the [Python `re` module](http://docs.python.org/2/library/re.html)?
I've read the [documentation](http://www.python.org/doc/2.5.2/lib/matching-searching.html) ([current documentation](http://docs.python.org/2/library/re.html?highlight=matching%20searching#search-vs-match)), but I never seem to remember it. I keep having to look it up and re-learn it. I'm hoping that someone will answer it clearly with examples so that (perhaps) it will stick in my head. Or at least I'll have a better place to return with my question and it will take less time to re-learn it. | re.match attempts to match a pattern **at the beginning of the string**. re.search attempts to match the pattern **throughout the string** until it finds a match. |
What is the difference between Python's re.search and re.match? | 180,986 | 189 | 2008-10-08T00:51:36Z | 181,028 | 37 | 2008-10-08T01:07:26Z | [
"python",
"regex",
"search",
"match"
] | What is the difference between the `search()` and `match()` functions in the [Python `re` module](http://docs.python.org/2/library/re.html)?
I've read the [documentation](http://www.python.org/doc/2.5.2/lib/matching-searching.html) ([current documentation](http://docs.python.org/2/library/re.html?highlight=matching%20searching#search-vs-match)), but I never seem to remember it. I keep having to look it up and re-learn it. I'm hoping that someone will answer it clearly with examples so that (perhaps) it will stick in my head. Or at least I'll have a better place to return with my question and it will take less time to re-learn it. | `re.search` **search**es for the pattern **throughout the string**, whereas `re.match` does *not search* the pattern; if it does not, it has no other choice than to **match** it at start of the string. |
What is the difference between Python's re.search and re.match? | 180,986 | 189 | 2008-10-08T00:51:36Z | 8,687,988 | 32 | 2011-12-31T12:05:43Z | [
"python",
"regex",
"search",
"match"
] | What is the difference between the `search()` and `match()` functions in the [Python `re` module](http://docs.python.org/2/library/re.html)?
I've read the [documentation](http://www.python.org/doc/2.5.2/lib/matching-searching.html) ([current documentation](http://docs.python.org/2/library/re.html?highlight=matching%20searching#search-vs-match)), but I never seem to remember it. I keep having to look it up and re-learn it. I'm hoping that someone will answer it clearly with examples so that (perhaps) it will stick in my head. Or at least I'll have a better place to return with my question and it will take less time to re-learn it. | `search` ⇒ find something anywhere in the string and return a match object.
`match` ⇒ find something at the *beginning* of the string and return a match object. |
Python style: multiple-line conditions in IFs | 181,530 | 270 | 2008-10-08T06:19:07Z | 181,553 | 11 | 2008-10-08T06:31:54Z | [
"python",
"coding-style",
"if-statement"
] | Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is:
```
if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using the correct Python indentation of 4 spaces.
For the moment I'm using:
```
if ( cond1 == 'val1' and cond2 == 'val2' and
     cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
But this isn't very pretty. :-)
Can you recommend an alternative way ? | This doesn't improve so much but...
```
allCondsAreOK = (cond1 == 'val1' and cond2 == 'val2' and
                 cond3 == 'val3' and cond4 == 'val4')
if allCondsAreOK:
    do_something
``` |
Python style: multiple-line conditions in IFs | 181,530 | 270 | 2008-10-08T06:19:07Z | 181,557 | 296 | 2008-10-08T06:34:25Z | [
"python",
"coding-style",
"if-statement"
] | Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is:
```
if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using the correct Python indentation of 4 spaces.
For the moment I'm using:
```
if ( cond1 == 'val1' and cond2 == 'val2' and
     cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
But this isn't very pretty. :-)
Can you recommend an alternative way ? | You don't need to use 4 spaces on your second conditional line. Maybe use:
```
if (cond1 == 'val1' and cond2 == 'val2' and
        cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
Also, don't forget the whitespace is more flexible than you might think:
```
if (
    cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'
):
    do_something

if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
Both of those are fairly ugly though.
Maybe lose the brackets (the [Style Guide](https://www.python.org/dev/peps/pep-0008/#multiline-if-statements) discourages this though)?
```
if cond1 == 'val1' and cond2 == 'val2' and \
   cond3 == 'val3' and cond4 == 'val4':
    do_something
```
This at least gives you some differentiation.
Or even:
```
if cond1 == 'val1' and cond2 == 'val2' and \
   cond3 == 'val3' and \
   cond4 == 'val4':
    do_something
```
I think I prefer:
```
if cond1 == 'val1' and \
   cond2 == 'val2' and \
   cond3 == 'val3' and \
   cond4 == 'val4':
    do_something
```
Here's the [Style Guide](https://www.python.org/dev/peps/pep-0008/#multiline-if-statements), which (since 2010) recommends using brackets. |
Python style: multiple-line conditions in IFs | 181,530 | 270 | 2008-10-08T06:19:07Z | 181,641 | 13 | 2008-10-08T07:19:09Z | [
"python",
"coding-style",
"if-statement"
] | Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is:
```
if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using the correct Python indentation of 4 spaces.
For the moment I'm using:
```
if ( cond1 == 'val1' and cond2 == 'val2' and
     cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
But this isn't very pretty. :-)
Can you recommend an alternative way ? | I suggest moving the `and` keyword to the second line and indenting all lines containing conditions with two spaces instead of four:
```
if (cond1 == 'val1' and cond2 == 'val2'
  and cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This is exactly how I solve this problem in my code. Having a keyword as the first word in the line makes the condition a lot more readable, and reducing the number of spaces further distinguishes condition from action. |
Python style: multiple-line conditions in IFs | 181,530 | 270 | 2008-10-08T06:19:07Z | 181,848 | 11 | 2008-10-08T08:39:58Z | [
"python",
"coding-style",
"if-statement"
] | Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is:
```
if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using the correct Python indentation of 4 spaces.
For the moment I'm using:
```
if ( cond1 == 'val1' and cond2 == 'val2' and
     cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
But this isn't very pretty. :-)
Can you recommend an alternative way ? | I prefer this style when I have a terribly large if-condition:
```
if (
    expr1
    and (expr2 or expr3)
    and hasattr(thingy1, '__eq__')
    or status == "HappyTimes"
):
    do_stuff()
else:
    do_other_stuff()
``` |
Python style: multiple-line conditions in IFs | 181,530 | 270 | 2008-10-08T06:19:07Z | 182,050 | 61 | 2008-10-08T10:26:48Z | [
"python",
"coding-style",
"if-statement"
] | Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is:
```
if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using the correct Python indentation of 4 spaces.
For the moment I'm using:
```
if ( cond1 == 'val1' and cond2 == 'val2' and
     cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
But this isn't very pretty. :-)
Can you recommend an alternative way ? | I've resorted to the following in the degenerate case where it's simply AND's or OR's.
```
if all( [cond1 == 'val1', cond2 == 'val2', cond3 == 'val3', cond4 == 'val4'] ):
if any( [cond1 == 'val1', cond2 == 'val2', cond3 == 'val3', cond4 == 'val4'] ):
```
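For concreteness, the same calls with illustrative values, one failing and one succeeding:

```python
cond1, cond2, cond3, cond4 = 'val1', 'val2', 'val3', 'other'

# all() requires every condition to hold; any() requires at least one
assert not all([cond1 == 'val1', cond2 == 'val2', cond3 == 'val3', cond4 == 'val4'])
assert any([cond1 == 'val1', cond2 == 'val2', cond3 == 'val3', cond4 == 'val4'])
```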
It shaves a few characters and makes it clear that there's no subtlety to the condition. |
Python style: multiple-line conditions in IFs | 181,530 | 270 | 2008-10-08T06:19:07Z | 183,206 | 27 | 2008-10-08T14:55:02Z | [
"python",
"coding-style",
"if-statement"
] | Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is:
```
if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using the correct Python indentation of 4 spaces.
For the moment I'm using:
```
if ( cond1 == 'val1' and cond2 == 'val2' and
     cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
But this isn't very pretty. :-)
Can you recommend an alternative way ? | *Someone* has to champion use of vertical whitespace here! :)
```
if ( cond1 == val1
     and cond2 == val2
     and cond3 == val3
   ):
    do_stuff()
```
This makes each condition clearly visible. It also allows cleaner expression of more complex conditions:
```
if ( cond1 == val1
     or
     ( cond2_1 == val2_1
       and cond2_2 >= val2_2
       and cond2_3 != bad2_3
     )
   ):
    do_more_stuff()
```
Yes, we're trading off a bit of vertical real estate for clarity. Well worth it IMO. |
Python style: multiple-line conditions in IFs | 181,530 | 270 | 2008-10-08T06:19:07Z | 4,690,241 | 12 | 2011-01-14T10:50:32Z | [
"python",
"coding-style",
"if-statement"
] | Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is:
```
if (cond1 == 'val1' and cond2 == 'val2' and
    cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using the correct Python indentation of 4 spaces.
For the moment I'm using:
```
if ( cond1 == 'val1' and cond2 == 'val2' and
     cond3 == 'val3' and cond4 == 'val4'):
    do_something
```
But this isn't very pretty. :-)
Can you recommend an alternative way ? | Here's my very personal take: long conditions are (in my view) a code smell that suggests refactoring into a boolean-returning function/method. For example:
```
def is_action_required(...):
    return (cond1 == 'val1' and cond2 == 'val2'
            and cond3 == 'val3' and cond4 == 'val4')
```
Now, if I found a way to make multi-line conditions look good, I would probably find myself content with having them and skip the refactoring.
On the other hand, having them perturb my aesthetic sense acts as an incentive for refactoring.
My conclusion, therefore, is that multiple line conditions should look ugly and this is an incentive to avoid them. |
What is the problem with reduce()? | 181,543 | 47 | 2008-10-08T06:27:12Z | 181,593 | 7 | 2008-10-08T06:53:54Z | [
"python",
"python-3.x"
] | There seems to be a lot of heated discussion on the net about the changes to the reduce() function in python 3.0 and how it should be removed. I am having a little difficulty understanding why this is the case; I find it quite reasonable to use it in a variety of cases. If the contempt was simply subjective, I cannot imagine that such a large number of people would care about it.
What am I missing? What is the problem with reduce()? | People worry it encourages an obfuscated style of programming, doing something that can be achieved with clearer methods.
I'm not against reduce myself, I also find it a useful tool sometimes. |
What is the problem with reduce()? | 181,543 | 47 | 2008-10-08T06:27:12Z | 181,646 | 24 | 2008-10-08T07:20:43Z | [
"python",
"python-3.x"
] | There seems to be a lot of heated discussion on the net about the changes to the reduce() function in python 3.0 and how it should be removed. I am having a little difficulty understanding why this is the case; I find it quite reasonable to use it in a variety of cases. If the contempt was simply subjective, I cannot imagine that such a large number of people would care about it.
What am I missing? What is the problem with reduce()? | `reduce()` is not being removed -- it's simply being moved into the `functools` module. Guido's reasoning is that except for trivial cases like summation, code written using `reduce()` is usually clearer when written as an accumulation loop. |
What is the problem with reduce()? | 181,543 | 47 | 2008-10-08T06:27:12Z | 181,706 | 56 | 2008-10-08T07:42:12Z | [
"python",
"python-3.x"
] | There seems to be a lot of heated discussion on the net about the changes to the reduce() function in python 3.0 and how it should be removed. I am having a little difficulty understanding why this is the case; I find it quite reasonable to use it in a variety of cases. If the contempt was simply subjective, I cannot imagine that such a large number of people would care about it.
What am I missing? What is the problem with reduce()? | As Guido says in his [The fate of reduce() in Python 3000](http://www.artima.com/weblogs/viewpost.jsp?thread=98196) post:
> So now reduce(). This is actually the one I've always hated most, because, apart from a few examples involving + or \*, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do. So in my mind, the applicability of reduce() is pretty much limited to associative operators, and in all other cases it's better to write out the accumulation loop explicitly.
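To make the associative-operator case concrete, here is a `reduce()` next to the explicit accumulation loop Guido prefers (a small sketch; `functools` is where `reduce()` lives from Python 2.6 onward and in Python 3):

```python
from functools import reduce
import operator

items = [1, 2, 3, 4]

# reduce() is still readable for a simple associative operator...
product = reduce(operator.mul, items, 1)

# ...but the explicit accumulation loop is what the quote recommends
total = 1
for x in items:
    total *= x

assert product == total == 24
```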
There is an excellent example of a confusing `reduce` in the [Functional Programming HOWTO](http://www.amk.ca/python/writing/functional) article:
> Quick, what's the following code doing?
>
> ```
> total = reduce(lambda a, b: (0, a[1] + b[1]), items)[1]
> ```
>
> You can figure it out, but it takes time to disentangle the expression to figure out
> what's going on. Using a short nested def statements makes things a little bit better:
>
> ```
> def combine (a, b):
>     return 0, a[1] + b[1]
>
> total = reduce(combine, items)[1]
> ```
>
> But it would be best of all if I had simply used a for loop:
>
> ```
> total = 0
> for a, b in items:
>     total += b
> ```
>
> Or the sum() built-in and a generator expression:
>
> ```
> total = sum(b for a,b in items)
> ```
>
> Many uses of reduce() are clearer when written as for loops. |
wxPython: displaying multiple widgets in same frame | 181,573 | 2 | 2008-10-08T06:41:18Z | 181,626 | 8 | 2008-10-08T07:11:08Z | [
"python",
"user-interface",
"layout",
"wxpython",
"wxwidgets"
] | I would like to be able to display `Notebook` and a `TxtCtrl` wx widgets in a single frame. Below is an example adapted from the wxpython wiki; is it possible to change their layout (maybe with something like `wx.SplitterWindow`) to display the text box below the `Notebook` in the same frame?
```
import wx
import wx.lib.sheet as sheet

class MySheet(sheet.CSheet):
    def __init__(self, parent):
        sheet.CSheet.__init__(self, parent)
        self.SetLabelBackgroundColour('#CCFF66')
        self.SetNumberRows(50)
        self.SetNumberCols(50)

class Notebook(wx.Frame):
    def __init__(self, parent, id, title):
        wx.Frame.__init__(self, parent, id, title, size=(600, 600))
        menubar = wx.MenuBar()
        file = wx.Menu()
        file.Append(101, 'Quit', '')
        menubar.Append(file, "&File")
        self.SetMenuBar(menubar)
        wx.EVT_MENU(self, 101, self.OnQuit)
        nb = wx.Notebook(self, -1, style=wx.NB_BOTTOM)
        self.sheet1 = MySheet(nb)
        self.sheet2 = MySheet(nb)
        self.sheet3 = MySheet(nb)
        nb.AddPage(self.sheet1, "Sheet1")
        nb.AddPage(self.sheet2, "Sheet2")
        nb.AddPage(self.sheet3, "Sheet3")
        self.sheet1.SetFocus()
        self.StatusBar()

    def StatusBar(self):
        self.statusbar = self.CreateStatusBar()

    def OnQuit(self, event):
        self.Close()

class MyFrame(wx.Frame):
    def __init__(self, parent, id, title):
        wx.Frame.__init__(self, parent, id, title, wx.DefaultPosition, wx.Size(450, 400))
        self.text = wx.TextCtrl(self, -1, style=wx.TE_MULTILINE)
        self.Center()

class MyApp(wx.App):
    def OnInit(self):
        frame = Notebook(None, -1, 'notebook.py')
        frame.Show(True)
        frame.Center()
        frame2 = MyFrame(None, -1, '')
        frame2.Show(True)
        self.SetTopWindow(frame2)
        return True

app = MyApp(0)
app.MainLoop()
``` | Making two widgets appear on the same frame is easy, actually. You should use sizers to accomplish this.
In your example, you can change your `Notebook` class implementation to something like this:
```
class Notebook(wx.Frame):
    def __init__(self, parent, id, title):
        wx.Frame.__init__(self, parent, id, title, size=(600, 600))
        menubar = wx.MenuBar()
        file = wx.Menu()
        file.Append(101, 'Quit', '')
        menubar.Append(file, "&File")
        self.SetMenuBar(menubar)
        wx.EVT_MENU(self, 101, self.OnQuit)
        nb = wx.Notebook(self, -1, style=wx.NB_BOTTOM)
        self.sheet1 = MySheet(nb)
        self.sheet2 = MySheet(nb)
        self.sheet3 = MySheet(nb)
        nb.AddPage(self.sheet1, "Sheet1")
        nb.AddPage(self.sheet2, "Sheet2")
        nb.AddPage(self.sheet3, "Sheet3")
        self.sheet1.SetFocus()
        self.StatusBar()

        # new code begins here:
        # add your text ctrl:
        self.text = wx.TextCtrl(self, -1, style=wx.TE_MULTILINE)
        # create a new sizer for both controls:
        sizer = wx.BoxSizer(wx.VERTICAL)
        # add notebook first, with size factor 2:
        sizer.Add(nb, 2)
        # then text, size factor 1, maximized:
        sizer.Add(self.text, 1, wx.EXPAND)
        # assign the sizer to the Frame:
        self.SetSizerAndFit(sizer)
```
Only the `__init__` method is changed. Note that you can manipulate the proportions between the notebook and text control by changing the second argument of the `Add` method.
You can learn more about sizers from the official [Sizer overview](http://docs.wxwidgets.org/2.8/wx_sizeroverview.html) article. |
When to use the Python debugger | 181,724 | 5 | 2008-10-08T07:49:01Z | 181,767 | 7 | 2008-10-08T08:04:08Z | [
"python",
"debugging"
] | Since Python is a dynamic, interpreted language you don't have to compile your code before running it. Hence, it's very easy to simply write your code, run it, see what problems occur, and fix them. Using hotkeys or macros can make this incredibly quick.
So, because it's so easy to immediately see the output of your program and any errors that may occur, I haven't used a debugger tool yet. What situations may call for using a real debugger vs. the method I currently use?
I'd like to know before I get into a situation and get frustrated because I don't know how to fix the problem. | I use pdb for basic python debugging. Some of the situations I use it are:
* When you have a loop iterating over 100,000 entries and want to break at a specific point, it becomes really helpful (conditional breaks).
* Trace the control flow of someone else's code.
* It's always better to use a debugger than litter the code with prints.
* Normally there can be more than one point of failure resulting in a bug, and not all of them are obvious at first look. So you look for obvious places, and if nothing is wrong there, you move ahead and add some more prints... a debugger can save you time here, since you don't need to add the print and run again.
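For the first point, the conditional-break pattern can also be emulated directly in code while you narrow things down. A minimal sketch (the loop body, index, and `DEBUG` flag are invented for illustration; with `DEBUG = True` the script drops into pdb at exactly the iteration you care about):

```python
import pdb

DEBUG = False  # flip to True to actually drop into the debugger

def process(entries):
    total = 0
    for i, entry in enumerate(entries):
        # pause only at the one entry out of 100,000 you care about
        if DEBUG and i == 99999:
            pdb.set_trace()
        total += entry
    return total

print(process(range(100000)))  # 4999950000
```

pdb also supports this natively: once inside the debugger, `break <lineno>, i == 99999` sets a breakpoint that only fires when the condition holds.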
When to use the Python debugger | 181,724 | 5 | 2008-10-08T07:49:01Z | 182,010 | 8 | 2008-10-08T10:09:56Z | [
"python",
"debugging"
] | Since Python is a dynamic, interpreted language you don't have to compile your code before running it. Hence, it's very easy to simply write your code, run it, see what problems occur, and fix them. Using hotkeys or macros can make this incredibly quick.
So, because it's so easy to immediately see the output of your program and any errors that may occur, I haven't used a debugger tool yet. What situations may call for using a real debugger vs. the method I currently use?
I'd like to know before I get into a situation and get frustrated because I don't know how to fix the problem. | In 30 years of programming I've used a debugger exactly 4 times. All four times were to read the `core` file produced from a C program crashing to locate the traceback information that's buried in there.
I don't think debuggers help much, even in compiled languages. Many people like debuggers, there are some reasons for using them, I'm sure, or people wouldn't lavish such love and care on them.
Here's the point -- **software is knowledge capture**.
Yes, it does have to run. More importantly, however, software has **meaning**.
This is not an indictment of *your* use of a debugger. However, I find that the folks who rely on debugging will sometimes produce really odd-looking code and won't have a good justification for what it **means**. They can only say "it may be a hack, but it works."
My suggestion on debuggers is "don't bother".
"But, what if I'm totally stumped?" you ask, "should I learn the debugger then?" Totally stumped by what? The language? Python's too simple for utter befuddlement. Some library? Perhaps.
Here's what you do -- with or without a debugger.
1. You have the source, read it.
2. You write small tests to exercise the library. Using the interactive shell, if possible. [All the really good libraries seem to show their features using the interactive Python mode -- I strive for this level of tight, clear simplicity.]
3. You have the source, add print functions. |
Setup Python environment on Windows | 182,053 | 3 | 2008-10-08T10:29:28Z | 182,057 | 7 | 2008-10-08T10:31:26Z | [
"python",
"windows",
"database",
"installation",
"development-environment"
How do I set up a Python environment on a Windows computer so I can start writing and running Python scripts? Is there an install bundle? Also, which database should I use?
Thanks
---
Sorry, I should have mentioned that I am using this for web-based applications. Does it require Apache, or does it use another HTTP server? What is the standard setup for Python running web apps?
---
Most web development frameworks (Django, Turbogears, etc) come with a built-in webserver command that runs on the local computer without Apache. |
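As a sketch of how little setup the bundled SQLite support needs (the table and row here are invented for illustration; `:memory:` keeps everything in RAM — pass a filename to persist to disk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a filename for an on-disk database
conn.execute("CREATE TABLE shows (id INTEGER PRIMARY KEY, name TEXT)")
# "?" placeholders let sqlite3 escape values for you
conn.execute("INSERT INTO shows (name) VALUES (?)", ("Sesame Street",))
conn.commit()

row = conn.execute("SELECT id, name FROM shows").fetchone()
print(row)  # (1, 'Sesame Street')
conn.close()
```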
How do I watch a file for changes using Python? | 182,197 | 197 | 2008-10-08T11:12:55Z | 182,247 | 46 | 2008-10-08T11:29:43Z | [
"python",
"file",
"pywin32",
"watch"
] | I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the `win32file.FindNextChangeNotification` function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
**[Edit]** I should have mentioned that I was after a solution that doesn't require polling.
**[Edit]** Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk. | Have you already looked at the documentation available on <http://timgolden.me.uk/python/win32_how_do_i/watch_directory_for_changes.html>? If you only need it to work under Windows the 2nd example seems to be exactly what you want (if you exchange the path of the directory with the one of the file you want to watch).
Otherwise, polling will probably be the only really platform-independent option.
**Note:** I haven't tried any of these solutions. |
How do I watch a file for changes using Python? | 182,197 | 197 | 2008-10-08T11:12:55Z | 182,259 | 49 | 2008-10-08T11:34:14Z | [
"python",
"file",
"pywin32",
"watch"
] | I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the `win32file.FindNextChangeNotification` function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
**[Edit]** I should have mentioned that I was after a solution that doesn't require polling.
**[Edit]** Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk. | If polling is good enough for you, I'd just watch if the "modified time" file stat changes. To read it:
```
os.stat(filename).st_mtime
```
(Also note that the Windows native change event solution does not work in all circumstances, e.g. on network drives.)
```
import os
class Monkey(object):
def __init__(self):
self._cached_stamp = 0
self.filename = '/path/to/file'
def ook(self):
stamp = os.stat(self.filename).st_mtime
if stamp != self._cached_stamp:
self._cached_stamp = stamp
# File has changed, so do something...
``` |
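A slightly restructured version of the same idea, with an explicit `check()` method so it can be driven from any loop (the class and method names here are my own):

```python
import os

class FileWatcher(object):
    """Polling watcher: check() returns True when the file's
    modification time has changed since the previous call."""
    def __init__(self, filename):
        self.filename = filename
        self._cached_stamp = os.stat(filename).st_mtime

    def check(self):
        stamp = os.stat(self.filename).st_mtime
        if stamp != self._cached_stamp:
            self._cached_stamp = stamp
            return True
        return False
```

In a real script you would call `check()` inside a `while True:` loop with a `time.sleep()` between polls, reading the new data whenever it returns True.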
How do I watch a file for changes using Python? | 182,197 | 197 | 2008-10-08T11:12:55Z | 182,953 | 10 | 2008-10-08T14:05:53Z | [
"python",
"file",
"pywin32",
"watch"
] | I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the `win32file.FindNextChangeNotification` function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
**[Edit]** I should have mentioned that I was after a solution that doesn't require polling.
**[Edit]** Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk. | Well after a bit of hacking of Tim Golden's script, I have the following which seems to work quite well:
```
import os
import win32file
import win32con
path_to_watch = "." # look at the current directory
file_to_watch = "test.txt" # look for changes to a file called test.txt
def ProcessNewData( newData ):
print "Text added: %s"%newData
# Set up the bits we'll need for output
ACTIONS = {
1 : "Created",
2 : "Deleted",
3 : "Updated",
4 : "Renamed from something",
5 : "Renamed to something"
}
FILE_LIST_DIRECTORY = 0x0001
hDir = win32file.CreateFile (
path_to_watch,
FILE_LIST_DIRECTORY,
win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE,
None,
win32con.OPEN_EXISTING,
win32con.FILE_FLAG_BACKUP_SEMANTICS,
None
)
# Open the file we're interested in
a = open(file_to_watch, "r")
# Throw away any existing log data
a.read()
# Wait for new data and call ProcessNewData for each new chunk that's written
while 1:
# Wait for a change to occur
results = win32file.ReadDirectoryChangesW (
hDir,
1024,
False,
win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,
None,
None
)
# For each change, check to see if it's updating the file we're interested in
for action, file in results:
full_filename = os.path.join (path_to_watch, file)
#print file, ACTIONS.get (action, "Unknown")
if file == file_to_watch:
newText = a.read()
if newText != "":
ProcessNewData( newText )
```
It could probably do with a load more error checking, but for simply watching a log file and doing some processing on it before spitting it out to the screen, this works well.
Thanks everyone for your input - great stuff! |
How do I watch a file for changes using Python? | 182,197 | 197 | 2008-10-08T11:12:55Z | 473,471 | 20 | 2009-01-23T16:08:40Z | [
"python",
"file",
"pywin32",
"watch"
] | I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the `win32file.FindNextChangeNotification` function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
**[Edit]** I should have mentioned that I was after a solution that doesn't require polling.
**[Edit]** Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk. | It should not work on windows (maybe with cygwin ?), but for unix user, you should use the "fcntl" system call. Here is an example in Python. It's mostly the same code if you need to write it in C (same function names)
```
import time
import fcntl
import os
import signal
FNAME = "/HOME/TOTO/FILETOWATCH"
def handler(signum, frame):
print "File %s modified" % (FNAME,)
signal.signal(signal.SIGIO, handler)
fd = os.open(FNAME, os.O_RDONLY)
fcntl.fcntl(fd, fcntl.F_SETSIG, 0)
fcntl.fcntl(fd, fcntl.F_NOTIFY,
fcntl.DN_MODIFY | fcntl.DN_CREATE | fcntl.DN_MULTISHOT)
while True:
time.sleep(10000)
``` |
How do I watch a file for changes using Python? | 182,197 | 197 | 2008-10-08T11:12:55Z | 3,031,168 | 19 | 2010-06-13T05:12:49Z | [
"python",
"file",
"pywin32",
"watch"
] | I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the `win32file.FindNextChangeNotification` function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
**[Edit]** I should have mentioned that I was after a solution that doesn't require polling.
**[Edit]** Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk. | Check out [pyinotify](https://github.com/seb-m/pyinotify).
inotify replaces dnotify (from an earlier answer) in newer linuxes and allows file-level rather than directory-level monitoring. |
How do I watch a file for changes using Python? | 182,197 | 197 | 2008-10-08T11:12:55Z | 4,690,739 | 164 | 2011-01-14T11:52:26Z | [
"python",
"file",
"pywin32",
"watch"
] | I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the `win32file.FindNextChangeNotification` function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
**[Edit]** I should have mentioned that I was after a solution that doesn't require polling.
**[Edit]** Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk. | Did you try using [Watchdog](http://packages.python.org/watchdog/)?
> Python API library and shell utilities to monitor file system events.
>
> ### Directory monitoring made easy with
>
> * A cross-platform API.
> * A shell tool to run commands in response to directory changes.
>
> Get started quickly with a simple example in [Quickstart](https://pythonhosted.org/watchdog/quickstart.html#quickstart)... |
How do I watch a file for changes using Python? | 182,197 | 197 | 2008-10-08T11:12:55Z | 5,339,877 | 41 | 2011-03-17T13:45:31Z | [
"python",
"file",
"pywin32",
"watch"
] | I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the `win32file.FindNextChangeNotification` function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
**[Edit]** I should have mentioned that I was after a solution that doesn't require polling.
**[Edit]** Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk. | If you want a multiplatform solution, then check [QFileSystemWatcher](http://doc.qt.io/qt-4.8/qfilesystemwatcher.html).
Here is some example code (not sanitized):
```
from PyQt4 import QtCore
@QtCore.pyqtSlot(str)
def directory_changed(path):
print('Directory Changed!!!')
@QtCore.pyqtSlot(str)
def file_changed(path):
print('File Changed!!!')
fs_watcher = QtCore.QFileSystemWatcher(['/path/to/files_1', '/path/to/files_2', '/path/to/files_3'])
fs_watcher.connect(fs_watcher, QtCore.SIGNAL('directoryChanged(QString)'), directory_changed)
fs_watcher.connect(fs_watcher, QtCore.SIGNAL('fileChanged(QString)'), file_changed)
``` |
What do I need to import to gain access to my models? | 182,229 | 7 | 2008-10-08T11:23:54Z | 182,345 | 12 | 2008-10-08T11:55:46Z | [
"python",
"django"
] | I'd like to run a script to populate my database. I'd like to access it through the Django database API.
The only problem is that I don't know what I would need to import to gain access to this.
How can this be achieved? | Import your settings module too
```
import os
os.environ["DJANGO_SETTINGS_MODULE"] = "mysite.settings"
from mysite.polls.models import Poll, Choice
```
should do the trick. |
Os.path : can you explain this behavior? | 182,253 | 5 | 2008-10-08T11:32:16Z | 182,417 | 24 | 2008-10-08T12:11:55Z | [
"python",
"path"
] | I love Python because it comes batteries included, and I use built-in functions, a lot, to do the dirty job for me.
I have always happily used the os.path module to deal with file paths, but recently I ended up with unexpected results on Python 2.5 under Ubuntu Linux while dealing with strings that represent Windows file paths:
```
filepath = r"c:\ttemp\FILEPA~1.EXE"
print os.path.basename(filepath)
'c:\\ttemp\\FILEPA~1.EXE'
print os.path.splitdrive(filepath)
('', 'c:\ttemp\\FILEPA~1.EXE')
```
WTF ?
It ends up the same way with filepath = u"c:\ttemp\FILEPA~1.EXE" and filepath = "c:\ttemp\FILEPA~1.EXE".
Do you have a clue? Ubuntu uses UTF-8, but I don't think it has anything to do with it. Maybe my Python install is messed up, but I did not perform any particular tweak on it that I can remember.
```
>>> import ntpath
>>> filepath = r"c:\ttemp\FILEPA~1.EXE"
>>> print ntpath.basename(filepath)
FILEPA~1.EXE
>>> print ntpath.splitdrive(filepath)
('c:', '\\ttemp\\FILEPA~1.EXE')
``` |
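Running both path modules side by side (this works on any platform) makes the difference in the question obvious:

```python
import ntpath
import posixpath  # what os.path resolves to on Linux

filepath = r"c:\ttemp\FILEPA~1.EXE"

# posixpath treats backslashes as ordinary filename characters
print(posixpath.basename(filepath))    # c:\ttemp\FILEPA~1.EXE
print(posixpath.splitdrive(filepath))  # ('', 'c:\\ttemp\\FILEPA~1.EXE')

# ntpath understands drive letters and backslash separators
print(ntpath.basename(filepath))       # FILEPA~1.EXE
print(ntpath.splitdrive(filepath))     # ('c:', '\\ttemp\\FILEPA~1.EXE')
```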
How can I use UUIDs in SQLAlchemy? | 183,042 | 26 | 2008-10-08T14:26:20Z | 812,363 | 40 | 2009-05-01T17:29:48Z | [
"python",
"postgresql",
"orm",
"sqlalchemy",
"uuid"
] | Is there a way to define a column (primary key) as a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier) in [SQLAlchemy](http://www.sqlalchemy.org/) if using [PostgreSQL](http://www.postgresql.org/) (Postgres)? | [I wrote this](http://blog.sadphaeton.com/2009/01/19/sqlalchemy-recipeuuid-column.html) and the domain is gone but here's the guts....
Regardless of how my colleagues who really care about proper database design feel about UUID's and GUIDs used for key fields. I often find I need to do it. I think it has some advantages over autoincrement that make it worth it.
I've been refining a UUID column type for the past few months and I think I've finally got it solid.
```
from sqlalchemy import types
from sqlalchemy.dialects.mysql.base import MSBinary
from sqlalchemy.schema import Column
import uuid
class UUID(types.TypeDecorator):
impl = MSBinary
def __init__(self):
self.impl.length = 16
types.TypeDecorator.__init__(self,length=self.impl.length)
def process_bind_param(self,value,dialect=None):
if value and isinstance(value,uuid.UUID):
return value.bytes
elif value and not isinstance(value,uuid.UUID):
raise ValueError,'value %s is not a valid uuid.UUID' % value
else:
return None
def process_result_value(self,value,dialect=None):
if value:
return uuid.UUID(bytes=value)
else:
return None
def is_mutable(self):
return False
id_column_name = "id"
def id_column():
import uuid
return Column(id_column_name,UUID(),primary_key=True,default=uuid.uuid4)
# Usage
my_table = Table('test',
metadata,
id_column(),
Column('parent_id',
UUID(),
ForeignKey(table_parent.c.id)))
```
I believe storing as binary (16 bytes) should end up being more efficient than the string representation (36 bytes?), and there seems to be some indication that indexing 16-byte blocks should be more efficient in MySQL than strings. I wouldn't expect it to be worse anyway.
One disadvantage I've found is that at least in phpMyAdmin, you can't edit records because it implicitly tries to do some sort of character conversion for the "select \* from table where id =..." and there are miscellaneous display issues.
Other than that everything seems to work fine, and so I'm throwing it out there. Leave a comment if you see a glaring error with it. I welcome any suggestions for improving it.
Unless I'm missing something the above solution will work if the underlying database has a UUID type. If it doesn't, you would likely get errors when the table is created. The solution I came up with I was targeting MS SQL Server originally and then went with MySQL in the end, so I think my solution is a little more flexible as it seems to work fine on MySQL and SQLite. Haven't bothered checking PostgreSQL yet.
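Stripped of SQLAlchemy, the conversion in `process_bind_param` / `process_result_value` is just the stdlib `uuid` round-trip, which also demonstrates the 16-byte vs 36-character size claim:

```python
import uuid

u = uuid.uuid4()
assert len(u.bytes) == 16   # raw form stored in the BINARY(16) column
assert len(str(u)) == 36    # the usual hex-and-dashes text form

stored = u.bytes                    # what process_bind_param emits
restored = uuid.UUID(bytes=stored)  # what process_result_value rebuilds
assert restored == u
print(restored)
```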
How can I use UUIDs in SQLAlchemy? | 183,042 | 26 | 2008-10-08T14:26:20Z | 5,384,215 | 21 | 2011-03-21T21:51:55Z | [
"python",
"postgresql",
"orm",
"sqlalchemy",
"uuid"
] | Is there a way to define a column (primary key) as a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier) in [SQLAlchemy](http://www.sqlalchemy.org/) if using [PostgreSQL](http://www.postgresql.org/) (Postgres)? | See also the recipe for [Backend-agnostic GUID Type](http://docs.sqlalchemy.org/en/rel_0_9/core/custom_types.html?highlight=guid#backend-agnostic-guid-type) in the SQLAlchemy documentation for column types. |
Is this the best way to get unique version of filename w/ Python? | 183,480 | 13 | 2008-10-08T15:50:21Z | 183,582 | 22 | 2008-10-08T16:13:02Z | [
"python",
"filenames"
] | Still 'diving in' to Python, and want to make sure I'm not overlooking something. I wrote a script that extracts files from several zip files, and saves the extracted files together in one directory. To prevent duplicate filenames from being over-written, I wrote this little function - and I'm just wondering if there is a better way to do this?
Thanks!
```
def unique_filename(file_name):
counter = 1
file_name_parts = os.path.splitext(file_name) # returns ('/path/file', '.ext')
while os.path.isfile(file_name):
file_name = file_name_parts[0] + '_' + str(counter) + file_name_parts[1]
counter += 1
return file_name
```
I really *do* require the files to be in a single directory, and numbering duplicates is definitely acceptable in my case, so I'm not looking for a more robust method (tho' I suppose any pointers are welcome), but just to make sure that what this accomplishes is getting done the right way. | One issue is that there is a race condition in your above code, since there is a gap between testing for existance, and creating the file. There may be security implications to this (think about someone maliciously inserting a symlink to a sensitive file which they wouldn't be able to overwrite, but your program running with a higher privilege could) Attacks like these are why things like os.tempnam() are deprecated.
To get around it, the best approach is to actually try create the file in such a way that you'll get an exception if it fails, and on success, return the actually opened file object. This can be done with the lower level os.open functions, by passing both the os.O\_CREAT and os.O\_EXCL flags. Once opened, return the actual file (and optionally filename) you create. Eg, here's your code modified to use this approach (returning a (file, filename) tuple):
```
def unique_file(file_name):
counter = 1
file_name_parts = os.path.splitext(file_name) # returns ('/path/file', '.ext')
while 1:
try:
fd = os.open(file_name, os.O_CREAT | os.O_EXCL | os.O_RDWR)
return os.fdopen(fd, "r+"), file_name  # "r+" matches the O_RDWR flags
except OSError:
pass
file_name = file_name_parts[0] + '_' + str(counter) + file_name_parts[1]
counter += 1
```
**[Edit]** Actually, a better way, which will handle the above issues for you, is probably to use the tempfile module, though you may lose some control over the naming. Here's an example of using it (keeping a similar interface):
```
def unique_file(file_name):
dirname, filename = os.path.split(file_name)
prefix, suffix = os.path.splitext(filename)
fd, filename = tempfile.mkstemp(suffix, prefix+"_", dirname)
return os.fdopen(fd), filename
>>> f, filename=unique_file('/home/some_dir/foo.txt')
>>> print filename
/home/some_dir/foo_z8f_2Z.txt
```
The only downside with this approach is that you will always get a filename with some random characters in it, as there's no attempt to create an unmodified file (/home/some\_dir/foo.txt) first.
You may also want to look at tempfile.TemporaryFile and NamedTemporaryFile, which will do the above and also automatically delete from disk when closed. |
In Python, what is the difference between '/' and '//' when used for division? | 183,853 | 144 | 2008-10-08T17:16:35Z | 183,866 | 9 | 2008-10-08T17:19:59Z | [
"python",
"math",
"syntax",
"operators"
] | Is there a benefit to using one over the other? They both seem to return the same results.
```
>>> 6/3
2
>>> 6//3
2
``` | `//` implements "floor division", regardless of your type. So
`1.0/2.0` will give `0.5`, but `1/2` and `1//2` will give `0`, and `1.0//2.0` will give `0.0`.
See <https://docs.python.org/whatsnew/2.2.html#pep-238-changing-the-division-operator> for details |
In Python, what is the difference between '/' and '//' when used for division? | 183,853 | 144 | 2008-10-08T17:16:35Z | 183,870 | 210 | 2008-10-08T17:21:37Z | [
"python",
"math",
"syntax",
"operators"
] | Is there a benefit to using one over the other? They both seem to return the same results.
```
>>> 6/3
2
>>> 6//3
2
``` | In Python 3.0, `5 / 2` will return `2.5` and `5 // 2` will return `2`. The former is floating point division, and the latter is floor division, sometimes also called integer division.
In Python 2.2 or later in the 2.x line, there is no difference for integers unless you perform a `from __future__ import division`, which causes Python 2.x to adopt the behavior of 3.0
Regardless of the future import, `5.0 // 2` will return `2.0` since that's the floor division result of the operation.
You can find a detailed description at <https://docs.python.org/whatsnew/2.2.html#pep-238-changing-the-division-operator> |
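The Python 3 behaviour described above is easy to confirm; note also that `//` floors (rounds toward negative infinity) rather than truncating:

```python
# true division always produces a float in Python 3
print(5 / 2)     # 2.5
print(6 / 3)     # 2.0

# floor division follows the operand types
print(5 // 2)    # 2
print(5.0 // 2)  # 2.0

# flooring, not truncation, for negative results
print(-5 // 2)   # -3
```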
In Python, what is the difference between '/' and '//' when used for division? | 183,853 | 144 | 2008-10-08T17:16:35Z | 1,704,753 | 13 | 2009-11-09T23:55:18Z | [
"python",
"math",
"syntax",
"operators"
] | Is there a benefit to using one over the other? They both seem to return the same results.
```
>>> 6/3
2
>>> 6//3
2
``` | As everyone has already answered, `//` is floor division.
Why this is important is that `//` is unambiguously floor division, in all Python versions from 2.2, including Python 3.x versions.
The behavior of `/` can change depending on:
* Active `__future__` import or not (module-local)
* Python command line option, either `-Q old` or `-Q new` |
In Python, what is the difference between '/' and '//' when used for division? | 183,853 | 144 | 2008-10-08T17:16:35Z | 11,604,247 | 25 | 2012-07-22T21:50:35Z | [
"python",
"math",
"syntax",
"operators"
] | Is there a benefit to using one over the other? They both seem to return the same results.
```
>>> 6/3
2
>>> 6//3
2
It helps to clarify that for the Python 2.x line, `/` is neither floor division nor true division. The current accepted answer is not clear on this. `/` is floor division when both args are int, but is true division when either or both of the args are float.
The above tells a lot more truth, and is a lot clearer, than the 2nd paragraph in the accepted answer.
Framework/Language for new web 2.0 sites (2008 and 2009) | 184,049 | 2 | 2008-10-08T18:07:42Z | 184,278 | 9 | 2008-10-08T18:51:26Z | [
"python",
"ruby-on-rails",
"django",
"merb"
] | I know I'll get a thousand "Depends on what you're trying to do" answers, but seriously, there really is no solid information about this online yet. Here are my assumptions - I think they're similar for a lot of people right now:
1. It is now October 2008. I want to start writing an application for January 2009. I am willing to use beta code and such but by January, I'd like a site that doesn't have 'strange' problems. With that said, if a language is simply 10% slower than another, I don't care about those things as long as the issue is linear. My main concern is developer productivity.
2. I'll be using Linux, Apache, MySQL for the application.
3. I want the power to do things like run scp and ftp client functions with stable libraries (I only picked those two because they're not web-related but at the same time represent pretty common network protocols that any larger app might use). Technologies like OpenID and Oauth will be used as well.
4. Experienced web developers are readily available (i.e. I don't have to find people from financial companies and such).
5. Whatever the choice is is common and will be around for a while.
6. Here's a kicker. I'd like to be able to use advanced presentation layer tools/languages similar to HAML, SASS. I definitely want to use jQuery.
7. I will be creating a Facebook app and at some point doing things like dealing with SMS messages, iPhone apps, etc...
At this point, the choices for language are PHP (Cake,Symfony,Zend), Python (Django), Ruby (Merb). I'm really between Django and Merb at this point mostly because everybody else seems to be going that way.
Please don't put any technologies in here that aren't made for mainstream. I know Merb is untested mostly, but their stated goal is a solid platform and it has a lot of momentum behind it so I'm confident that it's workable. Please don't answer with how great Perl is or .Net.
For Future References - these choices were already made:
* Debian (Lenny) - For converting CPU cycles into something useful. Trac
* 0.11 - For Project Management Gliffy - For wireframes and such
* Google Docs/Apps - For documentation, hosted email, etc...
* Amazon ec2/S3 - For hosting, storage.
Cheers,
Adam | Sorry, but your question is wrong. People are probably going to vote me down for this one but I want to say it anyway:
I wouldn't expect to get an objective answer! Why? That's simple:
* All Ruby advocates will tell to use Ruby.
* All Python advocates will tell to use Python.
* All PHP advocates will tell to use PHP.
* Insert additional languages here.
Got the idea?
I recommend you to try each of the languages you mentioned for yourself. At least a few days each. Afterwards you should have a much better foundation to make your final decision.
That said, I would choose Ruby (because I am a Ruby advocate). |
Framework/Language for new web 2.0 sites (2008 and 2009) | 184,049 | 2 | 2008-10-08T18:07:42Z | 184,282 | 16 | 2008-10-08T18:51:52Z | [
"python",
"ruby-on-rails",
"django",
"merb"
] | I know I'll get a thousand "Depends on what you're trying to do" answers, but seriously, there really is no solid information about this online yet. Here are my assumptions - I think they're similar for a lot of people right now:
1. It is now October 2008. I want to start writing an application for January 2009. I am willing to use beta code and such but by January, I'd like a site that doesn't have 'strange' problems. With that said, if a language is simply 10% slower than another, I don't care about those things as long as the issue is linear. My main concern is developer productivity.
2. I'll be using Linux, Apache, MySQL for the application.
3. I want the power to do things like run scp and ftp client functions with stable libraries (I only picked those two because they're not web-related but at the same time represent pretty common network protocols that any larger app might use). Technologies like OpenID and Oauth will be used as well.
4. Experienced web developers are readily available (i.e. I don't have to find people from financial companies and such).
5. Whatever the choice is is common and will be around for a while.
6. Here's a kicker. I'd like to be able to use advanced presentation layer tools/languages similar to HAML, SASS. I definitely want to use jQuery.
7. I will be creating a Facebook app and at some point doing things like dealing with SMS messages, iPhone apps, etc...
At this point, the choices for language are PHP (Cake,Symfony,Zend), Python (Django), Ruby (Merb). I'm really between Django and Merb at this point mostly because everybody else seems to be going that way.
Please don't put any technologies in here that aren't made for mainstream. I know Merb is mostly untested, but their stated goal is a solid platform and it has a lot of momentum behind it, so I'm confident that it's workable. Please don't answer with how great Perl is or .Net.
For Future References - these choices were already made:
* Debian (Lenny) - For converting CPU cycles into something useful.
* Trac 0.11 - For project management
* Gliffy - For wireframes and such
* Google Docs/Apps - For documentation, hosted email, etc...
* Amazon ec2/S3 - For hosting, storage.
Cheers,
Adam | **Django!**
Look up the DjangoCon talks on Google/Youtube - Especially "Reusable Apps" (www.youtube.com/watch?v=A-S0tqpPga4)
I've been using Django for some time, after starting with Ruby/Rails. I found the Django community easier to get into (nicer), the language documented with *excellent* examples, and its modularity is awesome, especially if you're wanting to throw custom components into the mix, and not be forced to use certain things here and there.
I'm sure there are probably ways to be just as flexible with Rails or some such, but I highly encourage you to take a long look at the Django introductions, etc, at <http://www.djangoproject.com/>
Eugene mentioned it's now at 1.0 - and therefore will remain a stable and backward-compatible codebase well through January 2009.
Also, the automatic admin interfaces it builds are *production ready*, and extremely flexible. |
Framework/Language for new web 2.0 sites (2008 and 2009) | 184,049 | 2 | 2008-10-08T18:07:42Z | 188,971 | 7 | 2008-10-09T20:02:49Z | [
"python",
"ruby-on-rails",
"django",
"merb"
] | I know I'll get a thousand "Depends on what you're trying to do" answers, but seriously, there really is no solid information about this online yet. Here are my assumptions - I think they're similar for a lot of people right now:
1. It is now October 2008. I want to start writing an application for January 2009. I am willing to use beta code and such but by January, I'd like a site that doesn't have 'strange' problems. With that said, if a language is simply 10% slower than another, I don't care about those things as long as the issue is linear. My main concern is developer productivity.
2. I'll be using Linux, Apache, MySQL for the application.
3. I want the power to do things like run scp and ftp client functions with stable libraries (I only picked those two because they're not web-related but at the same time represent pretty common network protocols that any larger app might use). Technologies like OpenID and Oauth will be used as well.
4. Experienced web developers are readily available (i.e. I don't have to find people from financial companies and such).
5. Whatever the choice is is common and will be around for a while.
6. Here's a kicker. I'd like to be able to use advanced presentation layer tools/languages similar to HAML, SASS. I definitely want to use jQuery.
7. I will be creating a Facebook app and at some point doing things like dealing with SMS messages, iPhone apps, etc...
At this point, the choices for language are PHP (Cake,Symfony,Zend), Python (Django), Ruby (Merb). I'm really between Django and Merb at this point mostly because everybody else seems to be going that way.
Please don't put any technologies in here that aren't made for mainstream. I know Merb is mostly untested, but their stated goal is a solid platform and it has a lot of momentum behind it, so I'm confident that it's workable. Please don't answer with how great Perl is or .Net.
For Future References - these choices were already made:
* Debian (Lenny) - For converting CPU cycles into something useful.
* Trac 0.11 - For project management
* Gliffy - For wireframes and such
* Google Docs/Apps - For documentation, hosted email, etc...
* Amazon ec2/S3 - For hosting, storage.
Cheers,
Adam | All of them will get the job done.
## Use the one that you and your team are most familiar with
This will have a far greater impact on the delivery times and stability of your app than any of the other variables. |
How do I check out a file from perforce in python? | 184,187 | 13 | 2008-10-08T18:33:08Z | 184,193 | 7 | 2008-10-08T18:33:51Z | [
"python",
"scripting",
"perforce"
] | I would like to write some scripts in python that do some automated changes to source code. If the script determines it needs to change the file I would like to first check it out of perforce. I don't care about checking in because I will always want to build and test first. | Here's what I came up with:
```
import os
def CreateNewChangeList(description):
    "Create a new changelist and returns the changelist number as a string"
    p4in, p4out = os.popen2("p4 changelist -i")
    p4in.write("change: new\n")
    p4in.write("description: " + description)
    p4in.close()
    changelist = p4out.readline().split()[1]
    return changelist

def OpenFileForEdit(file, changelist=""):
    "Open a file for edit; if a changelist is passed in, open it in that list"
    cmd = "p4 edit "
    if changelist:
        cmd += " -c " + changelist + " "
    ret = os.popen(cmd + file).readline().strip()
    if not ret.endswith("opened for edit"):
        print "Couldn't open", file, "for edit:"
        print ret
        raise ValueError
``` |
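As a side note (not part of the original answer): in modern Python, `os.popen2` is gone, so a hedged sketch of the same idea with `subprocess` looks like this. The helper only builds the `p4 edit` argument list; nothing here assumes a live Perforce server.

```python
import subprocess

def p4_edit_command(filename, changelist=None):
    # Build the argument list for "p4 edit"; -c targets a named changelist.
    cmd = ["p4", "edit"]
    if changelist:
        cmd += ["-c", str(changelist)]
    cmd.append(filename)
    return cmd

def open_for_edit(filename, changelist=None):
    # Run the command and return its first line of output, as the original does.
    result = subprocess.run(p4_edit_command(filename, changelist),
                            capture_output=True, text=True)
    lines = result.stdout.splitlines()
    return lines[0].strip() if lines else ""
```

`open_for_edit("scr.c", 1234)` would then shell out to `p4 edit -c 1234 scr.c`.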
How do I check out a file from perforce in python? | 184,187 | 13 | 2008-10-08T18:33:08Z | 184,344 | 20 | 2008-10-08T19:02:38Z | [
"python",
"scripting",
"perforce"
] | I would like to write some scripts in python that do some automated changes to source code. If the script determines it needs to change the file I would like to first check it out of perforce. I don't care about checking in because I will always want to build and test first. | Perforce has Python wrappers around their C/C++ tools, available in binary form for Windows, and source for other platforms:
<http://www.perforce.com/perforce/loadsupp.html#api>
You will find their documentation of the scripting API to be helpful:
<http://www.perforce.com/perforce/doc.current/manuals/p4script/p4script.pdf>
Use of the Python API is quite similar to the command-line client:
```
PythonWin 2.5.1 (r251:54863, May 1 2007, 17:47:05) [MSC v.1310 32 bit (Intel)] on win32.
Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information.
>>> import P4
>>> p4 = P4.P4()
>>> p4.connect() # connect to the default server, with the default clientspec
>>> desc = {"Description": "My new changelist description",
... "Change": "new"
... }
>>> p4.input = desc
>>> p4.run("changelist", "-i")
['Change 2579505 created.']
>>>
```
I'll verify it from the command line:
```
P:\>p4 changelist -o 2579505
# A Perforce Change Specification.
#
# Change: The change number. 'new' on a new changelist.
# Date: The date this specification was last modified.
# Client: The client on which the changelist was created. Read-only.
# User: The user who created the changelist.
# Status: Either 'pending' or 'submitted'. Read-only.
# Description: Comments about the changelist. Required.
# Jobs: What opened jobs are to be closed by this changelist.
# You may delete jobs from this list. (New changelists only.)
# Files: What opened files from the default changelist are to be added
# to this changelist. You may delete files from this list.
# (New changelists only.)
Change: 2579505
Date: 2008/10/08 13:57:02
Client: MYCOMPUTER-DT
User: myusername
Status: pending
Description:
My new changelist description
``` |
Regular expression to match start of filename and filename extension | 185,378 | 9 | 2008-10-08T23:42:27Z | 185,397 | 23 | 2008-10-08T23:48:59Z | [
"python",
"sql",
"regex",
"like"
] | What is the regular expression to match strings (in this case, file names) that start with 'Run' and have a filename extension of '.py'?
The regular expression should match any of the following:
```
RunFoo.py
RunBar.py
Run42.py
```
It should not match:
```
myRunFoo.py
RunBar.py1
Run42.txt
```
The SQL equivalent of what I am looking for is `... LIKE 'Run%.py' ...`. | For a regular expression, you would use:
```
re.match(r'Run.*\.py$', filename)
```
A quick explanation:
* . means match any character.
* \* means match any repetition of the previous character (hence .\* means any sequence of chars)
* \ is an escape to escape the explicit dot
* $ indicates "end of the string", so we don't match "Run\_foo.py.txt"
However, for this task, you're probably better off using simple string methods. ie.
```
filename.startswith("Run") and filename.endswith(".py")
```
Note: if you want case insensitivity (ie. matching "run.PY" as well as "Run.py", use the re.I option to the regular expression, or convert to a specific case (eg filename.lower()) before using string methods. |
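One more option (not from the original answer): since the target is really SQL's `LIKE 'Run%.py'`, the stdlib `fnmatch` module's shell-style wildcards map onto it almost directly, with `*` playing the role of `%`. A small sketch:

```python
import fnmatch
import re

names = ["RunFoo.py", "RunBar.py", "Run42.py",
         "myRunFoo.py", "RunBar.py1", "Run42.txt"]

# Shell-style pattern: * plays the role of SQL's %
matches = [n for n in names if fnmatch.fnmatch(n, "Run*.py")]

# The regex from the answer gives the same result
regex_matches = [n for n in names if re.match(r"Run.*\.py$", n)]
```

Note that `fnmatch.fnmatch` follows the platform's case conventions (case-insensitive on Windows); use `fnmatch.fnmatchcase` if you always want case-sensitive matching.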
Regular expression to match start of filename and filename extension | 185,378 | 9 | 2008-10-08T23:42:27Z | 185,426 | 13 | 2008-10-09T00:01:25Z | [
"python",
"sql",
"regex",
"like"
] | What is the regular expression to match strings (in this case, file names) that start with 'Run' and have a filename extension of '.py'?
The regular expression should match any of the following:
```
RunFoo.py
RunBar.py
Run42.py
```
It should not match:
```
myRunFoo.py
RunBar.py1
Run42.txt
```
The SQL equivalent of what I am looking for is `... LIKE 'Run%.py' ...`. | Warning:
* jobscry's answer ("^Run.?.py$") is incorrect (will not match "Run123.py", for example).
* orlandu63's answer ("/^Run[\w]\*?.py$/") will not match "RunFoo.Bar.py".
(I don't have enough reputation to comment, sorry.) |
Regular expression to match start of filename and filename extension | 185,378 | 9 | 2008-10-08T23:42:27Z | 185,593 | 11 | 2008-10-09T01:27:29Z | [
"python",
"sql",
"regex",
"like"
] | What is the regular expression to match strings (in this case, file names) that start with 'Run' and have a filename extension of '.py'?
The regular expression should match any of the following:
```
RunFoo.py
RunBar.py
Run42.py
```
It should not match:
```
myRunFoo.py
RunBar.py1
Run42.txt
```
The SQL equivalent of what I am looking for is `... LIKE 'Run%.py' ...`. | I don't really understand why you're after a regular expression to solve this 'problem'. You're just after a way to find all .py files that start with 'Run'. So this is a simple solution that will work, without resorting to compiling an running a regular expression:
```
import os
for filename in os.listdir(dirname):
    root, ext = os.path.splitext(filename)
    if root.startswith('Run') and ext == '.py':
        print filename
``` |
Delete Folder Contents in Python | 185,936 | 171 | 2008-10-09T04:22:33Z | 185,941 | 153 | 2008-10-09T04:27:21Z | [
"python",
"file",
"local",
"delete-directory"
] | How can I delete the contents of a local folder in Python?
The current project is for Windows but I would like to see \*nix also. | Updated to only delete files and to used the os.path.join() method suggested in the comments. If you also want to remove subdirectories, uncomment the elif statement.
```
import os, shutil
folder = '/path/to/folder'
for the_file in os.listdir(folder):
    file_path = os.path.join(folder, the_file)
    try:
        if os.path.isfile(file_path):
            os.unlink(file_path)
        #elif os.path.isdir(file_path): shutil.rmtree(file_path)
    except Exception as e:
        print(e)
``` |
Delete Folder Contents in Python | 185,936 | 171 | 2008-10-09T04:22:33Z | 186,049 | 8 | 2008-10-09T05:52:54Z | [
"python",
"file",
"local",
"delete-directory"
] | How can I delete the contents of a local folder in Python?
The current project is for Windows but I would like to see \*nix also. | You might be better off using `os.walk()` for this.
`os.listdir()` doesn't distinguish files from directories and you will quickly get into trouble trying to unlink these. There is a good example of using `os.walk()` to recursively remove a directory [here](http://docs.python.org/library/os.html#os.walk), and hints on how to adapt it to your circumstances. |
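A minimal sketch of what this answer describes (my code, not the linked example): walking bottom-up with `topdown=False` so each directory is already empty by the time it is removed.

```python
import os

def delete_contents(folder):
    # Walk bottom-up so directories are empty by the time we rmdir them.
    for root, dirs, files in os.walk(folder, topdown=False):
        for name in files:
            os.unlink(os.path.join(root, name))
        for name in dirs:
            os.rmdir(os.path.join(root, name))
```

This removes everything inside `folder` but leaves `folder` itself in place, which is what the question asks for.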
Delete Folder Contents in Python | 185,936 | 171 | 2008-10-09T04:22:33Z | 186,236 | 125 | 2008-10-09T07:18:35Z | [
"python",
"file",
"local",
"delete-directory"
] | How can I delete the contents of a local folder in Python?
The current project is for Windows but I would like to see \*nix also. | Try the shutil module
```
import shutil
shutil.rmtree('/path/to/folder')
```
> Description: `shutil.rmtree(path,
> ignore_errors=False, onerror=None)`
>
> Docstring: Recursively delete a
> directory tree.
>
> If `ignore_errors` is set, errors are
> ignored; otherwise, if `onerror` is set,
> it is called to handle the error with
> arguments `(func, path, exc_info)` where
> `func` is `os.listdir`, `os.remove`, or
> `os.rmdir`; path is the argument to that
> function that caused it to fail; and
> `exc_info` is a tuple returned by
> `sys.exc_info()`. If `ignore_errors` is
> false and `onerror` is `None`, an
> exception is raised. |
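Note that `rmtree` deletes the folder itself, while the question asks only for its contents. A common hedged pattern (my addition, not from the answer) is to remove the tree and recreate the empty directory, at the cost of losing the directory's original permissions and ownership:

```python
import os
import shutil

def clear_folder(folder):
    # rmtree deletes the directory itself, so recreate it afterwards.
    # Caveat: the recreated directory gets default permissions/ownership.
    shutil.rmtree(folder)
    os.makedirs(folder)
```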
Delete Folder Contents in Python | 185,936 | 171 | 2008-10-09T04:22:33Z | 1,073,382 | 50 | 2009-07-02T09:25:56Z | [
"python",
"file",
"local",
"delete-directory"
] | How can I delete the contents of a local folder in Python?
The current project is for Windows but I would like to see \*nix also. | Expanding on mhawke's answer this is what I've implemented. It removes all the content of a folder but not the folder itself. Tested on Linux with files, folders and symbolic links, should work on Windows as well.
```
import os
import shutil
for root, dirs, files in os.walk('/path/to/folder'):
for f in files:
os.unlink(os.path.join(root, f))
for d in dirs:
shutil.rmtree(os.path.join(root, d))
``` |
Delete Folder Contents in Python | 185,936 | 171 | 2008-10-09T04:22:33Z | 5,756,937 | 81 | 2011-04-22T15:23:45Z | [
"python",
"file",
"local",
"delete-directory"
] | How can I delete the contents of a local folder in Python?
The current project is for Windows but I would like to see \*nix also. | You can simply do this :
```
import os
import glob
files = glob.glob('/YOUR/PATH/*')
for f in files:
    os.remove(f)
```
You can of course use another filter in your path, for example: `/YOUR/PATH/*.txt` to remove all text files in a directory.
Delete Folder Contents in Python | 185,936 | 171 | 2008-10-09T04:22:33Z | 6,615,332 | 38 | 2011-07-07T18:25:47Z | [
"python",
"file",
"local",
"delete-directory"
] | How can I delete the contents of a local folder in Python?
The current project is for Windows but I would like to see \*nix also. | Using `rmtree` and recreating the folder could work, but I have run into errors when deleting and immediately recreating folders on network drives.
The proposed solution using walk does not work as it uses `rmtree` to remove folders and then may attempt to use `os.unlink` on the files that were previously in those folders. This causes an error.
The posted `glob` solution will also attempt to delete non-empty folders, causing errors.
I suggest you use:
```
folder_path = '/path/to/folder'
for file_object in os.listdir(folder_path):
file_object_path = os.path.join(folder_path, file_object)
if os.path.isfile(file_object_path):
os.unlink(file_object_path)
else:
shutil.rmtree(file_object_path)
``` |
Delete Folder Contents in Python | 185,936 | 171 | 2008-10-09T04:22:33Z | 20,173,900 | 10 | 2013-11-24T11:22:15Z | [
"python",
"file",
"local",
"delete-directory"
] | How can I delete the contents of a local folder in Python?
The current project is for Windows but I would like to see \*nix also. | As a oneliner:
```
import os
# Python 2.7
map( os.unlink, (os.path.join( mydir,f) for f in os.listdir(mydir)) )
# Python 3+
list( map( os.unlink, (os.path.join( mydir,f) for f in os.listdir(mydir)) ) )
``` |
What is the best way to open a file for exclusive access in Python? | 186,202 | 29 | 2008-10-09T06:58:54Z | 186,464 | 16 | 2008-10-09T09:00:09Z | [
"python",
"file",
"locking"
] | What is the most elegant way to solve this:
* open a file for reading, but only if it is not already opened for writing
* open a file for writing, but only if it is not already opened for reading or writing
The built-in functions work like this
```
>>> path = r"c:\scr.txt"
>>> file1 = open(path, "w")
>>> print file1
<open file 'c:\scr.txt', mode 'w' at 0x019F88D8>
>>> file2 = open(path, "w")
>>> print file2
<open file 'c:\scr.txt', mode 'w' at 0x02332188>
>>> file1.write("111")
>>> file2.write("222")
>>> file1.close()
```
scr.txt now contains '111'.
```
>>> file2.close()
```
scr.txt was overwritten and now contains '222' (on Windows, Python 2.4).
The solution should work inside the same process (like in the example above) as well as when another process has opened the file.
It is preferred, if a crashing program will not keep the lock open. | I don't think there is a fully crossplatform way. On unix, the fcntl module will do this for you. However on windows (which I assume you are by the paths), you'll need to use the win32file module.
Fortunately, there is a portable implementation ([portalocker](https://github.com/WoLpH/portalocker)) using the platform appropriate method at the python cookbook.
To use it, open the file, and then call:
```
portalocker.lock(file, flags)
```
where flags are portalocker.LOCK\_EX for exclusive write access, or LOCK\_SH for shared, read access. |
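For the Unix side mentioned above, a hedged, Unix-only sketch with the stdlib `fcntl` module (my addition — portalocker wraps essentially this on Unix):

```python
import fcntl

def lock_exclusive(fileobj):
    # Block until we hold an exclusive (write) lock on the open file.
    fcntl.flock(fileobj.fileno(), fcntl.LOCK_EX)

def unlock(fileobj):
    fcntl.flock(fileobj.fileno(), fcntl.LOCK_UN)
```

These are advisory locks: they only exclude other processes that also ask for the lock, and the kernel releases them automatically if the program crashes, which matches the "crashing program will not keep the lock" requirement.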
Python - How to use Conch to create a Virtual SSH server | 186,316 | 5 | 2008-10-09T07:54:12Z | 189,452 | 7 | 2008-10-09T22:30:29Z | [
"python",
"twisted"
] | I'm looking at creating a server in python that I can run, and will work as an SSH server. This will then let different users login, and act as if they'd logged in normally, but only had access to one command.
I want to do this so that I can have a system where I can add users to without having to create a system wide account, so that they can then, for example, commit to a VCS branch, or similar.
While I can work out how to do this with Conch to get it to a "custom" shell... I can't figure out how to make it so that the SSH stream works as if it were a real one (I'd preferably want to limit it to /bin/bzr so that bzr+ssh will work).
It needs to be in Python (which I can get to do the authorisation), but I don't know how to do the linking to the app.
This needs to be in Python to work within the app it's designed for, and so it can be used by people who don't have the access needed to add new system users | When you write a Conch server, you can control what happens when the client makes a shell request by implementing [`ISession.openShell`](http://twistedmatrix.com/trac/browser/trunk/twisted/conch/interfaces.py?rev=24441#L62). The Conch server will request [`IConchUser`](http://twistedmatrix.com/trac/browser/trunk/twisted/conch/interfaces.py?rev=24441#L6) from your realm and then adapt the resulting avatar to `ISession` to call `openShell` on it if necessary.
`ISession.openShell`'s job is to take the transport object passed to it and associate it with a protocol to interpret the bytes received from it and, if desired, to write bytes to it to be sent to the client.
In an unfortunate twist, the object passed to `openShell` which represents the transport is actually an [`IProcessProtocol`](http://twistedmatrix.com/trac/browser/trunk/twisted/internet/interfaces.py?rev=24441#L1028) provider. This means that you need to call [`makeConnection`](http://twistedmatrix.com/trac/browser/trunk/twisted/internet/interfaces.py?rev=24441#L1033) on it, passing an [`IProcessTransport`](http://twistedmatrix.com/trac/browser/trunk/twisted/internet/interfaces.py?rev=24441#L1261) provider. When data is received from the client, the `IProcessProtocol` will call [`writeToChild`](http://twistedmatrix.com/trac/browser/trunk/twisted/internet/interfaces.py?rev=24441#L1282) on the transport you pass to `makeConnection`. When you want to send data to the client, you should call [`childDataReceived`](http://twistedmatrix.com/trac/browser/trunk/twisted/internet/interfaces.py?rev=24441#L1043) on it.
To see the exact behavior, I suggest reading [the implementation of the `IProcessProtocol` that is passed in](http://twistedmatrix.com/trac/browser/trunk/twisted/conch/ssh/session.py?rev=24441#L173). Don't depend on anything that's not part of `IProcessProtocol`, but seeing the implementation can make it easier to understand what's going on.
You may also want to look at [the implementation of the normal shell-creation](http://twistedmatrix.com/trac/browser/trunk/twisted/conch/unix.py?rev=24441#L174) to get a sense of what you're aiming for. This will give you a clue about how to associate the stdio of the bzr child process you launch with the SSH channel. |
Splitting a semicolon-separated string to a dictionary, in Python | 186,857 | 56 | 2008-10-09T11:38:22Z | 186,873 | 81 | 2008-10-09T11:43:42Z | [
"python",
"string",
"dictionary",
"split"
] | I have a string that looks like this:
```
"Name1=Value1;Name2=Value2;Name3=Value3"
```
Is there a built-in class/function in Python that will take that string and construct a dictionary, as though I had done this:
```
dict = {
"Name1": "Value1",
"Name2": "Value2",
"Name3": "Value3"
}
```
I have looked through the modules available but can't seem to find anything that matches.
---
Thanks, I do know how to make the relevant code myself, but since such smallish solutions are usually mine-fields waiting to happen (ie. someone writes: Name1='Value1=2';) etc. then I usually prefer some pre-tested function.
I'll do it myself then. | There's no builtin, but you can accomplish this fairly simply with a generator comprehension:
```
s= "Name1=Value1;Name2=Value2;Name3=Value3"
dict(item.split("=") for item in s.split(";"))
```
**[Edit]** From your update you indicate you may need to handle quoting. This does complicate things, depending on what the exact format you are looking for is (what quote chars are accepted, what escape chars etc). You may want to look at the csv module to see if it can cover your format. Here's an example: (Note that the API is a little clunky for this example, as CSV is designed to iterate through a sequence of records, hence the .next() calls I'm making to just look at the first line. Adjust to suit your needs):
```
>>> s = "Name1='Value=2';Name2=Value2;Name3=Value3"
>>> dict(csv.reader([item], delimiter='=', quotechar="'").next()
...          for item in csv.reader([s], delimiter=';', quotechar="'").next())
{'Name2': 'Value2', 'Name3': 'Value3', 'Name1': 'Value1=2'}
```
Depending on the exact structure of your format, you may need to write your own simple parser however. |
Configuration file with list of key-value pairs in python | 186,916 | 25 | 2008-10-09T11:55:48Z | 186,990 | 8 | 2008-10-09T12:13:27Z | [
"python",
"configuration",
"serialization"
] | I have a python script that analyzes a set of error messages and checks for each message if it matches a certain pattern (regular expression) in order to group these messages. For example "file x does not exist" and "file y does not exist" would match "file .\* does not exist" and be accounted as two occurrences of "file not found" category.
As the number of patterns and categories is growing, I'd like to put these couples "regular expression/display string" in a configuration file, basically a dictionary serialization of some sort.
I would like this file to be editable by hand, so I'm discarding any form of binary serialization, and also I'd rather not resort to xml serialization to avoid problems with characters to escape (& <> and so on...).
Do you have any idea of what could be a good way of accomplishing this?
Update: thanks to Daren Thomas and Federico Ramponi, but I cannot have an external python file with possibly arbitrary code. | I've heard that [ConfigObj](http://www.voidspace.org.uk/python/configobj.html "ConfigObj") is easier to work with than ConfigParser. It is used by a lot of big projects, IPython, Trac, Turbogears, etc...
From their [introduction](http://www.voidspace.org.uk/python/configobj.html#introduction):
ConfigObj is a simple but powerful config file reader and writer: an ini file round tripper. Its main feature is that it is very easy to use, with a straightforward programmer's interface and a simple syntax for config files. It has lots of other features though :
* Nested sections (subsections), to any level
* List values
* Multiple line values
* String interpolation (substitution)
* Integrated with a powerful validation system
+ including automatic type checking/conversion
+ repeated sections
+ and allowing default values
* When writing out config files, ConfigObj preserves all comments and the order of members and sections
* Many useful methods and options for working with configuration files (like the 'reload' method)
* Full Unicode support |
Configuration file with list of key-value pairs in python | 186,916 | 25 | 2008-10-09T11:55:48Z | 187,045 | 36 | 2008-10-09T12:32:37Z | [
"python",
"configuration",
"serialization"
] | I have a python script that analyzes a set of error messages and checks for each message if it matches a certain pattern (regular expression) in order to group these messages. For example "file x does not exist" and "file y does not exist" would match "file .\* does not exist" and be accounted as two occurrences of "file not found" category.
As the number of patterns and categories is growing, I'd like to put these couples "regular expression/display string" in a configuration file, basically a dictionary serialization of some sort.
I would like this file to be editable by hand, so I'm discarding any form of binary serialization, and also I'd rather not resort to xml serialization to avoid problems with characters to escape (& <> and so on...).
Do you have any idea of what could be a good way of accomplishing this?
Update: thanks to Daren Thomas and Federico Ramponi, but I cannot have an external python file with possibly arbitrary code. | I sometimes just write a python module (i.e. file) called `config.py` or something with following contents:
```
config = {
'name': 'hello',
'see?': 'world'
}
```
this can then be 'read' like so:
```
from config import config
config['name']
config['see?']
```
easy. |
Configuration file with list of key-value pairs in python | 186,916 | 25 | 2008-10-09T11:55:48Z | 187,628 | 35 | 2008-10-09T14:57:08Z | [
"python",
"configuration",
"serialization"
] | I have a python script that analyzes a set of error messages and checks for each message if it matches a certain pattern (regular expression) in order to group these messages. For example "file x does not exist" and "file y does not exist" would match "file .\* does not exist" and be accounted as two occurrences of "file not found" category.
As the number of patterns and categories is growing, I'd like to put these couples "regular expression/display string" in a configuration file, basically a dictionary serialization of some sort.
I would like this file to be editable by hand, so I'm discarding any form of binary serialization, and also I'd rather not resort to xml serialization to avoid problems with characters to escape (& <> and so on...).
Do you have any idea of what could be a good way of accomplishing this?
Update: thanks to Daren Thomas and Federico Ramponi, but I cannot have an external python file with possibly arbitrary code. | You have two decent options:
1. Python standard config file format
using [ConfigParser](http://docs.python.org/lib/module-ConfigParser.html)
2. [YAML](http://www.yaml.org/) using a library like [PyYAML](http://pyyaml.org/)
The standard Python configuration files look like INI files with `[sections]` and `key : value` or `key = value` pairs. The advantages to this format are:
* No third-party libraries necessary
* Simple, familiar file format.
YAML is different in that it is designed to be a human friendly data serialization format rather than specifically designed for configuration. It is very readable and gives you a couple different ways to represent the same data. For your problem, you could create a YAML file that looks like this:
```
file .* does not exist : file not found
user .* not found : authorization error
```
Or like this:
```
{ file .* does not exist: file not found,
user .* not found: authorization error }
```
Using PyYAML couldn't be simpler:
```
import yaml
errors = yaml.load(open('my.yaml'))
```
At this point `errors` is a Python dictionary with the expected format. YAML is capable of representing more than dictionaries: if you prefer a list of pairs, use this format:
```
-
  - file .* does not exist
  - file not found
-
  - user .* not found
  - authorization error
Or
```
[ [file .* does not exist, file not found],
[user .* not found, authorization error]]
```
Which will produce a list of lists when `yaml.load` is called.
One advantage of YAML is that you could use it to export your existing, hard-coded data out to a file to create the initial version, rather than cut/paste plus a bunch of find/replace to get the data into the right format.
The YAML format will take a little more time to get familiar with, but using PyYAML is even simpler than using ConfigParser with the advantage is that you have more options regarding how your data is represented using YAML.
Either one sounds like it will fit your current needs; ConfigParser will be easier to start with, while YAML gives you more flexibility in the future, if your needs expand.
Best of luck! |
Base-2 (Binary) Representation Using Python | 187,273 | 4 | 2008-10-09T13:38:32Z | 187,536 | 13 | 2008-10-09T14:29:59Z | [
"python"
] | Building on [How Do You Express Binary Literals in Python](http://stackoverflow.com/questions/1476/how-do-you-express-binary-literals-in-python#13107), I was thinking about sensible, intuitive ways to do that Programming 101 chestnut of displaying integers in base-2 form. This is the best I came up with, but I'd like to replace it with a better algorithm, or at least one that should have screaming-fast performance.
```
def num_bin(N, places=8):
    def bit_at_p(N, p):
        ''' find the bit at place p for number n '''
        two_p = 1 << p  # 2 ^ p, using bitshift, will have exactly one
                        # bit set, at place p
        x = N & two_p   # binary composition, will be one where *both* numbers
                        # have a 1 at that bit. this can only happen
                        # at position p. will yield two_p if N has a 1 at
                        # bit p
        return int(x > 0)
    bits = (bit_at_p(N, x) for x in xrange(places))
    return "".join(str(x) for x in bits)
    # or, more concisely
    # return "".join([str(int((N & 1 << x) > 0)) for x in xrange(places)])
``` | For best efficiency, you generally want to process more than a single bit at a time.
You can use a simple method to get a fixed width binary representation. eg.
```
def _bin(x, width):
    return ''.join(str((x>>i)&1) for i in xrange(width-1,-1,-1))
```
\_bin(x, 8) will now give a zero padded representation of x's lower 8 bits. This can be used to build a lookup table, allowing your converter to process 8 bits at a time (or more if you want to devote the memory to it).
```
_conv_table = [_bin(x,8) for x in range(256)]
```
Then you can use this in your real function, stripping off leading zeroes when returning it. I've also added handling for signed numbers, as without it you will get an infinite loop (Negative integers conceptually have an infinite number of set sign bits.)
```
def bin(x):
    if x == 0:
        return '0'  # Special case: Don't strip leading zero if no other digits
    elif x < 0:
        sign = '-'
        x *= -1
    else:
        sign = ''
    l = []
    while x:
        l.append(_conv_table[x & 0xff])
        x >>= 8
    return sign + ''.join(reversed(l)).lstrip("0")
```
[Edit] Changed code to handle signed integers.
[Edit2] Here are some timing figures of the various solutions. bin is the function above, constantin\_bin is from [Constantin's answer](http://stackoverflow.com/questions/187273/base-2-binary-representation-using-python#189579) and num\_bin is the original version. Out of curiosity, I also tried a 16 bit lookup table variant of the above (bin16 below), and tried out Python3's builtin bin() function. All timings were for 100000 runs using an 01010101 bit pattern.
```
Num Bits: 8 16 32 64 128 256
---------------------------------------------------------------------
bin 0.544 0.586 0.744 1.942 1.854 3.357
bin16 0.542 0.494 0.592 0.773 1.150 1.886
constantin_bin 2.238 3.803 7.794 17.869 34.636 94.799
num_bin 3.712 5.693 12.086 32.566 67.523 128.565
Python3's bin 0.079 0.045 0.062 0.069 0.212 0.201
```
As you can see, when processing long values using large chunks really pays off, but nothing beats the low-level C code of python3's builtin (which bizarrely seems consistently faster at 256 bits than 128!). Using a 16 bit lookup table improves things, but probably isn't worth it unless you really need it, as it uses up a large chunk of memory, and can introduce a small but noticeable startup delay to precompute the table. |
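For readers on Python 3: the same 8-bit table approach carries over almost unchanged (`range` replaces `xrange`), and the result is easy to sanity-check against the builtin `format(x, 'b')`. A sketch, with function names of my own choosing:

```python
def _bin_fixed(x, width):
    # Zero-padded binary representation of x's lowest `width` bits.
    return ''.join(str((x >> i) & 1) for i in range(width - 1, -1, -1))

_CONV_TABLE = [_bin_fixed(x, 8) for x in range(256)]

def bin_fast(x):
    # Convert 8 bits at a time via the precomputed table.
    if x == 0:
        return '0'
    sign = ''
    if x < 0:
        sign, x = '-', -x
    chunks = []
    while x:
        chunks.append(_CONV_TABLE[x & 0xff])
        x >>= 8
    return sign + ''.join(reversed(chunks)).lstrip('0')
```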
Counting array elements in Python | 187,455 | 64 | 2008-10-09T14:12:55Z | 187,463 | 156 | 2008-10-09T14:14:56Z | [
"python",
"arrays"
] | How can I count the number of elements in an array, because contrary to logic array.count(string) does not count all the elements in the array, it just searches for the number of occurrences of string. | The method len() returns the number of elements in the list.
Syntax:
```
len(myArray)
```
Eg:
```
myArray = [1, 2, 3]
len(myArray)
```
Output:
```
3
``` |
Counting array elements in Python | 187,455 | 64 | 2008-10-09T14:12:55Z | 188,867 | 23 | 2008-10-09T19:40:04Z | [
"python",
"arrays"
] | How can I count the number of elements in an array, because contrary to logic array.count(string) does not count all the elements in the array, it just searches for the number of occurrences of string. | `len` is a built-in function that calls the given container object's `__len__` member function to get the number of elements in the object.
Functions encased with double underscores are usually "special methods" implementing one of the standard interfaces in Python (container, number, etc). Special methods are used via syntactic sugar (object creation, container indexing and slicing, attribute access, built-in functions, etc.).
Using `obj.__len__()` wouldn't be the correct way of using the special method, but I don't see why the others were modded down so much. |
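To make the delegation concrete: as soon as a class defines `__len__`, the builtin `len()` works on its instances. A minimal illustration (the class is invented for the example):

```python
class Playlist(object):
    def __init__(self, tracks):
        self._tracks = list(tracks)

    def __len__(self):
        # len(playlist) delegates to this special method
        return len(self._tracks)

print(len(Playlist(['intro', 'verse', 'chorus'])))  # 3
```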
How do I capture an mp3 stream with python | 187,552 | 11 | 2008-10-09T14:36:02Z | 187,563 | 15 | 2008-10-09T14:40:25Z | [
"python",
"streaming"
] | What's the best way of capturing an mp3 stream coming off of http and saving it to disk with python?
Thus far I've tried
```
target = open(target_path, "w")
conn = urllib.urlopen(stream_url)
while True:
target.write(conn.read(buf_size))
```
This gives me data but its garbled or wont play in mp3 players. | If you're on Windows, you might accidentally be doing CRLF conversions, corrupting the binary data. Try opening `target` in binary mode:
```
target = open(target_path, "wb")
``` |
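It's also worth noting that the loop in the question never terminates: `read()` returns an empty string at the end of the stream, and that's the usual stop condition. A sketch combining both fixes; it takes any file-like object, so with urllib you'd pass in the result of `urlopen(stream_url)`:

```python
def save_stream(conn, target_path, buf_size=8192):
    # "wb" avoids CRLF mangling on Windows; an empty chunk means
    # the stream is finished.
    with open(target_path, "wb") as target:
        while True:
            chunk = conn.read(buf_size)
            if not chunk:
                break
            target.write(chunk)
```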
How to make a python, command-line program autocomplete arbitrary things NOT interpreter | 187,621 | 61 | 2008-10-09T14:55:02Z | 187,660 | 38 | 2008-10-09T15:01:38Z | [
"python",
"linux",
"unix",
"command-line",
"autocomplete"
] | I am aware of how to setup autocompletion of python objects in the python interpreter (on unix).
* Google shows many hits for explanations on how to do this.
* Unfortunately, there are so many references to that it is difficult to find what I need to do, which is slightly different.
I need to know how to enable tab/auto-completion of arbitrary items in a command-line program written in python.
My specific use case is a command-line python program that needs to send emails. I want to be able to autocomplete email addresses (I have the addresses on disk) when the user types part of it (and optionally presses the TAB key).
I do not need it to work on windows or mac, just linux. | Use Python's `readline` bindings. For example,
```
import readline
def completer(text, state):
options = [i for i in commands if i.startswith(text)]
if state < len(options):
return options[state]
else:
return None
readline.parse_and_bind("tab: complete")
readline.set_completer(completer)
```
The official [module docs](http://docs.python.org/lib/module-readline.html) aren't much more detailed, see the [readline docs](http://tiswww.case.edu/php/chet/readline/readline.html#SEC44) for more info. |
How to make a python, command-line program autocomplete arbitrary things NOT interpreter | 187,621 | 61 | 2008-10-09T14:55:02Z | 187,701 | 46 | 2008-10-09T15:08:18Z | [
"python",
"linux",
"unix",
"command-line",
"autocomplete"
] | I am aware of how to setup autocompletion of python objects in the python interpreter (on unix).
* Google shows many hits for explanations on how to do this.
* Unfortunately, there are so many references to that it is difficult to find what I need to do, which is slightly different.
I need to know how to enable tab/auto-completion of arbitrary items in a command-line program written in python.
My specific use case is a command-line python program that needs to send emails. I want to be able to autocomplete email addresses (I have the addresses on disk) when the user types part of it (and optionally presses the TAB key).
I do not need it to work on windows or mac, just linux. | Follow the [cmd documentation](http://docs.python.org/library/cmd.html#cmd.Cmd.cmdloop) and you'll be fine
```
import cmd
addresses = [
'[email protected]',
'[email protected]',
'[email protected]',
]
class MyCmd(cmd.Cmd):
def do_send(self, line):
pass
def complete_send(self, text, line, start_index, end_index):
if text:
return [
address for address in addresses
if address.startswith(text)
]
else:
return addresses
if __name__ == '__main__':
my_cmd = MyCmd()
my_cmd.cmdloop()
```
Output for tab -> tab -> send -> tab -> tab -> f -> tab
```
(Cmd)
help send
(Cmd) send
[email protected] [email protected] [email protected]
(Cmd) send [email protected]
(Cmd)
``` |
How to make a python, command-line program autocomplete arbitrary things NOT interpreter | 187,621 | 61 | 2008-10-09T14:55:02Z | 197,158 | 19 | 2008-10-13T09:59:23Z | [
"python",
"linux",
"unix",
"command-line",
"autocomplete"
] | I am aware of how to setup autocompletion of python objects in the python interpreter (on unix).
* Google shows many hits for explanations on how to do this.
* Unfortunately, there are so many references to that it is difficult to find what I need to do, which is slightly different.
I need to know how to enable tab/auto-completion of arbitrary items in a command-line program written in python.
My specific use case is a command-line python program that needs to send emails. I want to be able to autocomplete email addresses (I have the addresses on disk) when the user types part of it (and optionally presses the TAB key).
I do not need it to work on windows or mac, just linux. | Since you say "NOT interpreter" in your question, I guess you don't want answers involving python readline and suchlike. (***edit***: in hindsight, that's obviously not the case. Ho hum. I think this info is interesting anyway, so I'll leave it here.)
I think you might be after [this](http://www.debian-administration.org/articles/317).
It's about adding shell-level completion to arbitrary commands, extending bash's own tab-completion.
In a nutshell, you'll create a file containing a shell-function that will generate possible completions, save it into `/etc/bash_completion.d/` and register it with the command `complete`. Here's a snippet from the linked page:
```
_foo()
{
local cur prev opts
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
opts="--help --verbose --version"
if [[ ${cur} == -* ]] ; then
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
return 0
fi
}
complete -F _foo foo
```
In this case, typing `foo --[TAB]` will give you the values in the variable `opts`, i.e. `--help`, `--verbose` and `--version`. For your purposes, you'll essentially want to customise the values that are put into `opts`.
Do have a look at the example on the linked page, it's all pretty straightforward. |
How to make a python, command-line program autocomplete arbitrary things NOT interpreter | 187,621 | 61 | 2008-10-09T14:55:02Z | 209,915 | 10 | 2008-10-16T19:26:31Z | [
"python",
"linux",
"unix",
"command-line",
"autocomplete"
] | I am aware of how to setup autocompletion of python objects in the python interpreter (on unix).
* Google shows many hits for explanations on how to do this.
* Unfortunately, there are so many references to that it is difficult to find what I need to do, which is slightly different.
I need to know how to enable tab/auto-completion of arbitrary items in a command-line program written in python.
My specific use case is a command-line python program that needs to send emails. I want to be able to autocomplete email addresses (I have the addresses on disk) when the user types part of it (and optionally presses the TAB key).
I do not need it to work on windows or mac, just linux. | Here is a full-working version of the code that was very supplied by ephemient [here](http://stackoverflow.com/questions/187621/how-to-make-a-python-command-line-program-autocomplete-arbitrary-things-not-int#187660) (thank you).
```
import readline
addrs = ['[email protected]', '[email protected]', '[email protected]']
def completer(text, state):
options = [x for x in addrs if x.startswith(text)]
try:
return options[state]
except IndexError:
return None
readline.set_completer(completer)
readline.parse_and_bind("tab: complete")
while 1:
a = raw_input("> ")
print "You entered", a
``` |
How to make a python, command-line program autocomplete arbitrary things NOT interpreter | 187,621 | 61 | 2008-10-09T14:55:02Z | 23,959,790 | 12 | 2014-05-30T16:59:54Z | [
"python",
"linux",
"unix",
"command-line",
"autocomplete"
] | I am aware of how to setup autocompletion of python objects in the python interpreter (on unix).
* Google shows many hits for explanations on how to do this.
* Unfortunately, there are so many references to that it is difficult to find what I need to do, which is slightly different.
I need to know how to enable tab/auto-completion of arbitrary items in a command-line program written in python.
My specific use case is a command-line python program that needs to send emails. I want to be able to autocomplete email addresses (I have the addresses on disk) when the user types part of it (and optionally presses the TAB key).
I do not need it to work on windows or mac, just linux. | I am surprised that nobody has mentioned argcomplete, here is an example from the docs:
```
from argcomplete.completers import ChoicesCompleter
parser.add_argument("--protocol", choices=('http', 'https', 'ssh', 'rsync', 'wss'))
parser.add_argument("--proto").completer=ChoicesCompleter(('http', 'https', 'ssh', 'rsync', 'wss'))
``` |
Reading/Writing MS Word files in Python | 188,444 | 15 | 2008-10-09T18:06:51Z | 7,848,324 | 34 | 2011-10-21T10:45:37Z | [
"python",
"ms-word",
"read-write"
] | Is it possible to read and write Word (2003 and 2007) files in Python without using a COM object?
I know that I can:
```
f = open('c:\file.doc', "w")
f.write(text)
f.close()
```
but Word will read it as an HTML file not a native .doc file. | See [python-docx](https://github.com/python-openxml/python-docx), its official documentation is available [here](https://python-docx.readthedocs.org/en/latest/).
This has worked very well for me. |
How can I, in python, iterate over multiple 2d lists at once, cleanly? | 189,087 | 22 | 2008-10-09T20:32:13Z | 189,111 | 10 | 2008-10-09T20:39:29Z | [
"python"
] | If I'm making a simple grid based game, for example, I might have a few 2d lists. One might be for terrain, another might be for objects, etc. Unfortunately, when I need to iterate over the lists and have the contents of a square in one list affect part of another list, I have to do something like this.
```
for i in range(len(alist)):
for j in range(len(alist[i])):
if alist[i][j].isWhatever:
blist[i][j].doSomething()
```
Is there a nicer way to do something like this? | You could zip them. ie:
```
for a_row,b_row in zip(alist, blist):
for a_item, b_item in zip(a_row,b_row):
if a_item.isWhatever:
b_item.doSomething()
```
However the overhead of zipping and iterating over the items may be higher than your original method if you rarely actually use the b\_item (i.e. a\_item.isWhatever is usually False). You could use itertools.izip instead of zip to reduce the memory impact of this, but it's still probably going to be slightly slower unless you always need the b\_item.
Alternatively, consider using a 3D list instead, so terrain for cell i,j is at l[i][j][0], objects at l[i][j][1] etc, or even combine the objects so you can do a[i][j].terrain, a[i][j].object etc.
[Edit] [DzinX's timings](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189497) actually show that the impact of the extra check for b\_item isn't really significant, next to the performance penalty of re-looking up by index, so the above (using izip) seems to be fastest.
I've now given a quick test for the 3d approach as well, and it seems faster still, so if you can store your data in that form, it could be both simpler and faster to access. Here's an example of using it:
```
# Initialise 3d list:
alist = [ [[A(a_args), B(b_args)] for i in xrange(WIDTH)] for j in xrange(HEIGHT)]
# Process it:
for row in alist:
for a,b in row:
if a.isWhatever():
b.doSomething()
```
Here are my timings for 10 loops using a 1000x1000 array, with various proportions of isWhatever being true are:
```
( Chance isWhatever is True )
Method 100% 50% 10% 1%
3d 3.422 2.151 1.067 0.824
izip 3.647 2.383 1.282 0.985
original 5.422 3.426 1.891 1.534
``` |
How can I, in python, iterate over multiple 2d lists at once, cleanly? | 189,087 | 22 | 2008-10-09T20:32:13Z | 189,165 | 14 | 2008-10-09T20:51:33Z | [
"python"
] | If I'm making a simple grid based game, for example, I might have a few 2d lists. One might be for terrain, another might be for objects, etc. Unfortunately, when I need to iterate over the lists and have the contents of a square in one list affect part of another list, I have to do something like this.
```
for i in range(len(alist)):
for j in range(len(alist[i])):
if alist[i][j].isWhatever:
blist[i][j].doSomething()
```
Is there a nicer way to do something like this? | I'd start by writing a generator method:
```
def grid_objects(alist, blist):
for i in range(len(alist)):
for j in range(len(alist[i])):
yield(alist[i][j], blist[i][j])
```
Then whenever you need to iterate over the lists your code looks like this:
```
for (a, b) in grid_objects(alist, blist):
if a.is_whatever():
b.do_something()
``` |
How can I, in python, iterate over multiple 2d lists at once, cleanly? | 189,087 | 22 | 2008-10-09T20:32:13Z | 189,497 | 32 | 2008-10-09T22:49:19Z | [
"python"
] | If I'm making a simple grid based game, for example, I might have a few 2d lists. One might be for terrain, another might be for objects, etc. Unfortunately, when I need to iterate over the lists and have the contents of a square in one list affect part of another list, I have to do something like this.
```
for i in range(len(alist)):
for j in range(len(alist[i])):
if alist[i][j].isWhatever:
blist[i][j].doSomething()
```
Is there a nicer way to do something like this? | If anyone is interested in performance of the above solutions, here they are for 4000x4000 grids, from fastest to slowest:
* [Brian](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189111): 1.08s (modified, with `izip` instead of `zip`)
* [John](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189270): 2.33s
* [DzinX](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189234): 2.36s
* [ΤΖΩΤΖΙΟΥ](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189348): 2.41s (but object initialization took 62s)
* [Eugene](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly): 3.17s
* [Robert](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189165): 4.56s
* [Brian](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189111): 27.24s (original, with `zip`)
**EDIT**: Added Brian's scores with `izip` modification and it won by a large amount!
John's solution is also very fast, although it uses indices (I was really surprised to see this!), whereas Robert's and Brian's (with `zip`) are slower than the question creator's initial solution.
So let's present [Brian](http://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189111)'s winning function, as it is not shown in proper form anywhere in this thread:
```
from itertools import izip
for a_row,b_row in izip(alist, blist):
for a_item, b_item in izip(a_row,b_row):
if a_item.isWhatever:
b_item.doSomething()
``` |
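If you want to re-run a comparison like this on a current interpreter, bear in mind that `izip` is gone in Python 3 (plain `zip` is lazy there). A small harness along these lines reproduces the benchmark; the cell class and grid size are made up, and the printed timings are whatever your machine gives:

```python
import timeit

class Cell(object):
    def __init__(self, flag=False):
        self.flag = flag
        self.count = 0

    def bump(self):
        self.count += 1

SIZE = 100
alist = [[Cell((i + j) % 3 == 0) for j in range(SIZE)] for i in range(SIZE)]
blist = [[Cell() for _ in range(SIZE)] for _ in range(SIZE)]

def by_index():
    for i in range(len(alist)):
        for j in range(len(alist[i])):
            if alist[i][j].flag:
                blist[i][j].bump()

def by_zip():
    for a_row, b_row in zip(alist, blist):
        for a_item, b_item in zip(a_row, b_row):
            if a_item.flag:
                b_item.bump()

print("index:", timeit.timeit(by_index, number=20))
print("zip:  ", timeit.timeit(by_zip, number=20))
```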
How to use Python to login to a webpage and retrieve cookies for later usage? | 189,555 | 126 | 2008-10-09T23:14:43Z | 189,580 | 132 | 2008-10-09T23:24:39Z | [
"python",
"http",
"authentication",
"cookies"
] | I want to download and parse webpage using python, but to access it I need a couple of cookies set. Therefore I need to login over https to the webpage first. The login moment involves sending two POST params (username, password) to /login.php. During the login request I want to retrieve the cookies from the response header and store them so I can use them in the request to download the webpage /data.php.
How would I do this in python (preferably 2.6)? If possible I only want to use builtin modules. | ```
import urllib, urllib2, cookielib
username = 'myuser'
password = 'mypassword'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'password' : password})
opener.open('http://www.example.com/login.php', login_data)
resp = opener.open('http://www.example.com/hiddenpage.php')
print resp.read()
```
`resp.read()` is the straight html of the page you want to open, and you can use `opener` to view any page using your session cookie. |
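If the cookies have to survive between runs (the "later usage" part), the jar itself can be saved to disk. A sketch using the Python 3 module name `http.cookiejar` (in 2.x the same classes live in `cookielib`); normally `opener.open()` fills the jar for you, and the hand-built `Cookie` below exists only to demonstrate the round trip:

```python
import os
import tempfile
from http.cookiejar import LWPCookieJar, Cookie

jar_path = os.path.join(tempfile.mkdtemp(), "cookies.lwp")
jar = LWPCookieJar(jar_path)
jar.set_cookie(Cookie(
    version=0, name="session", value="abc123",
    port=None, port_specified=False,
    domain="example.com", domain_specified=True, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={},
))
jar.save(ignore_discard=True)   # discard-on-close cookies need the flag

restored = LWPCookieJar(jar_path)
restored.load(ignore_discard=True)
print([c.name for c in restored])
```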
How to use Python to login to a webpage and retrieve cookies for later usage? | 189,555 | 126 | 2008-10-09T23:14:43Z | 12,103,969 | 122 | 2012-08-24T06:07:36Z | [
"python",
"http",
"authentication",
"cookies"
] | I want to download and parse webpage using python, but to access it I need a couple of cookies set. Therefore I need to login over https to the webpage first. The login moment involves sending two POST params (username, password) to /login.php. During the login request I want to retrieve the cookies from the response header and store them so I can use them in the request to download the webpage /data.php.
How would I do this in python (preferably 2.6)? If possible I only want to use builtin modules. | Here's a version using the excellent [requests](http://docs.python-requests.org/en/latest/index.html) library:
```
from requests import session
payload = {
'action': 'login',
'username': USERNAME,
'password': PASSWORD
}
with session() as c:
c.post('http://example.com/login.php', data=payload)
response = c.get('http://example.com/protected_page.php')
print(response.headers)
print(response.text)
``` |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 189,664 | 70 | 2008-10-10T00:11:37Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | First, ordinary logic is helpful.
If, for some reason, the terminating conditions can't be worked out, exceptions are a fall-back plan.
```
class GetOutOfLoop( Exception ):
pass
try:
done= False
while not done:
isok= False
while not (done or isok):
ok = get_input("Is this ok? (y/n)")
if ok in ("y", "Y") or ok in ("n", "N") :
done= True # probably better
raise GetOutOfLoop
# other stuff
except GetOutOfLoop:
pass
```
For this specific example, an exception may not be necessary.
On the other hand, we often have "Y", "N" and "Q" options in character-mode applications. For the "Q" option, we want an immediate exit. That's more exceptional.
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 189,666 | 29 | 2008-10-10T00:12:38Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | First, you may also consider making the process of getting and validating the input a function; within that function, you can just return the value if its correct, and keep spinning in the *while* loop if not. This essentially obviates the problem you solved, and can usually be applied in the more general case (breaking out of multiple loops). If you absolutely must keep this structure in your code, and really don't want to deal with bookkeeping booleans...
You may also use *goto* in the following way (using an April Fools module from [here](http://entrian.com/goto/)):
```
#import the stuff
from goto import goto, label
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": goto .breakall
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
label .breakall
```
I know, I know, "thou shalt not use goto" and all that, but it works well in strange cases like this. |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 189,685 | 258 | 2008-10-10T00:25:05Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | My first instinct would be to refactor the nested loop into a function and use `return` to break out. |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 189,696 | 7 | 2008-10-10T00:29:37Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | ```
keeplooping=True
while keeplooping:
#Do Stuff
while keeplooping:
#do some other stuff
if finisheddoingstuff(): keeplooping=False
```
or something like that. You could set a variable in the inner loop, and check it in the outer loop immediately after the inner loop exits, breaking if appropriate. I kinda like the GOTO method, provided you don't mind using an April Fool's joke module - it's not Pythonic, but it does make sense.
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 189,838 | 7 | 2008-10-10T01:41:42Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | This isn't the prettiest way to do it, but in my opinion, it's the best way.
```
def loop():
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": return
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
I'm pretty sure you could work out something using recursion here as well, but I dunno if that's a good option for you. |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 190,070 | 93 | 2008-10-10T03:50:44Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | [PEP 3136](http://www.python.org/dev/peps/pep-3136/) proposes labeled break/continue. Guido [rejected it](http://mail.python.org/pipermail/python-3000/2007-July/008663.html) because "code so complicated to require this feature is very rare". The PEP does mention some workarounds, though (such as the exception technique), while Guido feels refactoring to use return will be simpler in most cases. |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 2,621,659 | 7 | 2010-04-12T11:33:15Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | Factor your loop logic into an iterator that yields the loop variables and returns when done -- here is a simple one that lays out images in rows/columns until we're out of images or out of places to put them:
```
def it(rows, cols, images):
i = 0
for r in xrange(rows):
for c in xrange(cols):
if i >= len(images):
return
yield r, c, images[i]
i += 1
for r, c, image in it(rows=4, cols=4, images=['a.jpg', 'b.jpg', 'c.jpg']):
... do something with r, c, image ...
```
This has the advantage of splitting up the complicated loop logic and the processing... |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 3,150,107 | 63 | 2010-06-30T14:15:59Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | Here's another approach that is short. The disadvantage is that you can only break the outer loop, but sometimes it's exactly what you want.
```
for a in xrange(10):
for b in xrange(20):
if something(a, b):
# Break the inner loop...
break
else:
# Continue if the inner loop wasn't broken.
continue
# Inner loop was broken, break the outer.
break
``` |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 3,171,971 | 34 | 2010-07-03T15:50:53Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
#snip: print out current state
while True:
ok = get_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": break 2 #this doesn't work :(
if ok == "n" or ok == "N": break
#do more processing with menus and stuff
```
Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | I tend to agree that refactoring into a function is usually the best approach for this sort of situation, but for when you *really* need to break out of nested loops, here's an interesting variant of the exception-raising approach that @S.Lott described. It uses Python's `with` statement to make the exception raising look a bit nicer. Define a new context manager (you only have to do this once) with:
```
from contextlib import contextmanager
@contextmanager
def nested_break():
class NestedBreakException(Exception):
pass
try:
yield NestedBreakException
except NestedBreakException:
pass
```
Now you can use this context manager as follows:
```
with nested_break() as mylabel:
while True:
print "current state"
while True:
ok = raw_input("Is this ok? (y/n)")
if ok == "y" or ok == "Y": raise mylabel
if ok == "n" or ok == "N": break
print "more processing"
```
Advantages: (1) it's slightly cleaner (no explicit try-except block), and (2) you get a custom-built `Exception` subclass for each use of `nested_break`; no need to declare your own `Exception` subclass each time. |
How to break out of multiple loops in Python? | 189,645 | 226 | 2008-10-10T00:02:01Z | 6,564,670 | 12 | 2011-07-03T18:15:55Z | [
"python",
"control-flow"
] | Given the following code (that doesn't work):
```
while True:
    #snip: print out current state
    while True:
        ok = get_input("Is this ok? (y/n)")
        if ok == "y" or ok == "Y": break 2 #this doesn't work :(
        if ok == "n" or ok == "N": break
    #do more processing with menus and stuff
```
Is there a way to make this work? Or do I have to do one check to break out of the input loop, then another, more limited, check in the outside loop to break out altogether if the user is satisfied?
**Edit-FYI:** get\_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns `stdin.readline().strip()` | Introduce a new variable that you'll use as a 'loop breaker'. First assign something to it (False, 0, etc.), and then, inside the outer loop, before you break from it, change the value to something else (True, 1, ...). Once the loop exits, make the 'parent' loop check for that value. Let me demonstrate:
```
breaker = False  # our mighty loop exiter!
while True:
    while True:
        if conditionMet:
            # insert code here...
            breaker = True
            break
    if breaker:  # the interesting part!
        break    # <--- !
```
If you have an infinite loop, this is the only way out; for other loops it is also much faster than letting them run to completion. This also works if you have many nested loops: you can exit all of them, or just a few. Endless possibilities! Hope this helped! |
Google App Engine and 404 error | 189,751 | 35 | 2008-10-10T00:51:00Z | 189,935 | 33 | 2008-10-10T02:36:02Z | [
"python",
"google-app-engine",
"http-status-code-404"
] | I've setup a static website on GAE using hints found elsewhere, but can't figure out how to return a 404 error. My app.yaml file looks like
```
- url: (.*)/
static_files: static\1/index.html
upload: static/index.html
- url: /
static_dir: static
```
with all the static html/jpg files stored under the static directory. The above works for files that exist, but returns a zero-length file if they don't. The answer is probably to write a Python script to return a 404 error, but how do you set things up to serve the static files that exist but run the script for files that don't? |
Here is the log from fetching a non-existent file (nosuch.html) on the development application server:
```
ERROR 2008-11-25 20:08:34,084 dev_appserver.py] Error encountered reading file "/usr/home/ctuffli/www/tufflinet/static/nosuch.html":
[Errno 2] No such file or directory: '/usr/home/ctuffli/www/tufflinet/static/nosuch.html'
INFO 2008-11-25 20:08:34,088 dev_appserver.py] "GET /nosuch.html HTTP/1.1" 404 -
``` | You need to register a catch-all script handler. Append this at the end of your app.yaml:
```
- url: /.*
script: main.py
```
In main.py you will need to put this code:
```
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
class NotFoundPageHandler(webapp.RequestHandler):
    def get(self):
        self.error(404)
        self.response.out.write('<Your 404 error html page>')

application = webapp.WSGIApplication([('/.*', NotFoundPageHandler)],
                                     debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```
Replace `<Your 404 error html page>` with something meaningful. Or better, use a template; you can read how to do that [here](http://code.google.com/appengine/docs/gettingstarted/templates.html).
Please let me know if you have problems setting this up. |
Google App Engine and 404 error | 189,751 | 35 | 2008-10-10T00:51:00Z | 3,722,135 | 25 | 2010-09-15T21:50:05Z | [
"python",
"google-app-engine",
"http-status-code-404"
] | I've setup a static website on GAE using hints found elsewhere, but can't figure out how to return a 404 error. My app.yaml file looks like
```
- url: (.*)/
static_files: static\1/index.html
upload: static/index.html
- url: /
static_dir: static
```
with all the static html/jpg files stored under the static directory. The above works for files that exist, but returns a zero-length file if they don't. The answer is probably to write a Python script to return a 404 error, but how do you set things up to serve the static files that exist but run the script for files that don't? |
Here is the log from fetching a non-existent file (nosuch.html) on the development application server:
```
ERROR 2008-11-25 20:08:34,084 dev_appserver.py] Error encountered reading file "/usr/home/ctuffli/www/tufflinet/static/nosuch.html":
[Errno 2] No such file or directory: '/usr/home/ctuffli/www/tufflinet/static/nosuch.html'
INFO 2008-11-25 20:08:34,088 dev_appserver.py] "GET /nosuch.html HTTP/1.1" 404 -
``` | google app engine now has [Custom Error Responses](https://cloud.google.com/appengine/docs/python/config/appconfig?csw=1#Python_app_yaml_Custom_error_responses)
so you can now add an error\_handlers section to your app.yaml, as in this example:
```
error_handlers:
- file: default_error.html
- error_code: over_quota
file: over_quota.html
``` |
How can I quantify difference between two images? | 189,943 | 88 | 2008-10-10T02:39:33Z | 189,960 | 41 | 2008-10-10T02:46:02Z | [
"python",
"image-processing",
"background-subtraction",
"image-comparison",
"timelapse"
] | Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much *looks* the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection.
I'm using python. | **A simple solution:**
Encode the image as a **jpeg** and look for a substantial change in **filesize**.
I've implemented something similar with video thumbnails, and had a lot of success and scalability. |
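As a rough sketch of that filesize heuristic (the `size_changed` helper and the 10% threshold are my own assumptions, not from the answer; the inputs are the raw JPEG bytes of two snapshots):

```python
def size_changed(prev_jpeg, new_jpeg, threshold=0.10):
    """Return True when the encoded sizes differ by more than `threshold`."""
    if not prev_jpeg:
        return True  # nothing to compare against yet: always store
    ratio = abs(len(new_jpeg) - len(prev_jpeg)) / len(prev_jpeg)
    return ratio > threshold

# Stand-ins for real JPEG payloads: only their lengths matter here.
prev = b"\xff" * 10_000
same = b"\xff" * 10_100   # ~1% larger -> treat as "no real change"
diff = b"\xff" * 14_000   # 40% larger -> treat as "scene changed"
print(size_changed(prev, same))  # False
print(size_changed(prev, diff))  # True
```

The threshold would have to be determined empirically, as the question anticipates.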
How can I quantify difference between two images? | 189,943 | 88 | 2008-10-10T02:39:33Z | 190,036 | 12 | 2008-10-10T03:37:57Z | [
"python",
"image-processing",
"background-subtraction",
"image-comparison",
"timelapse"
] | Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much *looks* the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection.
I'm using python. | A trivial thing to try:
Resample both images to small thumbnails (e.g. 64 x 64) and compare the thumbnails pixel-by-pixel with a certain threshold. If the original images are almost the same, the resampled thumbnails will be very similar or even exactly the same. This method takes care of noise that can occur especially in low-light scenes. It may even be better if you go grayscale. |
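A pure-Python sketch of that idea, treating images as 2D lists of grayscale values (the 4x4 block size and the per-pixel threshold of 8 are arbitrary choices):

```python
def thumbnail(img, block=4):
    """Downsample a 2D grayscale image (list of rows) by block averaging."""
    h, w = len(img), len(img[0])
    return [
        [
            sum(img[y + dy][x + dx] for dy in range(block) for dx in range(block))
            / (block * block)
            for x in range(0, w - block + 1, block)
        ]
        for y in range(0, h - block + 1, block)
    ]

def thumbs_similar(a, b, per_pixel_threshold=8):
    """Compare two equally sized thumbnails pixel-by-pixel."""
    return all(
        abs(pa - pb) <= per_pixel_threshold
        for row_a, row_b in zip(a, b)
        for pa, pb in zip(row_a, row_b)
    )

img1 = [[100] * 16 for _ in range(16)]   # base scene
img2 = [[104] * 16 for _ in range(16)]   # slightly brighter: same scene + noise
img3 = [[200] * 16 for _ in range(16)]   # very different scene
print(thumbs_similar(thumbnail(img1), thumbnail(img2)))  # True
print(thumbs_similar(thumbnail(img1), thumbnail(img3)))  # False
```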
How can I quantify difference between two images? | 189,943 | 88 | 2008-10-10T02:39:33Z | 190,061 | 14 | 2008-10-10T03:47:44Z | [
"python",
"image-processing",
"background-subtraction",
"image-comparison",
"timelapse"
] | Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much *looks* the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection.
I'm using python. | Two popular and relatively simple methods are: (a) the Euclidean distance already suggested, or (b) normalized cross-correlation. Normalized cross-correlation tends to be noticeably more robust to lighting changes than simple cross-correlation. Wikipedia gives a formula for the [normalized cross-correlation](http://en.wikipedia.org/wiki/Cross-correlation#Normalized_cross-correlation). More sophisticated methods exist too, but they require quite a bit more work.
Using numpy-like syntax,
```
dist_euclidean = sqrt(sum((i1 - i2)^2)) / i1.size
dist_manhattan = sum(abs(i1 - i2)) / i1.size
dist_ncc = sum( (i1 - mean(i1)) * (i2 - mean(i2)) ) / (
    (i1.size - 1) * stdev(i1) * stdev(i2) )
```
assuming that `i1` and `i2` are 2D grayscale image arrays. |
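Those pseudo-formulas can be turned into runnable NumPy code like this (`ddof=1` gives the sample standard deviation, matching the `(i1.size - 1)` denominator above):

```python
import numpy as np

def dist_euclidean(i1, i2):
    return np.sqrt(np.sum((i1 - i2) ** 2)) / i1.size

def dist_manhattan(i1, i2):
    return np.sum(np.abs(i1 - i2)) / i1.size

def dist_ncc(i1, i2):
    # sample standard deviation (ddof=1) matches the (size - 1) denominator
    return np.sum((i1 - i1.mean()) * (i2 - i2.mean())) / (
        (i1.size - 1) * i1.std(ddof=1) * i2.std(ddof=1)
    )

a = np.random.default_rng(0).random((64, 64))
print(dist_euclidean(a, a))       # 0.0: identical images
print(round(dist_ncc(a, a), 6))   # 1.0: perfectly correlated
```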
How can I quantify difference between two images? | 189,943 | 88 | 2008-10-10T02:39:33Z | 196,882 | 40 | 2008-10-13T06:50:17Z | [
"python",
"image-processing",
"background-subtraction",
"image-comparison",
"timelapse"
] | Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much *looks* the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection.
I'm using python. | You can compare two images using functions from [PIL](http://www.pythonware.com/products/pil/).
```
import Image
import ImageChops
im1 = Image.open("splash.png")
im2 = Image.open("splash2.png")
diff = ImageChops.difference(im2, im1)
```
The diff object is an image in which every pixel is the result of the subtraction of the color values of that pixel in the second image from the first image. Using the diff image you can do several things. The simplest one is the `diff.getbbox()` function. It will tell you the minimal rectangle that contains all the changes between your two images.
You can probably implement approximations of the other stuff mentioned here using functions from PIL as well. |
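A quick, self-contained check of that `getbbox()` behavior, using the modern Pillow import style (`from PIL import ...`) and synthetic in-memory images rather than files:

```python
from PIL import Image, ImageChops

im1 = Image.new("RGB", (32, 32), "black")
im2 = im1.copy()
im2.putpixel((10, 12), (255, 0, 0))   # change a single pixel

diff = ImageChops.difference(im2, im1)
print(diff.getbbox())   # minimal box around the changed pixel
print(ImageChops.difference(im1, im1.copy()).getbbox())  # None: no change
```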
How can I quantify difference between two images? | 189,943 | 88 | 2008-10-10T02:39:33Z | 3,935,002 | 143 | 2010-10-14T15:43:55Z | [
"python",
"image-processing",
"background-subtraction",
"image-comparison",
"timelapse"
] | Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much *looks* the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection.
I'm using python. | ## General idea
Option 1: Load both images as arrays (`scipy.misc.imread`) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the difference.
Option 2: Load both images. Calculate some feature vector for each of them (like a histogram). Calculate distance between feature vectors rather than images.
However, there are some decisions to make first.
## Questions
You should answer these questions first:
* Are images of the same shape and dimension?
If not, you may need to resize or crop them. PIL library will help to do it in Python.
If they are taken with the same settings and the same device, they are probably the same.
* Are images well-aligned?
If not, you may want to run cross-correlation first, to find the best alignment first. SciPy has functions to do it.
If the camera and the scene are still, the images are likely to be well-aligned.
* Is exposure of the images always the same? (Is lightness/contrast the same?)
If not, you may want [to normalize](http://en.wikipedia.org/wiki/Normalization_(image_processing)) images.
But be careful: in some situations this may do more harm than good. For example, a single bright pixel on a dark background will make the normalized image very different.
* Is color information important?
If you want to notice color changes, you will have a vector of color values per point, rather than a scalar value as in gray-scale image. You need more attention when writing such code.
* Are there distinct edges in the image? Are they likely to move?
If yes, you can apply edge detection algorithm first (e.g. calculate gradient with Sobel or Prewitt transform, apply some threshold), then compare edges on the first image to edges on the second.
* Is there noise in the image?
All sensors pollute the image with some amount of noise. Low-cost sensors have more noise. You may wish to apply some noise reduction before you compare images. Blur is the simplest (but not the best) approach here.
* What kind of changes do you want to notice?
This may affect the choice of norm to use for the difference between images.
Consider using Manhattan norm (the sum of the absolute values) or zero norm (the number of elements not equal to zero) to measure how much the image has changed. The former will tell you how much the image is off, the latter will tell only how many pixels differ.
## Example
I assume your images are well-aligned, the same size and shape, possibly with different exposure. For simplicity, I convert them to grayscale even if they are color (RGB) images.
You will need these imports:
```
import sys
from scipy.misc import imread
from scipy.linalg import norm
from scipy import sum, average
```
Main function, read two images, convert to grayscale, compare and print results:
```
def main():
    file1, file2 = sys.argv[1:1+2]
    # read images as 2D arrays (convert to grayscale for simplicity)
    img1 = to_grayscale(imread(file1).astype(float))
    img2 = to_grayscale(imread(file2).astype(float))
    # compare
    n_m, n_0 = compare_images(img1, img2)
    print "Manhattan norm:", n_m, "/ per pixel:", n_m/img1.size
    print "Zero norm:", n_0, "/ per pixel:", n_0*1.0/img1.size
```
How to compare. `img1` and `img2` are 2D SciPy arrays here:
```
def compare_images(img1, img2):
    # normalize to compensate for exposure difference, this may be unnecessary
    # consider disabling it
    img1 = normalize(img1)
    img2 = normalize(img2)
    # calculate the difference and its norms
    diff = img1 - img2  # elementwise for scipy arrays
    m_norm = sum(abs(diff))  # Manhattan norm
    z_norm = norm(diff.ravel(), 0)  # Zero norm
    return (m_norm, z_norm)
```
If the file is a color image, `imread` returns a 3D array, average RGB channels (the last array axis) to obtain intensity. No need to do it for grayscale images (e.g. `.pgm`):
```
def to_grayscale(arr):
    "If arr is a color image (3D array), convert it to grayscale (2D array)."
    if len(arr.shape) == 3:
        return average(arr, -1)  # average over the last axis (color channels)
    else:
        return arr
```
Normalization is trivial, you may choose to normalize to [0,1] instead of [0,255]. `arr` is a SciPy array here, so all operations are element-wise:
```
def normalize(arr):
    rng = arr.max() - arr.min()
    amin = arr.min()
    return (arr - amin) * 255 / rng
```
Run the `main` function:
```
if __name__ == "__main__":
    main()
```
Now you can put this all in a script and run against two images. If we compare image to itself, there is no difference:
```
$ python compare.py one.jpg one.jpg
Manhattan norm: 0.0 / per pixel: 0.0
Zero norm: 0 / per pixel: 0.0
```
If we blur the image and compare to the original, there is some difference:
```
$ python compare.py one.jpg one-blurred.jpg
Manhattan norm: 92605183.67 / per pixel: 13.4210411116
Zero norm: 6900000 / per pixel: 1.0
```
P.S. Entire [compare.py](http://gist.github.com/626356) script.
## Update: relevant techniques
As the question is about a video sequence, where frames are likely to be almost the same, and you look for something unusual, I'd like to mention some alternative approaches which may be relevant:
* background subtraction and segmentation (to detect foreground objects)
* sparse optical flow (to detect motion)
* comparing histograms or some other statistics instead of images
I strongly recommend taking a look at the "Learning OpenCV" book, Chapters 9 (Image parts and segmentation) and 10 (Tracking and motion). The former teaches how to use the background subtraction method, the latter gives some info on optical flow methods. All methods are implemented in the OpenCV library. If you use Python, I suggest using OpenCV ≥ 2.3 and its `cv2` Python module.
The most simple version of the background subtraction:
* learn the average value μ and standard deviation σ for every pixel of the background
* compare current pixel values to the range of (μ-2σ, μ+2σ) or (μ-σ, μ+σ)
More advanced versions may take into account time series for every pixel and handle non-static scenes (like moving trees or grass).
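The two bullet points above can be sketched with NumPy as follows (the frame shapes, `k`, and the small epsilon guarding against zero σ are my assumptions):

```python
import numpy as np

def learn_background(frames):
    """frames: sequence of (h, w) grayscale arrays of the empty scene."""
    stack = np.asarray(frames, dtype=float)
    return stack.mean(axis=0), stack.std(axis=0)

def foreground_mask(frame, mu, sigma, k=2.0):
    """Flag pixels that fall outside the (mu - k*sigma, mu + k*sigma) band."""
    return np.abs(frame - mu) > k * sigma + 1e-9  # epsilon guards sigma == 0

rngs = [np.random.default_rng(i) for i in range(20)]
bg_frames = [100.0 + r.normal(0.0, 1.0, (4, 4)) for r in rngs]  # noisy background
mu, sigma = learn_background(bg_frames)

new_frame = np.full((4, 4), 100.0)
new_frame[1, 2] = 180.0                     # an "object" appears at one pixel
mask = foreground_mask(new_frame, mu, sigma)
print(bool(mask[1, 2]))   # True: that pixel left the background band
```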
The idea of optical flow is to take two or more frames, and assign velocity vector to every pixel (dense optical flow) or to some of them (sparse optical flow). To estimate sparse optical flow, you may use [Lucas-Kanade method](http://en.wikipedia.org/wiki/Lucas%E2%80%93Kanade_method) (it is also implemented in OpenCV). Obviously, if there is a lot of flow (high average over max values of the velocity field), then something is moving in the frame, and subsequent images are more different.
Comparing histograms may help to detect sudden changes between consecutive frames. This approach was used in [Courbon et al, 2010](http://www.sciencedirect.com/science/article/pii/S0967066110000808):
> *Similarity of consecutive frames.* The distance between two consecutive frames is measured. If it is too high, it means that the second frame is corrupted and thus the image is eliminated. The [Kullback–Leibler distance](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence), or mutual entropy, on the histograms of the two frames:
>
> d(p, q) = Σᵢ p(i) · log( p(i) / q(i) )
>
> where *p* and *q* are the histograms of the two frames, is used. The threshold is fixed at 0.2. |
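A small NumPy sketch of that histogram test (the example histograms are invented; 0.2 is the threshold quoted from the paper, and the epsilon avoids log-of-zero):

```python
import numpy as np

def kl_distance(p, q, eps=1e-10):
    """Kullback-Leibler distance between two histograms (normalized inside)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

frame_a = [10, 20, 40, 20, 10]   # histogram of the current frame
frame_b = [11, 19, 41, 19, 10]   # next frame, nearly identical
frame_c = [40, 30, 10, 10, 10]   # corrupted / very different frame

print(kl_distance(frame_a, frame_b) < 0.2)  # True: below threshold, keep it
print(kl_distance(frame_a, frame_c) < 0.2)  # False: above threshold, drop it
```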
Daemon Threads Explanation | 190,010 | 104 | 2008-10-10T03:24:07Z | 190,017 | 220 | 2008-10-10T03:27:36Z | [
"python",
"multithreading"
] | In the [Python documentation](https://docs.python.org/2/library/threading.html#thread-objects)
it says:
> A thread can be flagged as a "daemon thread". The significance of this
> flag is that the entire Python program exits when only daemon threads
> are left. The initial value is inherited from the creating thread.
Does anyone have a clearer explanation of what that means or a practical example showing where you would want to set threads as `daemonic`?
---
To clarify for me:
so the only time you **wouldn't** set threads as daemonic is if you wanted them to continue running after the main thread exits? | Some threads do background tasks, like sending keepalive packets, or performing periodic garbage collection, or whatever. These are only useful when the main program is running, and it's okay to kill them off once the other, non-daemon, threads have exited.
Without daemon threads, you'd have to keep track of them, and tell them to exit, before your program can completely quit. By setting them as daemon threads, you can let them run and forget about them, and when your program quits, any daemon threads are killed automatically. |
Daemon Threads Explanation | 190,010 | 104 | 2008-10-10T03:24:07Z | 190,033 | 14 | 2008-10-10T03:36:16Z | [
"python",
"multithreading"
] | In the [Python documentation](https://docs.python.org/2/library/threading.html#thread-objects)
it says:
> A thread can be flagged as a "daemon thread". The significance of this
> flag is that the entire Python program exits when only daemon threads
> are left. The initial value is inherited from the creating thread.
Does anyone have a clearer explanation of what that means or a practical example showing where you would want to set threads as `daemonic`?
---
To clarify for me:
so the only time you **wouldn't** set threads as daemonic is if you wanted them to continue running after the main thread exits? | Let's say you're making some kind of dashboard widget. As part of this, you want it to display the unread message count in your email box. So you make a little thread that will:
1. Connect to the mail server and ask how many unread messages you have.
2. Signal the GUI with the updated count.
3. Sleep for a little while.
When your widget starts up, it would create this thread, designate it a daemon, and start it. Because it's a daemon, you don't have to think about it; when your widget exits, the thread will stop automatically. |
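That widget scenario can be sketched in a few lines (Python 3 syntax; the polling function, counter, and interval are made-up stand-ins for a real mail-server query and GUI update):

```python
import threading
import time

unread_count = 0  # the value the GUI would display

def poll_mailbox(interval=0.1):
    """Stand-in for steps 1-3: query the server, update the count, sleep."""
    global unread_count
    while True:
        unread_count += 1       # pretend the server reported a new message
        time.sleep(interval)

poller = threading.Thread(target=poll_mailbox, daemon=True)
poller.start()
# ... the widget's GUI main loop would run here ...
# Because the poller is a daemon, it is killed automatically when the
# main thread exits; there is no need to signal it to stop.
```

Because `daemon=True` is set before `start()`, the interpreter will not wait for this thread at exit.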
Daemon Threads Explanation | 190,010 | 104 | 2008-10-10T03:24:07Z | 190,131 | 8 | 2008-10-10T04:34:28Z | [
"python",
"multithreading"
] | In the [Python documentation](https://docs.python.org/2/library/threading.html#thread-objects)
it says:
> A thread can be flagged as a "daemon thread". The significance of this
> flag is that the entire Python program exits when only daemon threads
> are left. The initial value is inherited from the creating thread.
Does anyone have a clearer explanation of what that means or a practical example showing where you would want to set threads as `daemonic`?
---
To clarify for me:
so the only time you **wouldn't** set threads as daemonic is if you wanted them to continue running after the main thread exits? | A simpler way to think about it, perhaps: when main returns, your process will not exit if there are non-daemon threads still running.
A bit of advice: Clean shutdown is easy to get wrong when threads and synchronization are involved - if you can avoid it, do so. Use daemon threads whenever possible. |
Daemon Threads Explanation | 190,010 | 104 | 2008-10-10T03:24:07Z | 583,996 | 7 | 2009-02-24T22:55:10Z | [
"python",
"multithreading"
] | In the [Python documentation](https://docs.python.org/2/library/threading.html#thread-objects)
it says:
> A thread can be flagged as a "daemon thread". The significance of this
> flag is that the entire Python program exits when only daemon threads
> are left. The initial value is inherited from the creating thread.
Does anyone have a clearer explanation of what that means or a practical example showing where you would want to set threads as `daemonic`?
---
To clarify for me:
so the only time you **wouldn't** set threads as daemonic is if you wanted them to continue running after the main thread exits? | Other posters gave some examples for situations in which you'd use daemon threads. My recommendation, however, is never to use them.
It's not because they're not useful, but because there are some bad side effects you can experience if you use them. Daemon threads can still execute after the Python runtime starts tearing down things in the main thread, causing some pretty bizarre exceptions.
More info here:
<http://joeshaw.org/2009/02/24/605>
<http://mail.python.org/pipermail/python-list/2005-February/307042.html>
Strictly speaking you never need them, it just makes implementation easier in some cases. |