title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 402,508 | 9 | 2008-12-31T08:02:11Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | ```
print type(variable_name)
```
I also highly recommend the [IPython](http://ipython.scipy.org/) interactive interpreter when dealing with questions like this. It lets you type `variable_name?` and will return a whole list of information about the object including the type and the doc string for the type.
e.g.
```
In [9]: var = 123
In [10]: var?
Type: int
Base Class: <type 'int'>
String Form: 123
Namespace: Interactive
Docstring:
int(x[, base]) -> integer
Convert a string or number to an integer, if possible. A floating point
argument will be truncated towards zero (this does not include a string
representation of a floating point number!) When converting a string, use
the optional base. It is an error to supply a base when converting a
non-string. If the argument is outside the integer range a long object
will be returned instead.
``` |
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 402,511 | 20 | 2008-12-31T08:08:06Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | The question is somewhat ambiguous -- I'm not sure what you mean by "view". If you are trying to *query* the type of a native Python object, [@atzz](http://stackoverflow.com/questions/402504/how-to-determine-the-variable-type-in-python#402516)'s answer will steer you in the right direction.
However, if you are trying to *generate* Python objects that have the semantics of primitive C-types, (such as `uint32_t`, `int16_t`), use the [`struct`](http://docs.python.org/library/struct.html#struct.calcsize) module. You can determine the number of bits in a given C-type primitive thusly:
```
>>> struct.calcsize('c') # char
1
>>> struct.calcsize('h') # short
2
>>> struct.calcsize('i') # int
4
>>> struct.calcsize('l') # long
4
```
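As a quick sketch of the C-type semantics `struct` gives you (format codes per the `struct` documentation: `<H` is a little-endian unsigned 16-bit integer), a value can be round-tripped through a fixed-width representation:

```
import struct

# Pack a value as an unsigned 16-bit integer and read it back.
packed = struct.pack('<H', 65535)
(value,) = struct.unpack('<H', packed)
print(value)        # 65535
print(len(packed))  # 2 bytes, as expected for a 16-bit type

# struct.pack raises an error on overflow; C-style wrap-around
# can be emulated by masking before packing:
print(65536 % (1 << 16))  # 0
```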
This is also reflected in the `array` module, which can make arrays of these lower-level types:
```
>>> array.array('c').itemsize # char
1
```
The maximum integer supported (Python 2's `int`) is given by [sys.maxint](http://docs.python.org/library/sys.html#sys.maxsize).
```
>>> import sys, math
>>> math.ceil(math.log(sys.maxint, 2)) + 1 # Signedness
32.0
```
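As a hedged modern aside, `int.bit_length()` (added in Python 2.7/3.1, so after this answer was written) gives the "how many bits" answer directly:

```
# bit_length() reports the minimum number of bits needed to
# represent the integer, ignoring sign and leading zeros.
n = 2 ** 31 - 1
print(n.bit_length())        # 31
print((n + 1).bit_length())  # 32
print((255).bit_length())    # 8
```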
There is also [sys.getsizeof](http://docs.python.org/library/sys.html#sys.getsizeof), which returns the actual size of the *Python* object in residual memory:
```
>>> a = 5
>>> sys.getsizeof(a) # Residual memory.
12
```
For float data and precision data, use [sys.float\_info](http://docs.python.org/library/sys.html#sys.float_info):
```
>>> sys.float_info
sys.floatinfo(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.2204460492503131e-16, radix=2, rounds=1)
``` |
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 402,515 | 12 | 2008-12-31T08:10:54Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | Do you mean in [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29) or using [ctypes](http://www.python.org/doc/2.5/lib/module-ctypes.html)?
In the first case, you simply cannot - because Python does not have signed/unsigned, 16/32 bit integers.
In the second case, you can use `type()`:
```
>>> import ctypes
>>> a = ctypes.c_uint() # unsigned int
>>> type(a)
<class 'ctypes.c_ulong'>
```
For more on ctypes and its types, see [the official documentation](http://docs.python.org/library/ctypes.html). |
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 402,516 | 10 | 2008-12-31T08:12:34Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | Python doesn't have such types as you describe. There are two types used to represent integral values: `int`, which corresponds to the platform's int type in C, and `long`, which is an arbitrary precision integer (i.e. it grows as needed and doesn't have an upper limit). `int`s are silently converted to `long` if an expression produces a result that cannot be stored in `int`. |
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 402,704 | 582 | 2008-12-31T10:43:07Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | Python doesn't have the same types as C/C++, which appears to be your question.
Try this:
```
>>> i = 123
>>> type(i)
<type 'int'>
>>> type(i) is int
True
>>> i = 123456789L
>>> type(i)
<type 'long'>
>>> type(i) is long
True
>>> i = 123.456
>>> type(i)
<type 'float'>
>>> type(i) is float
True
```
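A hedged aside for modern readers: in Python 3 `long` is gone and the same checks look like this:

```
# Python 3 has a single, arbitrary-precision int type.
i = 123456789123456789
print(type(i))         # <class 'int'>
print(type(i) is int)  # True
print(type(123.456) is float)  # True
```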
The distinction between int and long goes away in Python 3.0, though. |
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 18,224,831 | 9 | 2013-08-14T06:39:32Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | It may be a little irrelevant, but you can check the type of an object with `isinstance(object, type)`, as mentioned [here](http://stackoverflow.com/questions/2225038/python-determine-the-type-of-an-object "Object type in python"). |
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 29,113,307 | 26 | 2015-03-18T02:48:04Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | One more way using `__class__`:
```
>>> a = [1, 2, 3, 4]
>>> a.__class__
<type 'list'>
>>> b = {'key1': 'val1'}
>>> b.__class__
<type 'dict'>
>>> c = 12
>>> c.__class__
<type 'int'>
``` |
How to determine the variable type in Python? | 402,504 | 607 | 2008-12-31T07:58:57Z | 32,885,953 | 55 | 2015-10-01T11:02:32Z | [
"python",
"types",
"unsigned",
"signed",
"16-bit"
] | How do I see the type of a variable whether it is unsigned 32 bit, signed 16 bit, etc.?
How do I view it? | It is so simple. You do it like this.
```
print type(variable_name)
``` |
How to sort a list of objects in Python, based on an attribute of the objects? | 403,421 | 310 | 2008-12-31T16:41:32Z | 403,426 | 531 | 2008-12-31T16:42:59Z | [
"python",
"sorting",
"count"
] | I've got a list of Python objects that I'd like to sort by an attribute of the objects themselves. The list looks like:
```
>>> ut
[<Tag: 128>, <Tag: 2008>, <Tag: <>, <Tag: actionscript>, <Tag: addresses>, <Tag: aes>, <Tag: ajax> ...]
```
Each object has a count:
```
>>> ut[1].count
1L
```
I need to sort the list by number of counts descending.
I've seen several methods for this, but I'm looking for best practice in Python. | ```
# To sort the list in place...
ut.sort(key=lambda x: x.count, reverse=True)
# To return a new list, use the sorted() built-in function...
newlist = sorted(ut, key=lambda x: x.count, reverse=True)
```
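To make the idiom concrete, here is a self-contained sketch using a stand-in `Tag` class (the class is hypothetical, purely for illustration):

```
class Tag:
    def __init__(self, name, count):
        self.name = name
        self.count = count

ut = [Tag("ajax", 3), Tag("aes", 10), Tag("addresses", 7)]

# Sort in place, highest count first.
ut.sort(key=lambda x: x.count, reverse=True)
print([t.name for t in ut])  # ['aes', 'addresses', 'ajax']
```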
More on [sorting by keys »](http://wiki.python.org/moin/HowTo/Sorting#Sortingbykeys) |
How to sort a list of objects in Python, based on an attribute of the objects? | 403,421 | 310 | 2008-12-31T16:41:32Z | 403,496 | 9 | 2008-12-31T17:10:00Z | [
"python",
"sorting",
"count"
] | I've got a list of Python objects that I'd like to sort by an attribute of the objects themselves. The list looks like:
```
>>> ut
[<Tag: 128>, <Tag: 2008>, <Tag: <>, <Tag: actionscript>, <Tag: addresses>, <Tag: aes>, <Tag: ajax> ...]
```
Each object has a count:
```
>>> ut[1].count
1L
```
I need to sort the list by number of counts descending.
I've seen several methods for this, but I'm looking for best practice in Python. | This looks much like a list of Django ORM model instances.
Why not sort them in the query itself, like this:
```
ut = Tag.objects.order_by('-count')
``` |
How to sort a list of objects in Python, based on an attribute of the objects? | 403,421 | 310 | 2008-12-31T16:41:32Z | 403,607 | 38 | 2008-12-31T17:48:21Z | [
"python",
"sorting",
"count"
] | I've got a list of Python objects that I'd like to sort by an attribute of the objects themselves. The list looks like:
```
>>> ut
[<Tag: 128>, <Tag: 2008>, <Tag: <>, <Tag: actionscript>, <Tag: addresses>, <Tag: aes>, <Tag: ajax> ...]
```
Each object has a count:
```
>>> ut[1].count
1L
```
I need to sort the list by number of counts descending.
I've seen several methods for this, but I'm looking for best practice in Python. | A way that can be the fastest, especially if your list has a lot of records, is to use `operator.attrgetter("count")`. However, this might run on a pre-`operator` version of Python, so it would be nice to have a fallback mechanism. You might want to do the following, then:
```
try: import operator
except ImportError: cmpfun = lambda x: x.count  # use a lambda if no operator module
else: cmpfun = operator.attrgetter("count")  # use operator since it's faster than lambda
ut.sort(key=cmpfun, reverse=True)  # sort in place
``` |
How to sort a list of objects in Python, based on an attribute of the objects? | 403,421 | 310 | 2008-12-31T16:41:32Z | 403,795 | 10 | 2008-12-31T19:00:33Z | [
"python",
"sorting",
"count"
] | I've got a list of Python objects that I'd like to sort by an attribute of the objects themselves. The list looks like:
```
>>> ut
[<Tag: 128>, <Tag: 2008>, <Tag: <>, <Tag: actionscript>, <Tag: addresses>, <Tag: aes>, <Tag: ajax> ...]
```
Each object has a count:
```
>>> ut[1].count
1L
```
I need to sort the list by number of counts descending.
I've seen several methods for this, but I'm looking for best practice in Python. | ```
from operator import attrgetter
ut.sort(key = attrgetter('count'), reverse = True)
``` |
How to sort a list of objects in Python, based on an attribute of the objects? | 403,421 | 310 | 2008-12-31T16:41:32Z | 988,728 | 26 | 2009-06-12T19:54:58Z | [
"python",
"sorting",
"count"
] | I've got a list of Python objects that I'd like to sort by an attribute of the objects themselves. The list looks like:
```
>>> ut
[<Tag: 128>, <Tag: 2008>, <Tag: <>, <Tag: actionscript>, <Tag: addresses>, <Tag: aes>, <Tag: ajax> ...]
```
Each object has a count:
```
>>> ut[1].count
1L
```
I need to sort the list by number of counts descending.
I've seen several methods for this, but I'm looking for best practice in Python. | Readers should notice that the key= method:
```
ut.sort(key=lambda x: x.count, reverse=True)
```
is many times faster than adding rich comparison operators to the objects. I was surprised to read this (page 485 of "Python in a Nutshell"). You can confirm this by running tests on this little program:
```
#!/usr/bin/env python
import random
class C:
def __init__(self,count):
self.count = count
def __cmp__(self,other):
return cmp(self.count,other.count)
longList = [C(random.random()) for i in xrange(1000000)] #about 6.1 secs
longList2 = longList[:]
longList.sort() #about 52 - 6.1 = 46 secs
longList2.sort(key = lambda c: c.count) #about 9 - 6.1 = 3 secs
```
My very minimal tests show the first sort is more than 10 times slower, but the book says it is only about 5 times slower in general. The reason, they say, is the highly optimized sort algorithm used in Python (**timsort**).
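A hedged Python 3 re-run of the same experiment (`__cmp__`, `cmp` and `xrange` no longer exist there, so rich comparison operators stand in; absolute timings vary by machine):

```
import functools
import random
import timeit

@functools.total_ordering
class C:
    def __init__(self, count):
        self.count = count
    def __eq__(self, other):
        return self.count == other.count
    def __lt__(self, other):
        return self.count < other.count

data = [C(random.random()) for _ in range(10000)]

# Sorting via rich comparisons calls __lt__ O(n log n) times;
# sorting with key= extracts each key exactly once.
t_cmp = timeit.timeit(lambda: sorted(data), number=5)
t_key = timeit.timeit(lambda: sorted(data, key=lambda c: c.count), number=5)
print(t_cmp, t_key)
```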
Still, it's very odd that `.sort(key=lambda ...)` is faster than plain old `.sort()`. I hope they fix that. |
Optimization in Python - do's, don'ts and rules of thumb | 403,794 | 8 | 2008-12-31T18:57:59Z | 403,821 | 12 | 2008-12-31T19:13:19Z | [
"python",
"optimization"
] | Well I was reading this [post](http://handyfloss.wordpress.com/2008/02/17/summary-of-my-python-optimization-adventures/) and then I came across a code which was:
```
jokes=range(1000000)
domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]
```
I thought wouldn't it be better to calculate the value of len(jokes) once outside the list comprehension?
Well I tried it and timed three codes
```
jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]'
10000000 loops, best of 3: 0.0352 usec per loop
jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes);domain=[(0,(l*2)-i-1) for i in range(0,l*2)]'
10000000 loops, best of 3: 0.0343 usec per loop
jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes)*2;domain=[(0,l-i-1) for i in range(0,l)]'
10000000 loops, best of 3: 0.0333 usec per loop
```
Observing the marginal difference 2.55% between the first and the second made me think - is the first list comprehension
```
domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]
```
optimized internally by Python? Or is 2.55% a big enough optimization (given that len(jokes) = 1000000)?
If this is - What are the other implicit/internal optimizations in Python ?
What are the `developer's rules of thumb for optimization in Python`?
**Edit1**: Since most of the answers are "don't optimize, do it later if it's slow", and I got some tips and links from `Triptych` and `Ali A` for the **do's**,
I will change the question a bit and ask for the **don'ts**.
Can we have some experiences from people who faced the '**slowness**', what was the problem and how it was corrected?
**Edit2**: For those who haven't here is an [interesting read](http://www.python.org/doc/essays/list2str/)
**Edit3:** Incorrect usage of `timeit` in question please see **dF's** answer for correct usage and hence timings for the three codes. | You're not using [`timeit`](http://docs.python.org/library/timeit.html) correctly: the argument to `-s` (setup) is a statement to be executed once initially, so you're really just testing an empty statement. You want to do
```
$ python -m timeit -s "jokes=range(1000000)" "domain=[(0,(len(jokes)*2)-i-1) for i in range(0, len(jokes)*2)]"
10 loops, best of 3: 1.08 sec per loop
$ python -m timeit -s "jokes=range(1000000)" "l=len(jokes);domain=[(0,(l*2)-i-1) for i in range(0, l*2)]"
10 loops, best of 3: 908 msec per loop
$ python -m timeit -s "jokes=range(1000000)" "l=len(jokes*2);domain=[(0,l-i-1) for i in range(0, l)]"
10 loops, best of 3: 813 msec per loop
```
While the speedup is still not dramatic, it's more significant (16% and 25% respectively). So since it doesn't make the code any more complicated, this simple optimization is probably worth it.
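The same measurement can also be scripted with the `timeit` module rather than the command line (a sketch with a smaller list so it runs quickly; absolute numbers depend on the machine):

```
import timeit

setup = "jokes = list(range(100000))"
inline = "[(0, (len(jokes) * 2) - i - 1) for i in range(0, len(jokes) * 2)]"
hoisted = "l = len(jokes) * 2\n[(0, l - i - 1) for i in range(0, l)]"

print(timeit.timeit(inline, setup, number=5))
print(timeit.timeit(hoisted, setup, number=5))

# Both forms build the same list; only the speed differs.
jokes = list(range(100))
a = [(0, (len(jokes) * 2) - i - 1) for i in range(0, len(jokes) * 2)]
l = len(jokes) * 2
b = [(0, l - i - 1) for i in range(0, l)]
print(a == b)  # True
```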
To address the actual question... the usual rule of thumb in Python is to
1. Favor straightforward and readable code over optimization when coding.
2. Profile your code ([`profile / cProfile` and `pstats`](http://docs.python.org/library/profile.html) are your friends) to figure out what you need to optimize (usually things like tight loops).
3. As a last resort, re-implement these as C extensions, which is made much easier with tools like [pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/) and [cython](http://www.cython.org/).
One thing to watch out for: compared to many other languages, function calls are relatively expensive in Python which is why the optimization in your example made a difference even though `len` is O(1) for lists. |
Calling from a parent file in python | 403,822 | 3 | 2008-12-31T19:14:59Z | 403,832 | 9 | 2008-12-31T19:20:03Z | [
"python"
] | I have a file called main.py and a file called classes.py
main.py contains the application and what's happening while class.py contains some classes.
main.py has the following code
**main.py**
```
import classes
def addItem(text):
print text
myClass = classes.ExampleClass()
```
And then we have classes.py
**classes.py**
```
class ExampleClass (object):
def __init__(self):
addItem('bob')
```
Surprisingly enough that's not the actual code I am using because I've stripped out anything that'd get in the way of you seeing what I want to do. I want to be able to call a method that's defined in main.py from a class within classes.py. How do I do this?
Thanks in advance | I couldn't answer this any better than [this post by Alex Martelli](http://mail.python.org/pipermail/python-list/2000-December/059926.html). Basically any way you try to do this will lead to trouble and you are much better off refactoring the code to avoid mutual dependencies between two modules...
If you have two modules A and B which depend on each other, the easiest way is to isolate a part of the code that they both depend on into a third module C, and have both of them import C. |
All code in one file | 403,934 | 5 | 2008-12-31T19:59:29Z | 403,944 | 12 | 2008-12-31T20:04:33Z | [
"python",
"version-control",
"project-management"
] | After asking [organising my Python project](http://stackoverflow.com/questions/391879/organising-my-python-project) and then [calling from a parent file in Python](http://stackoverflow.com/questions/403822/calling-from-a-parent-file-in-python) it's occurring to me that it'll be so much easier to put all my code in one file (data will be read in externally).
I've always thought that this was bad project organisation but it seems to be the easiest way to deal with the problems I'm thinking I will face. Have I simply gotten the wrong end of the stick with file count or have I not seen some great guide on large (for me) projects? | If you are planning to use any kind of SCM then you are going to be screwed. Having one file is a guaranteed way to have lots of collisions and merges that will be painstaking to deal with over time.
Stick to conventions and break apart your files. If nothing more than to save the guy who will one day have to maintain your code... |
Python program to calculate harmonic series | 404,346 | 4 | 2009-01-01T00:55:16Z | 404,425 | 12 | 2009-01-01T02:31:41Z | [
"python",
"math"
] | Does anyone know how to write a program in Python that will calculate the addition of the harmonic series. i.e. 1 + 1/2 +1/3 +1/4... | [@recursive's solution](http://stackoverflow.com/questions/404346/python-program-to-calculate-harmonic-series#404354) is correct for a floating point approximation. If you prefer, you can get the exact answer in Python 3.0 using the fractions module:
```
>>> from fractions import Fraction
>>> def calc_harmonic(n):
... return sum(Fraction(1, d) for d in range(1, n + 1))
...
>>> calc_harmonic(20) # sum of the first 20 terms
Fraction(55835135, 15519504)
```
Note that the number of digits grows quickly so this will require a lot of memory for large n. You could also use a generator to look at the series of partial sums if you wanted to get really fancy. |
Python program to calculate harmonic series | 404,346 | 4 | 2009-01-01T00:55:16Z | 404,843 | 17 | 2009-01-01T10:47:42Z | [
"python",
"math"
] | Does anyone know how to write a program in Python that will calculate the addition of the harmonic series. i.e. 1 + 1/2 +1/3 +1/4... | [@Kiv's answer](http://stackoverflow.com/questions/404346/python-program-to-calculate-harmonic-series#404425) is correct but it is slow for large n if you don't need an infinite precision. It is better to use an [asymptotic formula](http://en.wikipedia.org/wiki/Harmonic_number) in this case:
`H(n) ≈ γ + ln(n) + 1/(2n) - 1/(12n²) + 1/(120n⁴)`, where γ is the Euler-Mascheroni constant:
```
#!/usr/bin/env python
from math import log
def H(n):
"""Returns an approximate value of n-th harmonic number.
http://en.wikipedia.org/wiki/Harmonic_number
"""
# Euler-Mascheroni constant
gamma = 0.57721566490153286060651209008240243104215933593992
return gamma + log(n) + 0.5/n - 1./(12*n**2) + 1./(120*n**4)
```
[@Kiv's answer](http://stackoverflow.com/questions/404346/python-program-to-calculate-harmonic-series#404425) for Python 2.6:
```
from fractions import Fraction
harmonic_number = lambda n: sum(Fraction(1, d) for d in xrange(1, n+1))
```
Example:
```
>>> N = 100
>>> h_exact = harmonic_number(N)
>>> h = H(N)
>>> rel_err = (abs(h - h_exact) / h_exact)
>>> print N, "%r" % h, "%.2g" % rel_err
100 5.1873775176396242 6.8e-16
```
At `N = 100` the relative error is less than `1e-15`. |
Determining application path in a Python EXE generated by pyInstaller | 404,744 | 30 | 2009-01-01T08:50:22Z | 404,750 | 62 | 2009-01-01T08:53:20Z | [
"python",
"executable",
"relative-path",
"pyinstaller"
] | I have an application that resides in a single .py file. I've been able to get pyInstaller to bundle it successfully into an EXE for Windows. The problem is, the application requires a .cfg file that always sits directly beside the application in the same directory.
Normally, I build the path using the following code:
```
import os
config_name = 'myapp.cfg'
config_path = os.path.join(sys.path[0], config_name)
```
However, it seems that sys.path is blank when it's called from an EXE generated by pyInstaller. The same behaviour occurs when you run the Python interactive command line and try to fetch sys.path[0].
Is there a more concrete way of getting the path of the currently running application so that I can find files that are relative to it? | I found a solution. You need to check if the application is running as a script or as a frozen exe:
```
import os
import sys
config_name = 'myapp.cfg'
# determine if application is a script file or frozen exe
if getattr(sys, 'frozen', False):
application_path = os.path.dirname(sys.executable)
elif __file__:
application_path = os.path.dirname(__file__)
config_path = os.path.join(application_path, config_name)
``` |
LBYL vs EAFP in Java? | 404,795 | 39 | 2009-01-01T09:50:56Z | 404,999 | 10 | 2009-01-01T14:30:19Z | [
"java",
"python",
"error-handling",
"idioms"
] | I was recently teaching myself Python and discovered the LBYL/EAFP idioms with regards to error checking before code execution. In Python, it seems the accepted style is EAFP, and it seems to work well with the language.
LBYL (***L***ook **B**efore **Y**ou **L**eap):
```
def safe_divide_1(x, y):
if y == 0:
print "Divide-by-0 attempt detected"
return None
else:
return x/y
```
EAFP (*it's **E**asier to **A**sk **F**orgiveness than **P**ermission*):
```
def safe_divide_2(x, y):
try:
return x/y
except ZeroDivisionError:
print "Divide-by-0 attempt detected"
return None
```
My question is this: I had never even heard of using EAFP as the primary data validation construct, coming from a Java and C++ background. Is EAFP something that is wise to use in Java? Or is there too much overhead from exceptions? I know that there is only overhead when an exception is actually thrown, so I'm unsure as to why the simpler method of EAFP is not used. Is it just preference? | Exceptions are handled more efficiently in Python than in Java, which is at least *partly* why you see that construct in Python. In Java, it's more inefficient (in terms of performance) to use exceptions in that way. |
LBYL vs EAFP in Java? | 404,795 | 39 | 2009-01-01T09:50:56Z | 405,220 | 89 | 2009-01-01T17:52:24Z | [
"java",
"python",
"error-handling",
"idioms"
] | I was recently teaching myself Python and discovered the LBYL/EAFP idioms with regards to error checking before code execution. In Python, it seems the accepted style is EAFP, and it seems to work well with the language.
LBYL (***L***ook **B**efore **Y**ou **L**eap):
```
def safe_divide_1(x, y):
if y == 0:
print "Divide-by-0 attempt detected"
return None
else:
return x/y
```
EAFP (*it's **E**asier to **A**sk **F**orgiveness than **P**ermission*):
```
def safe_divide_2(x, y):
try:
return x/y
except ZeroDivisionError:
print "Divide-by-0 attempt detected"
return None
```
My question is this: I had never even heard of using EAFP as the primary data validation construct, coming from a Java and C++ background. Is EAFP something that is wise to use in Java? Or is there too much overhead from exceptions? I know that there is only overhead when an exception is actually thrown, so I'm unsure as to why the simpler method of EAFP is not used. Is it just preference? | If you are accessing files, EAFP is more reliable than LBYL, because the operations involved in LBYL are not atomic, and the file system might change between the time you look and the time you leap. Actually, the standard name is TOCTOU - Time of Check, Time of Use; bugs caused by inaccurate checking are TOCTOU bugs.
Consider creating a temporary file that must have a unique name. The best way to find out whether the chosen file name exists yet is to try creating it - making sure you use options to ensure that your operation fails if the file does already exist (in POSIX/Unix terms, the O\_EXCL flag to `open()`). If you try to test whether the file already exists (probably using `access()`), then between the time when that says "No" and the time you try to create the file, someone or something else may have created the file.
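A sketch of that EAFP pattern in Python (Python 3 shown, where the failure surfaces as `FileExistsError`; on Python 2 you would catch `OSError` and check for `errno.EEXIST`):

```
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "eafp_demo_unique.tmp")

try:
    # O_EXCL makes creation atomic: the call fails if the file
    # already exists, with no window between check and create.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
except FileExistsError:
    print("file already exists -- someone else won the race")
else:
    os.write(fd, b"temporary data")
    os.close(fd)
    os.remove(path)
```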
Conversely, suppose that you try to read an existing file. Your check that the file exists (LBYL) may say "it is there", but when you actually open it, you find "it is not there".
In both these cases, you have to check the final operation - and the LBYL didn't automatically help.
(If you are messing with SUID or SGID programs, `access()` asks a different question; it may be relevant to LBYL, but the code still has to take into account the possibility of failure.) |
LBYL vs EAFP in Java? | 404,795 | 39 | 2009-01-01T09:50:56Z | 408,305 | 37 | 2009-01-02T23:31:52Z | [
"java",
"python",
"error-handling",
"idioms"
] | I was recently teaching myself Python and discovered the LBYL/EAFP idioms with regards to error checking before code execution. In Python, it seems the accepted style is EAFP, and it seems to work well with the language.
LBYL (***L***ook **B**efore **Y**ou **L**eap):
```
def safe_divide_1(x, y):
if y == 0:
print "Divide-by-0 attempt detected"
return None
else:
return x/y
```
EAFP (*it's **E**asier to **A**sk **F**orgiveness than **P**ermission*):
```
def safe_divide_2(x, y):
try:
return x/y
except ZeroDivisionError:
print "Divide-by-0 attempt detected"
return None
```
My question is this: I had never even heard of using EAFP as the primary data validation construct, coming from a Java and C++ background. Is EAFP something that is wise to use in Java? Or is there too much overhead from exceptions? I know that there is only overhead when an exception is actually thrown, so I'm unsure as to why the simpler method of EAFP is not used. Is it just preference? | In addition to the relative cost of exceptions in Python and Java, keep in mind that there's a difference in philosophy / attitude between them. Java tries to be very strict about types (and everything else), requiring explicit, detailed declarations of class/method signatures. It assumes that you should know, at any point, exactly what type of object you're using and what it is capable of doing. In contrast, Python's "duck typing" means that you don't know for sure (and shouldn't care) what the manifest type of an object is, you only need to care that it quacks when you ask it to. In this kind of permissive environment, the only sane attitude is to presume that things will work, but be ready to deal with the consequences if they don't. Java's natural restrictiveness doesn't fit well with such a casual approach. (This is not intended to disparage either approach or language, but rather to say that these attitudes are part of each language's idiom, and copying idioms between different languages can often lead to awkwardness and poor communication...) |
Line reading chokes on 0x1A | 405,058 | 12 | 2009-01-01T15:29:35Z | 405,061 | 26 | 2009-01-01T15:31:38Z | [
"python",
"windows",
"binary-data"
] | I have the following file:
```
abcde
kwakwa
<0x1A>
line3
linllll
```
Where `<0x1A>` represents a byte with the hex value of 0x1A. When attempting to read this file in Python as:
```
for line in open('t.txt'):
print line,
```
It only reads the first two lines, and exits the loop.
The solution seems to be to open the file in binary (or universal newline) mode - 'rb' or 'rU'. Can you explain this behavior? | 0x1A is Ctrl-Z, and DOS historically used that as an end-of-file marker. For example, try using a command prompt and `type`ing your file. It will only display the content up to the Ctrl-Z.
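As an aside, a quick sketch showing that binary mode sees every byte (the text-mode truncation only occurs with the Windows CRT, so text-mode results differ by platform):

```
import os
import tempfile

# Write a file containing a Ctrl-Z (0x1A) byte in the middle.
path = os.path.join(tempfile.gettempdir(), "ctrlz_demo.txt")
with open(path, "wb") as f:
    f.write(b"abcde\nkwakwa\n\x1aline3\nlinllll\n")

# Binary mode sees every byte, including those after the Ctrl-Z.
with open(path, "rb") as f:
    data = f.read()
print(b"line3" in data)  # True -- nothing is truncated
os.remove(path)
```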
Python uses the Windows CRT function \_wfopen, which implements the "Ctrl-Z is EOF" semantics. |
Line reading chokes on 0x1A | 405,058 | 12 | 2009-01-01T15:29:35Z | 405,169 | 8 | 2009-01-01T17:13:22Z | [
"python",
"windows",
"binary-data"
] | I have the following file:
```
abcde
kwakwa
<0x1A>
line3
linllll
```
Where `<0x1A>` represents a byte with the hex value of 0x1A. When attempting to read this file in Python as:
```
for line in open('t.txt'):
print line,
```
It only reads the first two lines, and exits the loop.
The solution seems to be to open the file in binary (or universal newline) mode - 'rb' or 'rU'. Can you explain this behavior? | [Ned](http://stackoverflow.com/questions/405058/line-reading-chokes-on-0x1a#405061) is of course correct.
If your curiosity runs a little deeper, the root cause is backwards compatibility taken to an extreme. Windows is compatible with DOS, which used Ctrl-Z as an optional end of file marker for text files. What you might not know is that DOS was compatible with CP/M, which was popular on small computers before the PC. CP/M's file system didn't keep track of file sizes down to the byte level, it only kept track by the number of floppy disk sectors. If your file wasn't an exact multiple of 128 bytes, you needed a way to mark the end of the text. [This Wikipedia article](http://en.wikipedia.org/wiki/Ascii) implies that the selection of Ctrl-Z was based on an even older convention used by DEC. |
Please advise on Ruby vs Python, for someone who likes LISP a lot | 405,165 | 20 | 2009-01-01T17:12:12Z | 405,188 | 24 | 2009-01-01T17:24:30Z | [
"python",
"ruby",
"lisp"
] | I am a C++ developer, slowly getting into web development. I like LISP a lot but don't like AllegroCL and web-frameworks available for LISP. I am looking for more freedom and ability to do cool hacks on language level. I don't consider tabs as a crime against nature.
Which one is closer to LISP: Python or Ruby?
I can't seem to be able to choose from Python and Ruby: they seem very similar but apparently Ruby is more functional and object-oriented, which are good things, while Python is more like Perl: a simple scripting language. Do I have the right impression?
PS - This might seem like a flame bait but it's not really, I'm just trying not to go crazy from OCD about switching from RoR to Python/Django and back. | I'd go with Ruby. It's got all kinds of metaprogramming and duck punching hacks that make it really easy to extend. Features like blocks may not seem like much at first, but they make for some really clean syntax if you use them right. Open classes can be debugging hell if you screw them up, but if you're a responsible programmer, you can do things like `2.days.from_now` (example from Rails) really easily (Python can do this too, I think, but with a bit more pain).
PS: Check out ["Why Ruby is an acceptable LISP"](http://www.randomhacks.net/articles/2005/12/03/why-ruby-is-an-acceptable-lisp). |
Please advise on Ruby vs Python, for someone who likes LISP a lot | 405,165 | 20 | 2009-01-01T17:12:12Z | 405,206 | 31 | 2009-01-01T17:40:13Z | [
"python",
"ruby",
"lisp"
] | I am a C++ developer, slowly getting into web development. I like LISP a lot but don't like AllegroCL and web-frameworks available for LISP. I am looking for more freedom and ability to do cool hacks on language level. I don't consider tabs as a crime against nature.
Which one is closer to LISP: Python or Ruby?
I can't seem to be able to choose from Python and Ruby: they seem very similar but apparently Ruby is more functional and object-oriented, which are good things, while Python is more like Perl: a simple scripting language. Do I have the right impression?
PS - This might seem like a flame bait but it's not really, I'm just trying not to go crazy from OCD about switching from RoR to Python/Django and back. | [Peter Norvig](http://norvig.com), [a famous and great lisper](http://norvig.com/paip.html), converted to Python. He wrote the article [Python for Lisp Programmers](http://norvig.com/python-lisp.html), which you might find interesting with its detailed comparison of features.
Python looks like executable pseudo-code. It's easy to pick up, and often using your intuition will just work. Python allows you to easily put your ideas into code.
Now, for web development, Python might seem like a more scattered option than Ruby, with the plethora of Python web frameworks available. Still, in general, Python is a very nice and useful language to know. As Ruby and Python's niches overlap, I agree with Kiv that it is partly a matter of personal taste which one you pick. |
Please advise on Ruby vs Python, for someone who likes LISP a lot | 405,165 | 20 | 2009-01-01T17:12:12Z | 405,228 | 12 | 2009-01-01T18:00:04Z | [
"python",
"ruby",
"lisp"
] | I am a C++ developer, slowly getting into web development. I like LISP a lot but don't like AllegroCL and web-frameworks available for LISP. I am looking for more freedom and ability to do cool hacks on language level. I don't consider tabs as a crime against nature.
Which one is closer to LISP: Python or Ruby?
I can't seem to be able to choose from Python and Ruby: they seem very similar but apparently Ruby is more functional and object-oriented, which are good things, while Python is more like Perl: a simple scripting language. Do I have the right impression?
PS - This might seem like a flame bait but it's not really, I'm just trying not to go crazy from OCD about switching from RoR to Python/Django and back. | Speaking as a "Rubyist", I'd agree with Kiv. The two languages both grant a nice amount of leeway when it comes to programming paradigms, but are also have benefits/shortcomings. I think that the compromises you make either way are a lot about your own programming style and taste.
Personally, I think Ruby can read more like pseudo-code than Python. First, Python has significant whitespace, which, while elegant in the eyes of many, doesn't tend to enter the equation when writing pseudo-code. Also, Ruby's syntax is quite flexible. That flexibility causes a lot of quirks that can confuse, but also allows code that's quite expressive and pretty to look at.
Finally, I'd really say that Ruby feels more Perl-ish to me. That's partly because I'm far more comfortable with it, so I can hack out scripts rather quickly. A lot of Ruby's syntax was borrowed from Perl though, and I haven't seen much Python code that feels similar (though again, I have little experience with Python).
Depending on the approach to web programming you'd like to take, I think that the types of web frameworks available in each language could perhaps be a factor in deciding as well. I'd say try them both. You can get a working knowledge of each of them in an afternoon, and while you won't be writing awesome Ruby or Python, you can probably establish a feel for each, and decide which you like more.
**Update:** I think your question should actually be two separate discussions: one with Ruby, one with Python. The comparisons are less important because you start debating the merits of the differences, as opposed to which language will work better for you. If you have questions about Ruby, I'd be more than happy to answer as best I can. |
Please advise on Ruby vs Python, for someone who likes LISP a lot | 405,165 | 20 | 2009-01-01T17:12:12Z | 405,310 | 8 | 2009-01-01T19:01:50Z | [
"python",
"ruby",
"lisp"
] | I am a C++ developer, slowly getting into web development. I like LISP a lot but don't like AllegroCL and web-frameworks available for LISP. I am looking for more freedom and ability to do cool hacks on language level. I don't consider tabs as a crime against nature.
Which one is closer to LISP: Python or Ruby?
I can't seem to be able to choose from Python and Ruby: they seem very similar but apparently Ruby is more functional and object-oriented, which are good things, while Python is more like Perl: a simple scripting language. Do I have the right impression?
PS - This might seem like a flame bait but it's not really, I'm just trying not to go crazy from OCD about switching from RoR to Python/Django and back. | Both Ruby and Python are fairly distant from the Lisp traditions of immutable data, programs as data, and macros. But Ruby is very nearly a clone of Smalltalk (and I hope will grow more like Smalltalk as the Perlish cruft is deprecated), and Smalltalk, like Lisp, is a language that takes one idea to extremes. Based on your desire to do **cool hacks on the language level** I'd go with Ruby, as it inherits a lot of the metaprogramming mindset from Smalltalk, and that mindset *is* connected to the Lisp tradition. |
Please advise on Ruby vs Python, for someone who likes LISP a lot | 405,165 | 20 | 2009-01-01T17:12:12Z | 405,342 | 17 | 2009-01-01T19:23:00Z | [
"python",
"ruby",
"lisp"
] | I am a C++ developer, slowly getting into web development. I like LISP a lot but don't like AllegroCL and web-frameworks available for LISP. I am looking for more freedom and ability to do cool hacks on language level. I don't consider tabs as a crime against nature.
Which one is closer to LISP: Python or Ruby?
I can't seem to be able to choose from Python and Ruby: they seem very similar but apparently Ruby is more functional and object-oriented, which are good things, while Python is more like Perl: a simple scripting language. Do I have the right impression?
PS - This might seem like a flame bait but it's not really, I'm just trying not to go crazy from OCD about switching from RoR to Python/Django and back. | **Devils Advocate: Who Cares?**
They are both good systems and have an ecosystem of good web frameworks and active developer communities. I'm guessing that you're framing your decision based on the wrong criteria. The question sounds like you're fretting about whether you will hit implementation problems or other difficulties by choosing one over the other. **Don't.**
This is similar to Java/.Net decisions. There may be compelling reasons in a specific instance, but soft factors like the architect's familiarity with the platform are a much stronger predictor of project success.
I will admit that I've used Python much more than Ruby, but I wouldn't say that I have any great preference between the two apart from familiarity. I've used Python off and on since about 1998 and I like the Smalltalkish-ness of Ruby as I used Smalltalk briefly about 15 years ago. They both do similar things slightly differently.
I would like certain features from Ruby (or Smalltalk for that matter) but Python doesn't work that way. Instead, it has other features and language idioms are slightly different from Ruby or Smalltalk. [Several](http://stackoverflow.com/questions/405165/please-advise-on-ruby-vs-python-for-someone-who-likes-lisp-a-lot#405206) of the [other](http://stackoverflow.com/questions/405165/please-advise-on-ruby-vs-python-for-someone-who-likes-lisp-a-lot#405188) [posters](http://stackoverflow.com/questions/405165/please-advise-on-ruby-vs-python-for-someone-who-likes-lisp-a-lot#405312) have linked out to articles that compare the two.
If you're worrying about Rails vs. Django, that suggests you're looking for a platform for web applications. Both languages have good tool support and an active developer community. Django seems to be the winner of the Python web framework melee and Rails seems to be in the process of 'crossing the chasm' and bringing Ruby along with it. Both are reasonably mature systems and have been demonstrated to work well for respectable traffic volumes.
**ProTip:** The presence of religious wars is a good indicator that neither side has a compelling argument.
So, I'm going to play devil's advocate and say that worrying about the choice is pointless. The languages have advantages and disadvantages with respect to each other but nothing that could be viewed as compelling in the general case. Fretting about the detailed merits of one platform or the other is framing the decision wrongly.
Pick one and use it. You will be able to build systems effectively with either. |
Many instances of a class | 405,282 | 3 | 2009-01-01T18:47:32Z | 405,292 | 7 | 2009-01-01T18:52:03Z | [
"python",
"class",
"object",
"multiple-instances"
] | I am trying to write a life simulation in python with a variety of animals. It is impossible to name each instance of the classes I am going to use because I have no way of knowing how many there will be.
So, my question:
How can I automatically give a name to an object?
I was thinking of creating a "Herd" class which could be all the animals of that type alive at the same time... | Hm, well you normally just stuff all those instances in a list and then iterate over that list if you want to do something with them. If you want to automatically keep track of each instance created you can also make the adding to the list implicit in the class' constructor or create a factory method that keeps track of the created instances. |
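A minimal sketch of the implicit-tracking idea from the answer, using a hypothetical `Animal` class (the class and attribute names are illustrative, not from the question):

```python
class Animal:
    herd = []  # class-level registry shared by all instances

    def __init__(self, species):
        self.species = species
        Animal.herd.append(self)  # the constructor registers each new instance

# No per-instance names needed -- iterate over the registry instead
for _ in range(3):
    Animal("wolf")

print(len(Animal.herd))  # 3
```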
Mysql connection pooling question: is it worth it? | 405,352 | 15 | 2009-01-01T19:32:29Z | 405,914 | 10 | 2009-01-02T02:41:50Z | [
"python",
"mysql",
"sqlalchemy",
"connection-pooling"
] | I recall hearing that the connection process in mysql was designed to be very fast compared to other RDBMSes, and that therefore using [a library that provides connection pooling](http://www.sqlalchemy.org/) (SQLAlchemy) won't actually help you that much if you enable the connection pool.
Does anyone have any experience with this?
I'm leery of enabling it because of the possibility that if some code does something stateful to a db connection and (perhaps mistakenly) doesn't clean up after itself, that state which would normally get cleaned up upon closing the connection will instead get propagated to subsequent code that gets a recycled connection. | There's no need to worry about residual state on a connection when using SQLA's connection pool, unless your application is changing connectionwide options like transaction isolation levels (which generally is not the case). SQLA's connection pool issues a connection.rollback() on the connection when its checked back in, so that any transactional state or locks are cleared.
It is possible that MySQL's connection time is pretty fast, especially if you're connecting over unix sockets on the same machine. If you do use a connection pool, you also want to ensure that connections are recycled after some period of time as MySQL's client library will shut down connections that are idle for more than 8 hours automatically (in SQLAlchemy this is the pool\_recycle option).
You can quickly do some benching of connection pool vs. non with a SQLA application by changing the pool implementation from the default of QueuePool to NullPool, which is a pool implementation that doesn't actually pool anything - it connects and disconnects for real when the proxied connection is acquired and later closed. |
Python update object from dictionary | 405,489 | 11 | 2009-01-01T21:12:51Z | 405,492 | 21 | 2009-01-01T21:14:32Z | [
"python",
"iterable-unpacking"
] | Is there a built-in function/operator I could use to unpack values from a dictionary and assign it into instance variables?
This is what I intend to do:
```
c = MyClass()
c.foo = 123
c.bar = 123
# c.foo == 123 and c.bar == 123
d = {'bar': 456}
c.update(d)
# c.foo == 123 and c.bar == 456
```
Something akin to dictionary `update()` which load values from another dictionary but for plain object/class instance? | Have you tried
```
f.__dict__.update( b )
```
? |
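A quick sketch of that one-liner against the example from the question (class name `MyClass` is the question's; this assumes plain attributes stored in the instance `__dict__`):

```python
class MyClass(object):
    pass

c = MyClass()
c.foo = 123
c.bar = 123

d = {'bar': 456}
c.__dict__.update(d)  # merge the dict straight into the instance attributes

print(c.foo, c.bar)  # 123 456
```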
Python update object from dictionary | 405,489 | 11 | 2009-01-01T21:12:51Z | 408,016 | 24 | 2009-01-02T21:02:16Z | [
"python",
"iterable-unpacking"
] | Is there a built-in function/operator I could use to unpack values from a dictionary and assign it into instance variables?
This is what I intend to do:
```
c = MyClass()
c.foo = 123
c.bar = 123
# c.foo == 123 and c.bar == 123
d = {'bar': 456}
c.update(d)
# c.foo == 123 and c.bar == 456
```
Something akin to dictionary `update()` which load values from another dictionary but for plain object/class instance? | there is also another way of doing it by looping through the items in d. this doesn't have the same assuption that they will get stored in `c.__dict__` which isn't always true.
```
d = {'bar': 456}
for key, value in d.items():
    setattr(c, key, value)
```
Or you could write an `update` method as part of `MyClass` so that `c.update(d)` works as you expected it to.
```
def update(self, newdata):
    for key, value in newdata.items():
        setattr(self, key, value)
```
Check out the help for `setattr`:
> ```
> setattr(...)
> setattr(object, name, value)
> Set a named attribute on an object; setattr(x, 'y', v) is equivalent to
> ''x.y = v''.
> ``` |
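A sketch exercising the `setattr`-based `update` method on the question's own example (note that iterating a `dict` needs `.items()` to yield key/value pairs; `MyClass` is the question's name):

```python
class MyClass(object):
    def update(self, newdata):
        for key, value in newdata.items():  # .items() yields (key, value) pairs
            setattr(self, key, value)

c = MyClass()
c.foo = 123
c.update({'bar': 456})
print(c.foo, c.bar)  # 123 456
```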
if all in list == something | 405,516 | 13 | 2009-01-01T21:41:22Z | 405,519 | 13 | 2009-01-01T21:43:29Z | [
"python",
"python-2.6"
] | Using python 2.6
Is there a way to check if all the items of a sequence equal a given value, in one statement?
```
[pseudocode]
my_sequence = (2,5,7,82,35)
if all the values in (type(i) for i in my_sequence) == int:
    do()
```
instead of, say:
```
my_sequence = (2,5,7,82,35)
all_int = True
for i in my_sequence:
    if type(i) is not int:
        all_int = False
        break
if all_int:
    do()
``` | Do you mean
```
all( type(i) is int for i in my_list )
```
?
Edit: Changed to `is`. Slightly faster. |
if all in list == something | 405,516 | 13 | 2009-01-01T21:41:22Z | 405,520 | 34 | 2009-01-01T21:45:31Z | [
"python",
"python-2.6"
] | Using python 2.6
Is there a way to check if all the items of a sequence equal a given value, in one statement?
```
[pseudocode]
my_sequence = (2,5,7,82,35)
if all the values in (type(i) for i in my_sequence) == int:
    do()
```
instead of, say:
```
my_sequence = (2,5,7,82,35)
all_int = True
for i in my_sequence:
    if type(i) is not int:
        all_int = False
        break
if all_int:
    do()
``` | Use:
```
all( type(i) is int for i in lst )
```
Example:
```
In [1]: lst = range(10)
In [2]: all( type(i) is int for i in lst )
Out[2]: True
In [3]: lst.append('steve')
In [4]: all( type(i) is int for i in lst )
Out[4]: False
```
[Edit]. Made cleaner as per comments. |
What is a cyclic data structure good for? | 405,540 | 13 | 2009-01-01T22:06:09Z | 405,564 | 18 | 2009-01-01T22:17:30Z | [
"python",
"data-structures",
"recursion",
"cyclic-reference"
] | I was just reading through ["Learning Python" by Mark Lutz and came across this code sample](http://books.google.com/books?id=1HxWGezDZcgC&lpg=PP1&dq=learning%20python&pg=PA254#v=onepage&q=cyclic&f=false):
```
>>> L = ['grail']
>>> L.append(L)
>>> L
['grail', [...]]
```
It was identified as a cyclic data structure.
So I was wondering, and here is my question:
## **What is a 'cyclic data structure' used for in real life programming?**
There seems to be a little confusion, which I think stems from the very brief code sample... here are a few more lines using the same object L:
```
>>> L[0]
'grail'
>>> L[1][0]
'grail'
>>> L[1][1][0]
'grail'
``` | Lots of things. Circular buffer, for example: you have some collection of data with a front and a back, but an arbitrary number of nodes, and the "next" item from the last should take you back to the first.
Graph structures are often cyclic; acyclicity is a special case. Consider, for example, a graph containing all the cities and roads in a traveling salesman problem.
---
Okay, here's a particular example for you. I set up a collection of towns here in Colorado:
```
V=["Boulder", "Denver", "Colorado Springs", "Pueblo", "Limon"]
```
I then set up pairs of cities where there is a road connecting them.
```
E = [["Boulder", "Denver"],
     ["Denver", "Colorado Springs"],
     ["Colorado Springs", "Pueblo"],
     ["Denver", "Limon"],
     ["Colorado Springs", "Limon"]]
```
This has a bunch of cycles. For example, you can drive from Colorado Springs, to Limon, to Denver, and back to Colorado Springs.
If you create a data structure that contains all the cities in V and all the roads in E, that's a *graph* data structure. This graph would have cycles. |
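The edge list above can be turned into an adjacency map, and the cycle the answer mentions falls straight out of it (a sketch using the same `V` and `E`):

```python
V = ["Boulder", "Denver", "Colorado Springs", "Pueblo", "Limon"]
E = [["Boulder", "Denver"],
     ["Denver", "Colorado Springs"],
     ["Colorado Springs", "Pueblo"],
     ["Denver", "Limon"],
     ["Colorado Springs", "Limon"]]

# Build an undirected adjacency map: city -> neighbouring cities
adj = {city: [] for city in V}
for a, b in E:
    adj[a].append(b)
    adj[b].append(a)

# The cycle from the answer: Colorado Springs -> Limon -> Denver -> Colorado Springs
route = ["Colorado Springs", "Limon", "Denver", "Colorado Springs"]
print(all(b in adj[a] for a, b in zip(route, route[1:])))  # True
```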
Function and class documentation best practices for Python | 405,582 | 68 | 2009-01-01T22:30:39Z | 405,710 | 45 | 2009-01-01T23:54:52Z | [
"python",
"documentation",
"coding-style"
] | I am looking for best practices for function/class/module documentation, i.e. comments in the code itself. Ideally I would like a comment template which is both human readable and consumable by Python documentation utilities.
I have read the [Python documentation on docstrings](http://docs.python.org/tutorial/controlflow.html).
I understand this part:
> The first line should always be a short, concise summary of the object's purpose. For brevity, it should not explicitly state the object's name or type, since these are available by other means (except if the name happens to be a verb describing a function's operation). This line should begin with a capital letter and end with a period.
>
> If there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description.
This sentence needs a bit more explanation:
> The following lines should be one or more paragraphs describing the object's calling conventions, its side effects, etc.
Specifically, I am looking for examples of well-commented functions and classes. | You should [use reStructuredText](http://www.python.org/dev/peps/pep-0287/) and check out the [Sphinx markup constructs](http://sphinx.pocoo.org/markup/index.html). All the cool kids are doing it.
You should [follow docstring conventions](http://www.python.org/dev/peps/pep-0257/). i.e.
> It prescribes the function or method's
> effect as a command ("Do this",
> "Return that").
You should avoid repeating yourself unnecessarily or explaining the eminently obvious. Example of what not to do:
```
def do_things(verbose=False):
    """Do some things.

    :param verbose: Be verbose (give additional messages).
    """
    raise NotImplementedError
```
If you wanted to describe something non-obvious it would be a different story; for example, that verbose causes messages to occur on `stdout` or a `logging` stream. This is not specific to Python, but follows from more hand-wavy ideals such as [self-documenting code](http://en.wikipedia.org/wiki/Self-documenting) and [code/documentation DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
Try to avoid mentioning specific types if possible (abstract or interface-like types are generally okay). Mentioning *protocols* is typically more helpful from a duck typing perspective (i.e. "iterable" instead of `set`, or "mutable ordered sequence" instead of `list`). I've seen some code that is very literal and heavy WRT the `:rtype:` and the `:type param:` function-level documentation, which I've found to be at odds with the duck typing mentality. |
Function and class documentation best practices for Python | 405,582 | 68 | 2009-01-01T22:30:39Z | 406,346 | 8 | 2009-01-02T09:05:02Z | [
"python",
"documentation",
"coding-style"
] | I am looking for best practices for function/class/module documentation, i.e. comments in the code itself. Ideally I would like a comment template which is both human readable and consumable by Python documentation utilities.
I have read the [Python documentation on docstrings](http://docs.python.org/tutorial/controlflow.html).
I understand this part:
> The first line should always be a short, concise summary of the object's purpose. For brevity, it should not explicitly state the object's name or type, since these are available by other means (except if the name happens to be a verb describing a function's operation). This line should begin with a capital letter and end with a period.
>
> If there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description.
This sentence needs a bit more explanation:
> The following lines should be one or more paragraphs describing the object's calling conventions, its side effects, etc.
Specifically, I am looking for examples of well-commented functions and classes. | As Emji said, Django is a good project to follow for clear, consistent style guides.
For example, their [Contribute to Django style guide](http://docs.djangoproject.com/en/dev/internals/contributing/?from=olddocs#coding-style) even goes as far as describing how they'd like to see documentation. Specifically they mention:
*In docstrings, use âaction wordsâ such as:*
```
def foo():
    """
    Calculates something and returns the result.
    """
    pass
```
*Here's an example of what not to do:*
```
def foo():
    """
    Calculate something and return the result.
    """
    pass
``` |
Accepting File Argument in Python (from Send To context menu) | 405,612 | 3 | 2009-01-01T22:50:42Z | 405,629 | 7 | 2009-01-01T22:57:47Z | [
"python",
"urllib2",
"contextmenu",
"sendto"
] | I'm going to start off by noting that I have next to no Python experience.

As you may know, by simply dropping a shortcut in the Send To folder on your Windows PC, you can allow a program to take a file as an argument.
How would I write a python program that takes this file as an argument?
And, as a bonus if anyone gets a chance --
How would I integrate that with urllib2 to POST the file to a PHP script on my server?
Thanks in advance.
Edit-- also, how do I make something show up in the Sendto menu? I was under the impression that you just drop a shortcut into the SendTo folder and it automatically adds an option in the menu...
Never mind. I figured out what I was doing wrong :) | 1. Find out what the dragged file was: <http://docs.python.org/library/sys.html#sys.argv>
2. Open it: <http://docs.python.org/library/functions.html#open>
3. Read it in: <http://docs.python.org/library/stdtypes.html#file.read>
4. Post it: <http://docs.python.org/library/urllib2.html#urllib2.urlopen> |
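Stitching those four steps together — a sketch only: the upload URL and the `file` form field are hypothetical, and it uses `urllib.request`/`urllib.parse` (the Python 3 successors to the `urllib2` linked above; in the Python 2 of the era you'd call `urllib2.Request` and `urllib2.urlopen` instead):

```python
import sys
import urllib.parse
import urllib.request

def build_upload(path, url):
    """Read the dropped file and build a POST request carrying its contents."""
    with open(path, "rb") as f:          # steps 2 and 3: open and read the file
        payload = f.read()
    data = urllib.parse.urlencode({"file": payload}).encode()
    return urllib.request.Request(url, data=data)  # passing data= makes it a POST

# Step 1: the Send To shortcut hands the dropped file's path to the script, so:
#     request = build_upload(sys.argv[1], "http://example.com/upload.php")
#     urllib.request.urlopen(request)    # step 4: send it
```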
Passing arguments with wildcards to a Python script | 405,652 | 12 | 2009-01-01T23:08:01Z | 405,662 | 20 | 2009-01-01T23:15:46Z | [
"python",
"windows",
"command-line",
"arguments"
] | I want to do something like this:
```
c:\data\> python myscript.py *.csv
```
and pass all of the .csv files in the directory to my python script (such that `sys.argv` contains `["file1.csv", "file2.csv"]`, etc.)
But `sys.argv` just receives `["*.csv"]` indicating that the wildcard was not expanded, so this doesn't work.
I feel like there is a simple way to do this, but can't find it on Google. Any ideas? | You can use the glob module, that way you won't depend on the behavior of a particular shell (well, you still depend on the shell not expanding the arguments, but at least you can get this to happen in Unix by escaping the wildcards :-) ).
```
from glob import glob
filelist = glob('*.csv')  # or glob(sys.argv[1]) to expand the pattern passed on the command line
``` |
Passing arguments with wildcards to a Python script | 405,652 | 12 | 2009-01-01T23:08:01Z | 405,667 | 11 | 2009-01-01T23:19:50Z | [
"python",
"windows",
"command-line",
"arguments"
] | I want to do something like this:
```
c:\data\> python myscript.py *.csv
```
and pass all of the .csv files in the directory to my python script (such that `sys.argv` contains `["file1.csv", "file2.csv"]`, etc.)
But `sys.argv` just receives `["*.csv"]` indicating that the wildcard was not expanded, so this doesn't work.
I feel like there is a simple way to do this, but can't find it on Google. Any ideas? | In Unix, the shell expands wildcards, so programs get the expanded list of filenames. Windows doesn't do this: the shell passes the wildcards directly to the program, which has to expand them itself.
Vinko is right: the glob module does the job:
```
import glob, sys
for arg in glob.glob(sys.argv[1]):
    print "Arg:", arg
``` |
Floating Point Limitations | 406,361 | 4 | 2009-01-02T09:17:26Z | 406,367 | 18 | 2009-01-02T09:22:15Z | [
"python",
"floating-point",
"precision",
"floating-accuracy"
] | My code:
```
a = '2.3'
```
I wanted to display `a` as a floating point value.
Since `a` is a string, I tried:
```
float(a)
```
The result I got was :
```
2.2999999999999998
```
I want a solution for this problem. Please, kindly help me.
I was following [this tutorial](http://docs.python.org/tutorial/floatingpoint.html). | I think it reflects more on your understanding of floating point types than on Python. See [my article about floating point numbers](http://csharpindepth.com/Articles/General/FloatingPoint.aspx) (.NET-based, but still relevant) for the reasons behind this "inaccuracy". If you need to keep the exact decimal representation, you should use the [decimal module](http://docs.python.org/library/decimal.html). |
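A short sketch contrasting the two, using the question's `'2.3'`. One assumption worth flagging: on Python 2.7+/3.1+ the repr algorithm prints the shortest string that round-trips, so `float('2.3')` displays as `2.3` even though the stored binary value is still inexact — the `%.17g` format exposes it:

```python
from decimal import Decimal

a = '2.3'

f = float(a)              # binary float: the nearest representable value to 2.3
print('%.17g' % f)        # 2.2999999999999998 -- the inexactness the question saw

d = Decimal(a)            # exact decimal representation
print(d)                  # 2.3
print(d + Decimal('0.1')) # 2.4, with no binary rounding error
```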
Why does Ruby have Rails while Python has no central framework? | 406,907 | 8 | 2009-01-02T14:26:57Z | 406,925 | 7 | 2009-01-02T14:33:24Z | [
"python",
"ruby-on-rails",
"ruby",
"frameworks",
"history"
] | This is a(n) historical question, not a comparison-between-languages question:
[This article from 2005](http://tomayko.com/writings/no-rails-for-python) talks about the lack of a single, central framework for Python. For Ruby, this framework is clearly Rails. **Why, historically speaking, did this happen for Ruby but not for Python?** (or did it happen, and that framework is Django?)
Also, the hypothetical questions: **would Python be more popular if it had one, good framework? Would Ruby be less popular if it had no central framework?**
**[Please avoid discussions of whether Ruby or Python is better, which is just too open-ended to answer.]**
**Edit:** Though I thought this is obvious, I'm not saying that other frameworks do not exist for Ruby, but rather that the big one **in terms of popularity** is Rails. Also, I should mention that I'm not saying that frameworks for Python are not as good (or better than) Rails. Every framework has its pros and cons, but Rails seems to, as Ben Blank says in the one of the comments below, have surpassed Ruby in terms of popularity. There are no examples of that on the Python side. WHY? That's the question. | I don't think it's right to characterise Rails as 'the' 'single' 'central' Ruby framework.
Other frameworks for Ruby include Merb, Camping and Ramaze.
... which sort of invalidates the question. |
Why does Ruby have Rails while Python has no central framework? | 406,907 | 8 | 2009-01-02T14:26:57Z | 407,003 | 35 | 2009-01-02T15:03:40Z | [
"python",
"ruby-on-rails",
"ruby",
"frameworks",
"history"
] | This is a(n) historical question, not a comparison-between-languages question:
[This article from 2005](http://tomayko.com/writings/no-rails-for-python) talks about the lack of a single, central framework for Python. For Ruby, this framework is clearly Rails. **Why, historically speaking, did this happen for Ruby but not for Python?** (or did it happen, and that framework is Django?)
Also, the hypothetical questions: **would Python be more popular if it had one, good framework? Would Ruby be less popular if it had no central framework?**
**[Please avoid discussions of whether Ruby or Python is better, which is just too open-ended to answer.]**
**Edit:** Though I thought this is obvious, I'm not saying that other frameworks do not exist for Ruby, but rather that the big one **in terms of popularity** is Rails. Also, I should mention that I'm not saying that frameworks for Python are not as good (or better than) Rails. Every framework has its pros and cons, but Rails seems to, as Ben Blank says in the one of the comments below, have surpassed Ruby in terms of popularity. There are no examples of that on the Python side. WHY? That's the question. | As I see it, Rails put Ruby on the map. The simple fact is that before Rails, Ruby was a minor esoteric language, with very little adoption. Ruby owes its success to Rails. As such, Rails has a central place in the Ruby ecosystem. As slim points out, there are other web frameworks, but it's going to be very difficult to overtake Rails as the leader.
Python on the other hand, had a very different adoption curve. Before Rails, Python was much more widely used than Ruby, and so had a number of competing web frameworks, each slowly building their constituencies. Django has done a good job consolidating support, and becoming the leader in the Python web framework world, but it will never be the One True Framework simply because of the way the community developed. |
Why does Ruby have Rails while Python has no central framework? | 406,907 | 8 | 2009-01-02T14:26:57Z | 408,146 | 7 | 2009-01-02T22:08:20Z | [
"python",
"ruby-on-rails",
"ruby",
"frameworks",
"history"
] | This is a(n) historical question, not a comparison-between-languages question:
[This article from 2005](http://tomayko.com/writings/no-rails-for-python) talks about the lack of a single, central framework for Python. For Ruby, this framework is clearly Rails. **Why, historically speaking, did this happen for Ruby but not for Python?** (or did it happen, and that framework is Django?)
Also, the hypothetical questions: **would Python be more popular if it had one, good framework? Would Ruby be less popular if it had no central framework?**
**[Please avoid discussions of whether Ruby or Python is better, which is just too open-ended to answer.]**
**Edit:** Though I thought this is obvious, I'm not saying that other frameworks do not exist for Ruby, but rather that the big one **in terms of popularity** is Rails. Also, I should mention that I'm not saying that frameworks for Python are not as good (or better than) Rails. Every framework has its pros and cons, but Rails seems to, as Ben Blank says in the one of the comments below, have surpassed Ruby in terms of popularity. There are no examples of that on the Python side. WHY? That's the question. | The real technical answer is that there are three major approaches to web-development in Python: one is CGI-based, where the application is built just like an old one-off Perl application to run through CGI or FastCGI, e.g. [Trac](http://trac.edgewall.org/); then there is [Zope](http://zope.org/), which is a bizarro overengineered framework with its own DB concept, a strange misguided through-the-web software development concept, etc. (but [Plone](http://plone.org) is still quite popular); and then there is Django (and [Turbogears](http://turbogears.org/), etc.), which is guided by the same just-the-tools-needed philosophy as Rails (it can be argued who got there first or who did it better). A lot of people would probably agree that the Django/Rails/[CakePHP](http://cakephp.org) approach is better than the older approaches, but as the older language Python has a lot more legacy frameworks that are still trying to evolve and stay relevant. These frameworks will hang on because there is already developer buy-in for them. For example, in hindsight many people would probably say that Zope (especially ZODB) was a terrible mistake, but Zope 3 is much better than Zope 2, and there are already whole companies built around Zope technologies. |
How to intercept special (alt / ctrl) key press? | 406,933 | 8 | 2009-01-02T14:34:45Z | 407,022 | 10 | 2009-01-02T15:09:56Z | [
"python",
"curses"
] | How can I catch key combinations like `ALT`+`K` or `CTRL`+`ALT`+`H` in python `curses`? | A terminal converts the control key in combination with a letter key to a normal ASCII code. This can be read from the [getch](http://www.opengroup.org/onlinepubs/007908799/xcurses/wgetch.html)() function like any other key press.
```
CTRL-A: getch() returns 1
CTRL-B: getch() returns 2
...
CTRL-Z: getch() returns 26
```
Also, the [keypad](http://www.mkssoftware.com/docs/man3/curs_inopts.3.asp)() function must be called to enable other special function keys (e.g. left arrow, F1, home, etc).
I don't believe there is a portable way to receive meta-keys (e.g. ALT-H) through a terminal. Some send an escape (0x1B) followed by the modified key while others set the high-bit and only send one byte. |
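The CTRL mapping described above can be checked without a terminal at all — the terminal simply keeps the low five bits of the letter's ASCII code. The `ctrl` helper below is my illustrative sketch, not part of curses:

```python
def ctrl(ch):
    """Byte value curses' getch() reports for CTRL+<ch> (illustrative helper)."""
    return ord(ch.upper()) & 0x1F   # the terminal keeps only the low five bits

print(ctrl('a'))   # 1
print(ctrl('z'))   # 26
print(0x1B)        # 27 -- the ESC byte many terminals prefix for ALT-<key>
```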
Python: Set Bits Count (popcount) | 407,587 | 13 | 2009-01-02T18:09:07Z | 407,674 | 8 | 2009-01-02T18:42:26Z | [
"python",
"bit-manipulation"
] | A few blobs have been duplicated in my database (Oracle 11g); I performed XOR operations on the blobs using UTL\_RAW.BIT\_XOR. After that I wanted to count the number of set bits in the resulting binary string, so I wrote the code below.
During a small experiment, I wanted to see what hex and integer values were produced, and wrote this procedure:
```
SQL> declare
2
3 vblob1 blob;
4
5 BEGIN
6
7 select leftiriscode INTO vblob1 FROM irisdata WHERE irisid=1;
8
9 dbms_output.put_line(rawtohex(vblob1));
10
11
12 dbms_output.put_line(UTL_RAW.CAST_TO_binary_integer(vblob1));
13
14
15 END;
16 /
```
OUTPUT: HEXVALUE:
```
0F0008020003030D030C1D1C3C383C330A3311373724764C54496C0A6B029B84840547A341BBA83D
BB5FB9DE4CDE5EFE96E1FC6169438344D604681D409F9F9F3BC07EE0C4E0C033A23B37791F59F84F
F94E4F664E3072B0229DA09D9F0F1FC600C2E380D6988C198B39517D157E7D66FE675237673D3D28
3A016C01411003343C76740F710F0F4F8FE976E1E882C186D316A63C0C7D7D7D7D397F016101B043
0176C37E767C7E0C7D010C8302C2D3E4F2ACE42F8D3F3F367A46F54285434ABB61BDB53CBF6C7CC0
F4C1C3F349B3F7BEB30E4A0CFE1C85180DC338C2C1C6E7A5CE3104303178724CCC5F451F573F3B24
7F24052000202003291F130F1B0E070C0E0D0F0E0F0B0B07070F1E1B330F27073F3F272E2F2F6F7B
2F2E1F2E4F7EFF7EDF3EBF253F3D2F39BF3D7F7FFED72FF39FE7773DBE9DBFBB3FE7A76E777DF55C
5F5F7ADF7FBD7F6AFE7B7D1FBE7F7F7DD7F63FBFBF2D3B7F7F5F2F7F3D7F7D3B3F3B7FFF4D676F7F
5D9FAD7DD17F7F6F6F0B6F7F3F767F1779364737370F7D3F5F377F2F3D3F7F1F2FE7709FB7BCB77B
0B77CF1DF5BF1F7F3D3E4E7F197F571F7D7E3F7F7F7D7F6F4F75FF6F7ECE2FFF793EFFEDB7BDDD1F
FF3BCE3F7F3FBF3D6C7FFF7F7F4FAF7F6FFFFF8D7777BF3AE30FAEEEEBCF5FEEFEE75FFEACFFDF0F
DFFFF77FFF677F4FFF7F7F1B5F1F5F146F1F1E1B3B1F3F273303170F370E250B
INTEGER VALUE: 15
```
There was a variance between the hex code and the integer value produced, so used the following python code to check the actual integer value.
```
print int("0F0008020003030D030C1D1C3C383C330A3311373724764C54496C0A6B029B84840547A341BBA83D
BB5FB9DE4CDE5EFE96E1FC6169438344D604681D409F9F9F3BC07EE0C4E0C033A23B37791F59F84F
F94E4F664E3072B0229DA09D9F0F1FC600C2E380D6988C198B39517D157E7D66FE675237673D3D28
3A016C01411003343C76740F710F0F4F8FE976E1E882C186D316A63C0C7D7D7D7D397F016101B043
0176C37E767C7E0C7D010C8302C2D3E4F2ACE42F8D3F3F367A46F54285434ABB61BDB53CBF6C7CC0
F4C1C3F349B3F7BEB30E4A0CFE1C85180DC338C2C1C6E7A5CE3104303178724CCC5F451F573F3B24
7F24052000202003291F130F1B0E070C0E0D0F0E0F0B0B07070F1E1B330F27073F3F272E2F2F6F7B
2F2E1F2E4F7EFF7EDF3EBF253F3D2F39BF3D7F7FFED72FF39FE7773DBE9DBFBB3FE7A76E777DF55C
5F5F7ADF7FBD7F6AFE7B7D1FBE7F7F7DD7F63FBFBF2D3B7F7F5F2F7F3D7F7D3B3F3B7FFF4D676F7F
5D9FAD7DD17F7F6F6F0B6F7F3F767F1779364737370F7D3F5F377F2F3D3F7F1F2FE7709FB7BCB77B
0B77CF1DF5BF1F7F3D3E4E7F197F571F7D7E3F7F7F7D7F6F4F75FF6F7ECE2FFF793EFFEDB7BDDD1F
FF3BCE3F7F3FBF3D6C7FFF7F7F4FAF7F6FFFFF8D7777BF3AE30FAEEEEBCF5FEEFEE75FFEACFFDF0F
DFFFF77FFF677F4FFF7F7F1B5F1F5F146F1F1E1B3B1F3F273303170F370E250B",16)
```
Answer:
```
611951595100708231079693644541095422704525056339295086455197024065285448917042457
942011979060274412229909425184116963447100932992139876977824261789243946528467423
887840013630358158845039770703659333212332565531927875442166643379024991542726916
563271158141698128396823655639931773363878078933197184072343959630467756337300811
165816534945075483141582643531294791665590339000206551162697220540050652439977992
246472159627917169957822698172925680112854091876671868161705785698942483896808137
210721991100755736178634253569843464062494863175653771387230991126430841565373390
924951878267929443498220727531299945275045612499928105876210478958806304156695438
684335624641395635997624911334453040399012259638042898470872203581555352191122920
004010193837249388365999010692555403377045768493630826307316376698443166439386014
145858084176544890282148970436631175577000673079418699845203671050174181808397880
048734270748095682582556024378558289251964544327507321930196203199459115159756564
507340111030285226951393012863778670390172056906403480159339130447254293412506482
027099835944315172972281427649277354815211185293109925602315480350955479477144523
387689192243720928249121486221114300503766209279369960344185651810101969585926336
07333771272398091
```
To get the set-bit count I have written the following code in C:
```
int bitsoncount(unsigned x)
{
unsigned int b=0;
if(x > 1)
b=1;
while(x &= (x - 1))
b++;
return b;
}
```
When I tried the same code in Python it did not work. I am new to Python and experimenting out of curiosity, so excuse me if I am wrong.
```
def bitsoncount(x):
b=0;
if(x>1):
b=1;
while(x &= (x-1)):
```
I get an error at the last line, and need some help resolving it and implementing the logic in Python :-)
I was interested in checking out the set-bits version in Python after what I have seen!
Thank You,
Chaitanya
Related question: [Best algorithm to count the number of set bits in a 32-bit integer?](http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer) | what you're looking for is called the [Hamming Weight](http://en.wikipedia.org/wiki/Hamming_weight).
in python 2.6/3.0 it can be found rather easily with:
```
bits = sum( b == '1' for b in bin(x)[2:] )
``` |
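Wrapping the one-liner above in a function and checking it against a few known values (the wrapper and the test values are my addition):

```python
def bitsoncount(x):
    # bin(x) looks like '0b1111011'; skip the '0b' prefix and count '1' digits
    return sum(b == '1' for b in bin(x)[2:])

print(bitsoncount(0))          # 0
print(bitsoncount(0b1111011))  # 6
print(bitsoncount(2**64 - 1))  # 64 -- works for arbitrarily large ints
```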
Python: Set Bits Count (popcount) | 407,587 | 13 | 2009-01-02T18:09:07Z | 407,758 | 29 | 2009-01-02T19:12:55Z | [
"python",
"bit-manipulation"
] | A few blobs have been duplicated in my database (Oracle 11g); I performed XOR operations on the blobs using UTL\_RAW.BIT\_XOR. After that I wanted to count the number of set bits in the resulting binary string, so I wrote the code below.
During a small experiment, I wanted to see what hex and integer values were produced, and wrote this procedure:
```
SQL> declare
2
3 vblob1 blob;
4
5 BEGIN
6
7 select leftiriscode INTO vblob1 FROM irisdata WHERE irisid=1;
8
9 dbms_output.put_line(rawtohex(vblob1));
10
11
12 dbms_output.put_line(UTL_RAW.CAST_TO_binary_integer(vblob1));
13
14
15 END;
16 /
```
OUTPUT: HEXVALUE:
```
0F0008020003030D030C1D1C3C383C330A3311373724764C54496C0A6B029B84840547A341BBA83D
BB5FB9DE4CDE5EFE96E1FC6169438344D604681D409F9F9F3BC07EE0C4E0C033A23B37791F59F84F
F94E4F664E3072B0229DA09D9F0F1FC600C2E380D6988C198B39517D157E7D66FE675237673D3D28
3A016C01411003343C76740F710F0F4F8FE976E1E882C186D316A63C0C7D7D7D7D397F016101B043
0176C37E767C7E0C7D010C8302C2D3E4F2ACE42F8D3F3F367A46F54285434ABB61BDB53CBF6C7CC0
F4C1C3F349B3F7BEB30E4A0CFE1C85180DC338C2C1C6E7A5CE3104303178724CCC5F451F573F3B24
7F24052000202003291F130F1B0E070C0E0D0F0E0F0B0B07070F1E1B330F27073F3F272E2F2F6F7B
2F2E1F2E4F7EFF7EDF3EBF253F3D2F39BF3D7F7FFED72FF39FE7773DBE9DBFBB3FE7A76E777DF55C
5F5F7ADF7FBD7F6AFE7B7D1FBE7F7F7DD7F63FBFBF2D3B7F7F5F2F7F3D7F7D3B3F3B7FFF4D676F7F
5D9FAD7DD17F7F6F6F0B6F7F3F767F1779364737370F7D3F5F377F2F3D3F7F1F2FE7709FB7BCB77B
0B77CF1DF5BF1F7F3D3E4E7F197F571F7D7E3F7F7F7D7F6F4F75FF6F7ECE2FFF793EFFEDB7BDDD1F
FF3BCE3F7F3FBF3D6C7FFF7F7F4FAF7F6FFFFF8D7777BF3AE30FAEEEEBCF5FEEFEE75FFEACFFDF0F
DFFFF77FFF677F4FFF7F7F1B5F1F5F146F1F1E1B3B1F3F273303170F370E250B
INTEGER VALUE: 15
```
There was a variance between the hex code and the integer value produced, so used the following python code to check the actual integer value.
```
print int("0F0008020003030D030C1D1C3C383C330A3311373724764C54496C0A6B029B84840547A341BBA83D
BB5FB9DE4CDE5EFE96E1FC6169438344D604681D409F9F9F3BC07EE0C4E0C033A23B37791F59F84F
F94E4F664E3072B0229DA09D9F0F1FC600C2E380D6988C198B39517D157E7D66FE675237673D3D28
3A016C01411003343C76740F710F0F4F8FE976E1E882C186D316A63C0C7D7D7D7D397F016101B043
0176C37E767C7E0C7D010C8302C2D3E4F2ACE42F8D3F3F367A46F54285434ABB61BDB53CBF6C7CC0
F4C1C3F349B3F7BEB30E4A0CFE1C85180DC338C2C1C6E7A5CE3104303178724CCC5F451F573F3B24
7F24052000202003291F130F1B0E070C0E0D0F0E0F0B0B07070F1E1B330F27073F3F272E2F2F6F7B
2F2E1F2E4F7EFF7EDF3EBF253F3D2F39BF3D7F7FFED72FF39FE7773DBE9DBFBB3FE7A76E777DF55C
5F5F7ADF7FBD7F6AFE7B7D1FBE7F7F7DD7F63FBFBF2D3B7F7F5F2F7F3D7F7D3B3F3B7FFF4D676F7F
5D9FAD7DD17F7F6F6F0B6F7F3F767F1779364737370F7D3F5F377F2F3D3F7F1F2FE7709FB7BCB77B
0B77CF1DF5BF1F7F3D3E4E7F197F571F7D7E3F7F7F7D7F6F4F75FF6F7ECE2FFF793EFFEDB7BDDD1F
FF3BCE3F7F3FBF3D6C7FFF7F7F4FAF7F6FFFFF8D7777BF3AE30FAEEEEBCF5FEEFEE75FFEACFFDF0F
DFFFF77FFF677F4FFF7F7F1B5F1F5F146F1F1E1B3B1F3F273303170F370E250B",16)
```
Answer:
```
611951595100708231079693644541095422704525056339295086455197024065285448917042457
942011979060274412229909425184116963447100932992139876977824261789243946528467423
887840013630358158845039770703659333212332565531927875442166643379024991542726916
563271158141698128396823655639931773363878078933197184072343959630467756337300811
165816534945075483141582643531294791665590339000206551162697220540050652439977992
246472159627917169957822698172925680112854091876671868161705785698942483896808137
210721991100755736178634253569843464062494863175653771387230991126430841565373390
924951878267929443498220727531299945275045612499928105876210478958806304156695438
684335624641395635997624911334453040399012259638042898470872203581555352191122920
004010193837249388365999010692555403377045768493630826307316376698443166439386014
145858084176544890282148970436631175577000673079418699845203671050174181808397880
048734270748095682582556024378558289251964544327507321930196203199459115159756564
507340111030285226951393012863778670390172056906403480159339130447254293412506482
027099835944315172972281427649277354815211185293109925602315480350955479477144523
387689192243720928249121486221114300503766209279369960344185651810101969585926336
07333771272398091
```
To get the set-bit count I have written the following code in C:
```
int bitsoncount(unsigned x)
{
unsigned int b=0;
if(x > 1)
b=1;
while(x &= (x - 1))
b++;
return b;
}
```
When I tried the same code in Python it did not work. I am new to Python and experimenting out of curiosity, so excuse me if I am wrong.
```
def bitsoncount(x):
b=0;
if(x>1):
b=1;
while(x &= (x-1)):
```
I get an error at the last line, and need some help resolving it and implementing the logic in Python :-)
I was interested in checking out the set-bits version in Python after what I have seen!
Thank You,
Chaitanya
Related question: [Best algorithm to count the number of set bits in a 32-bit integer?](http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer) | Python 2.6 or 3.0:
```
def bitsoncount(x):
return bin(x).count('1')
```
Example:
```
>>> x = 123
>>> bin(x)
'0b1111011'
>>> bitsoncount(x)
6
```
Or
[Matt Howells's answer](http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer#109025) in Python:
```
def bitsoncount(i):
assert 0 <= i < 0x100000000
i = i - ((i >> 1) & 0x55555555)
i = (i & 0x33333333) + ((i >> 2) & 0x33333333)
return (((i + (i >> 4) & 0xF0F0F0F) * 0x1010101) & 0xffffffff) >> 24
``` |
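A quick way to convince yourself the two versions above agree is to cross-check them over a range of inputs; this harness is my addition, not part of the answer:

```python
def popcount_str(i):
    """Readable version: count '1' digits in the binary representation."""
    return bin(i).count('1')

def popcount_parallel(i):
    """Bit-twiddling version from the answer, limited to 32-bit inputs."""
    assert 0 <= i < 0x100000000
    i = i - ((i >> 1) & 0x55555555)
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)
    return (((i + (i >> 4) & 0xF0F0F0F) * 0x1010101) & 0xffffffff) >> 24

# Exhaustive over a small range plus a few edge cases
for n in list(range(1024)) + [0x80000001, 0xFFFFFFFF]:
    assert popcount_str(n) == popcount_parallel(n)
print('all cross-checks passed')
```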
A generic priority queue for Python | 407,734 | 33 | 2009-01-02T19:05:54Z | 407,755 | 31 | 2009-01-02T19:11:31Z | [
"python",
"queue"
] | I need to use a priority queue in my Python code. Looking around for something efficient, I came upon [heapq](http://docs.python.org/library/heapq.html). It looks good, but seems to be specified only for integers. I suppose it works with any objects that have comparison operators, but it doesn't specify what comparison operators it needs.
Besides, `heapq` seems to be implemented in Python, so it's not fast.
Are you aware of any fast implementations for priority queues in Python ? Optimally, I'd like the queue to be generic (i.e. work well for any object with a specified comparison operator).
Thanks in advance
**Update:**
Re comparison in `heapq`, I can either use a `(priority, object)` as Charlie Martin suggests, or just implement `__cmp__` for my object.
I'm still looking for something faster than `heapq`. | Um, [Queue.PriorityQueue](http://docs.python.org/library/queue.html#Queue.PriorityQueue)? Recall that Python isn't statically typed, so you can store anything you like: just make a tuple of `(priority, thing)` and you're set.
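The `(priority, thing)` trick looks like this on a modern Python, where the module is spelled `queue` (the task names are mine):

```python
import queue

pq = queue.PriorityQueue()
pq.put((2, 'write report'))
pq.put((1, 'fix bug'))
pq.put((3, 'lunch'))

# Items come back smallest priority first, regardless of insertion order
while not pq.empty():
    priority, thing = pq.get()
    print(priority, thing)
```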
A generic priority queue for Python | 407,734 | 33 | 2009-01-02T19:05:54Z | 407,759 | 7 | 2009-01-02T19:13:33Z | [
"python",
"queue"
] | I need to use a priority queue in my Python code. Looking around for something efficient, I came upon [heapq](http://docs.python.org/library/heapq.html). It looks good, but seems to be specified only for integers. I suppose it works with any objects that have comparison operators, but it doesn't specify what comparison operators it needs.
Besides, `heapq` seems to be implemented in Python, so it's not fast.
Are you aware of any fast implementations for priority queues in Python ? Optimally, I'd like the queue to be generic (i.e. work well for any object with a specified comparison operator).
Thanks in advance
**Update:**
Re comparison in `heapq`, I can either use a `(priority, object)` as Charlie Martin suggests, or just implement `__cmp__` for my object.
I'm still looking for something faster than `heapq`. | Did you look at the ["Show Source" link](http://docs.python.org/_sources/library/heapq.txt) on the heapq page? There's an example a little less than halfway down of using a heap with a list of (int, char) tuples as a priority queue. |
A generic priority queue for Python | 407,734 | 33 | 2009-01-02T19:05:54Z | 407,922 | 16 | 2009-01-02T20:19:36Z | [
"python",
"queue"
] | I need to use a priority queue in my Python code. Looking around for something efficient, I came upon [heapq](http://docs.python.org/library/heapq.html). It looks good, but seems to be specified only for integers. I suppose it works with any objects that have comparison operators, but it doesn't specify what comparison operators it needs.
Besides, `heapq` seems to be implemented in Python, so it's not fast.
Are you aware of any fast implementations for priority queues in Python ? Optimally, I'd like the queue to be generic (i.e. work well for any object with a specified comparison operator).
Thanks in advance
**Update:**
Re comparison in `heapq`, I can either use a `(priority, object)` as Charlie Martin suggests, or just implement `__cmp__` for my object.
I'm still looking for something faster than `heapq`. | I ended up implementing a wrapper for `heapq`, adding a dict to keep the queue's elements unique. The result should be quite efficient for all operators:
```
import heapq

class PriorityQueueSet(object):
"""
Combined priority queue and set data structure.
Acts like a priority queue, except that its items are guaranteed to be
unique. Provides O(1) membership test, O(log N) insertion and O(log N)
removal of the smallest item.
Important: the items of this data structure must be both comparable and
hashable (i.e. must implement __cmp__ and __hash__). This is true of
Python's built-in objects, but you should implement those methods if you
want to use the data structure for custom objects.
"""
def __init__(self, items=[]):
"""
Create a new PriorityQueueSet.
Arguments:
items (list): An initial item list - it can be unsorted and
non-unique. The data structure will be created in O(N).
"""
self.set = dict((item, True) for item in items)
self.heap = self.set.keys()
heapq.heapify(self.heap)
def has_item(self, item):
"""Check if ``item`` exists in the queue."""
return item in self.set
def pop_smallest(self):
"""Remove and return the smallest item from the queue."""
smallest = heapq.heappop(self.heap)
del self.set[smallest]
return smallest
def add(self, item):
"""Add ``item`` to the queue if doesn't already exist."""
if item not in self.set:
self.set[item] = True
heapq.heappush(self.heap, item)
``` |
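On current Python the same idea reads a little more naturally with a real `set` instead of a dict of `True` values. This is my modernized sketch of the answer's class, preserving its API:

```python
import heapq

class PriorityQueueSet:
    """A heap for ordering plus a set for O(1) membership (modernized sketch)."""

    def __init__(self, items=()):
        self.set = set(items)            # de-duplicates the initial items
        self.heap = list(self.set)
        heapq.heapify(self.heap)         # O(N)

    def has_item(self, item):
        return item in self.set

    def pop_smallest(self):
        smallest = heapq.heappop(self.heap)  # O(log N)
        self.set.remove(smallest)
        return smallest

    def add(self, item):
        if item not in self.set:             # duplicates are silently ignored
            self.set.add(item)
            heapq.heappush(self.heap, item)  # O(log N)

q = PriorityQueueSet([5, 1, 3, 1])
q.add(1)                  # duplicate: ignored
print(q.pop_smallest())   # 1
```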
How to produce a 303 Http Response in Django? | 408,541 | 17 | 2009-01-03T03:20:07Z | 408,549 | 23 | 2009-01-03T03:25:12Z | [
"python",
"django",
"http",
"rest"
] | Last couple of days we were discussing at [another question](http://stackoverflow.com/questions/401191/how-to-return-random-items-restfully) the best to manage randomness in a RESTful way; today I went to play a little bit with some ideas in Django only to find that there is no easy standard way of returning a 303 response (nor a 300 one, btw), that is, there doesn't seem to exist an HttpResponseSeeOther inside django.HTTP or in another place.
Do you know any means for achieving this? | You could just override HttpResponse, like the other Responses do:
```
from django.http import HttpResponseRedirect

class HttpResponseSeeOther(HttpResponseRedirect):
    status_code = 303

# then, inside a view:
# return HttpResponseSeeOther('/other-url/')
``` |
How to produce a 303 Http Response in Django? | 408,541 | 17 | 2009-01-03T03:20:07Z | 408,572 | 19 | 2009-01-03T03:54:35Z | [
"python",
"django",
"http",
"rest"
] | Last couple of days we were discussing at [another question](http://stackoverflow.com/questions/401191/how-to-return-random-items-restfully) the best to manage randomness in a RESTful way; today I went to play a little bit with some ideas in Django only to find that there is no easy standard way of returning a 303 response (nor a 300 one, btw), that is, there doesn't seem to exist an HttpResponseSeeOther inside django.HTTP or in another place.
Do you know any means for achieving this? | The generic HttpResponse object lets you specify any status code you want:
```
response = HttpResponse(content="", status=303)
response["Location"] = "http://example.com/redirect/here/"
```
If you need something re-usable then Gerald's answer is definitely valid; simply create your own HttpResponseSeeOther class. Django only provides these specific classes for a few of the most common status codes. |
Python human readable object serialization | 408,866 | 15 | 2009-01-03T10:30:40Z | 408,889 | 11 | 2009-01-03T10:52:05Z | [
"python",
"json",
"yaml",
"serialization",
"pickle"
] | I need to store Python structures made of lists / dictionaries / tuples in a human-readable format. The idea is to use something similar to [pickle](http://docs.python.org/library/pickle.html), but pickle is not human-friendly. Other options that come to my mind are [YAML](http://en.wikipedia.org/wiki/Yaml) (through [PyYAML](http://pyyaml.org/)) and [JSON](http://www.json.org/) (through [simplejson](http://pypi.python.org/pypi/simplejson/)) serializers.
Any other option that comes to your mind?
Thanks in advance. | For simple cases pprint() and eval() come to mind.
Using your example:
```
>>> d = {'age': 27,
... 'name': 'Joe',
... 'numbers': [1,
... 2,
... 3,
... 4,
... 5],
... 'subdict': {
... 'first': 1,
... 'second': 2,
... 'third': 3
... }
... }
>>>
>>> from pprint import pprint
>>> pprint(d)
{'age': 27,
'name': 'Joe',
'numbers': [1, 2, 3, 4, 5],
'subdict': {'first': 1, 'second': 2, 'third': 3}}
>>>
```
I would think twice about fixing two requirements with the same tool. Have you considered using pickle for the serializing and then pprint() (or a more fancy object viewer) for humans looking at the objects? |
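The round trip the answer hints at — write with `pprint`, read back by evaluating — can be sketched like this. `ast.literal_eval` is a safer stand-in for bare `eval` when the data might not be trusted (the variable names are mine):

```python
import ast
from pprint import pformat

d = {'age': 27, 'name': 'Joe', 'numbers': [1, 2, 3, 4, 5]}

text = pformat(d)                   # human-readable serialization
restored = ast.literal_eval(text)   # parse the literal back into objects

print(restored == d)                # True: the round trip is lossless
```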
Python human readable object serialization | 408,866 | 15 | 2009-01-03T10:30:40Z | 408,912 | 12 | 2009-01-03T11:04:55Z | [
"python",
"json",
"yaml",
"serialization",
"pickle"
] | I need to store Python structures made of lists / dictionaries / tuples in a human-readable format. The idea is to use something similar to [pickle](http://docs.python.org/library/pickle.html), but pickle is not human-friendly. Other options that come to my mind are [YAML](http://en.wikipedia.org/wiki/Yaml) (through [PyYAML](http://pyyaml.org/)) and [JSON](http://www.json.org/) (through [simplejson](http://pypi.python.org/pypi/simplejson/)) serializers.
Any other option that comes to your mind?
Thanks in advance. | If its **just** Python list, dictionary and tuple object. - **JSON** is the way to go. Its human readable, very easy to handle and language independent too.
Caution: Tuples will be converted to lists in simplejson.
```
In [109]: simplejson.loads(simplejson.dumps({'d':(12,3,4,4,5)}))
Out[109]: {u'd': [12, 3, 4, 4, 5]}
``` |
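Since Python 2.6 the same thing works with the stdlib `json` module, and the tuple caveat applies there too; a small round-trip sketch (data values are mine):

```python
import json

data = {'name': 'Joe', 'numbers': [1, 2, 3], 'point': (4, 5)}
text = json.dumps(data, indent=2, sort_keys=True)  # human-readable output
restored = json.loads(text)

print(restored['numbers'])  # [1, 2, 3]
print(restored['point'])    # [4, 5] -- the tuple came back as a list
```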
Sorting and Grouping Nested Lists in Python | 409,370 | 23 | 2009-01-03T17:05:30Z | 409,423 | 37 | 2009-01-03T17:29:07Z | [
"python"
] | I have the following data structure (a list of lists)
```
[
['4', '21', '1', '14', '2008-10-24 15:42:58'],
['3', '22', '4', '2somename', '2008-10-24 15:22:03'],
['5', '21', '3', '19', '2008-10-24 15:45:45'],
['6', '21', '1', '1somename', '2008-10-24 15:45:49'],
['7', '22', '3', '2somename', '2008-10-24 15:45:51']
]
```
I would like to be able to
1. Use a function to reorder the list so that I can group by each item in the list. For example I'd like to be able to group by the second column (so that all the 21's are together)
2. Use a function to only display certain values from each inner list. For example i'd like to reduce this list to only contain the 4th field value of '2somename'
so the list would look like this
```
[
['3', '22', '4', '2somename', '2008-10-24 15:22:03'],
['7', '22', '3', '2somename', '2008-10-24 15:45:51']
]
``` | For the first question, the first thing you should do is sort the list by the second field:
```
x = [
['4', '21', '1', '14', '2008-10-24 15:42:58'],
['3', '22', '4', '2somename', '2008-10-24 15:22:03'],
['5', '21', '3', '19', '2008-10-24 15:45:45'],
['6', '21', '1', '1somename', '2008-10-24 15:45:49'],
['7', '22', '3', '2somename', '2008-10-24 15:45:51']
]
from operator import itemgetter
x.sort(key=itemgetter(1))
```
Then you can use itertools' groupby function:
```
from itertools import groupby
y = groupby(x, itemgetter(1))
```
Now y is an iterator containing tuples of (element, item iterator). It's more confusing to explain these tuples than it is to show code:
```
for elt, items in groupby(x, itemgetter(1)):
print(elt, items)
for i in items:
print(i)
```
Which prints:
```
21 <itertools._grouper object at 0x511a0>
['4', '21', '1', '14', '2008-10-24 15:42:58']
['5', '21', '3', '19', '2008-10-24 15:45:45']
['6', '21', '1', '1somename', '2008-10-24 15:45:49']
22 <itertools._grouper object at 0x51170>
['3', '22', '4', '2somename', '2008-10-24 15:22:03']
['7', '22', '3', '2somename', '2008-10-24 15:45:51']
```
For the second part, you should use list comprehensions as mentioned already here:
```
from pprint import pprint as pp
pp([y for y in x if y[3] == '2somename'])
```
Which prints:
```
[['3', '22', '4', '2somename', '2008-10-24 15:22:03'],
['7', '22', '3', '2somename', '2008-10-24 15:45:51']]
``` |
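If you want the groups materialized rather than consumed lazily, one common pattern (my addition, not part of the answer) collects them into a dict keyed by the second column:

```python
from itertools import groupby
from operator import itemgetter

x = [
    ['4', '21', '1', '14', '2008-10-24 15:42:58'],
    ['3', '22', '4', '2somename', '2008-10-24 15:22:03'],
    ['5', '21', '3', '19', '2008-10-24 15:45:45'],
]

x.sort(key=itemgetter(1))  # groupby only merges *adjacent* equal keys
groups = {key: list(rows) for key, rows in groupby(x, itemgetter(1))}

print(sorted(groups))      # ['21', '22']
print(len(groups['21']))   # 2
```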
Loop function parameters for sanity check | 409,449 | 3 | 2009-01-03T17:41:09Z | 409,564 | 7 | 2009-01-03T18:43:01Z | [
"python",
"function",
"parameters",
"arguments",
"sanitization"
] | I have a Python function in which I am doing some sanitisation of the input parameters:
```
def func(param1, param2, param3):
param1 = param1 or ''
param2 = param2 or ''
param3 = param3 or ''
```
This caters for the arguments being passed as *None* rather than empty strings. Is there an easier/more concise way to loop round the function parameters to apply such an expression to all of them. My actual function has nine parameters. | This looks like a good job for a decorator. How about this:
```
def sanitized(func):
def sfunc(*args, **kwds):
return func(*[arg or '' for arg in args],
**dict((k, v or '') for k,v in kwds.iteritems()))
sfunc.func_name = func.func_name
sfunc.func_doc = func.func_doc
return sfunc
```
You would use this on your function like so:
```
@sanitized
def func(param1, param2, param3):
print param1, param2, param3
```
Then the parameters will be replaced by the empty string if they are false:
```
>>> func('foo', None, 'spam')
foo spam
```
(Note that this will still mess up the function signature as Ned Batchelder points out in his answer. To fix that you could use [Michele Simionato's decorator module](http://pypi.python.org/pypi/decorator)-- I think you'd just need to add a `@decorator` before the definition of `sanitized`) |
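On Python 3 the same decorator can be written with `functools.wraps`, which replaces the by-hand copying of `func_name`/`func_doc` (this port is mine; `join3` is a made-up example function):

```python
import functools

def sanitized(func):
    @functools.wraps(func)  # copies __name__, __doc__, etc. automatically
    def sfunc(*args, **kwds):
        return func(*[arg or '' for arg in args],
                    **{k: v or '' for k, v in kwds.items()})
    return sfunc

@sanitized
def join3(a, b, c):
    return '|'.join([a, b, c])

print(join3('foo', None, 'spam'))  # foo||spam
print(join3.__name__)              # join3
```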
Avoid exceptions? | 409,529 | 8 | 2009-01-03T18:19:50Z | 409,704 | 7 | 2009-01-03T19:52:37Z | [
"python",
"django",
"exception"
] | This particular example relates to Django in Python, but should apply to any language supporting exceptions:
```
try:
object = ModelClass.objects.get(search=value)
except DoesNotExist:
pass
if object:
# do stuff
```
The Django model class provides a simple method *get* which allows me to search for *one and only one* object from the database; if it finds more or less it raises an exception. I can find zero or more with the alternative *filter* method, which returns a list:
```
objects = ModelClass.objects.filter(search=value)
if len(objects) == 1:
object = objects[0]
# do stuff
```
**Am I overly averse to exceptions?** To me the exception seems a little wasteful, at a guess, a quarter-to-a-half of the time will be 'exceptional'. I'd much prefer a function that returns *None* on failure. Would I be better to use Django's *filter* method and process the list myself? | There's a big schism in programming languages around the use of exceptions.
* The majority view is that **exceptions should be exceptional**. In most languages with exceptions, transfer of control by exception is considerably more expensive than by procedure return, for example.
* There is a strong minority view that exceptions are just another control-flow construct, and they should be cheap. The [Standard ML of New Jersey](http://www.smlnj.org/) and [Objective Caml](http://caml.inria.fr/) compilers subscribe to that view. If you have cheap exceptions you can code some fancy backtracking algorithms in ways that are more difficult to code cleanly using other mechanisms.
I've seen this debate repeated many times for new language designs, and almost always, the winner decides that exceptions should be expensive and rare. When you care about performance, you'd be wise to program with this in mind.
Avoid exceptions? | 409,529 | 8 | 2009-01-03T18:19:50Z | 410,115 | 8 | 2009-01-03T23:57:18Z | [
"python",
"django",
"exception"
] | This particular example relates to Django in Python, but should apply to any language supporting exceptions:
```
try:
object = ModelClass.objects.get(search=value)
except DoesNotExist:
pass
if object:
# do stuff
```
The Django model class provides a simple method *get* which allows me to search for *one and only one* object from the database; if it finds more or less it raises an exception. I can find zero or more with the alternative *filter* method, which returns a list:
```
objects = ModelClass.objects.filter(search=value)
if len(objects) == 1:
object = objects[0]
# do stuff
```
**Am I overly averse to exceptions?** To me the exception seems a little wasteful, at a guess, a quarter-to-a-half of the time will be 'exceptional'. I'd much prefer a function that returns *None* on failure. Would I be better to use Django's *filter* method and process the list myself? | Believe it or not, this actually is an issue that is a bit different in each language. In Python, exceptions are regularly thrown for events that aren't exceptional by the language itself. Thus I think that the "you should only throw exceptions under exceptional circumstances" rule doesn't quite apply. I think the results you'll get on this forum will be slanted towards that point of view though, considering the high number of .Net programmers (see [this question](http://stackoverflow.com/questions/351400/why-are-net-programmers-so-afraid-of-exceptions)) for more info on that).
At a very minimum, I'd better not catch anyone who sticks to that rule ever using a generator or a for loop in Python (both of which involve throwing exceptions for non-exceptional circumstances). |
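The claim about for loops is easy to verify: iteration in Python terminates via a `StopIteration` exception under the hood, which the loop machinery catches for you:

```python
it = iter([1, 2])
print(next(it))  # 1
print(next(it))  # 2

ended = False
try:
    next(it)               # a for loop would catch this silently
except StopIteration:
    ended = True
print(ended)               # True: normal loop exit is exception-driven
```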
Python: Alter elements of a list | 409,732 | 15 | 2009-01-03T20:07:28Z | 409,744 | 13 | 2009-01-03T20:13:14Z | [
"python",
"coding-style"
] | I have a list of booleans where occasionally I reset them all to false. After first writing the reset as:
```
for b in bool_list:
b = False
```
I found it doesn't work. I spent a moment scratching my head, then remembered that of course it won't work since I'm only changing a reference to the bool, not its value. So I rewrote as:
```
for i in xrange(len(bool_list)):
bool_list[i] = False
```
and everything works fine. But I found myself asking, "Is that really the most pythonic way to alter all elements of a list?" Are there other ways that manage to be either more efficient or clearer? | If you only have one reference to the list, the following may be easier:
```
bool_list = [False] * len(bool_list)
```
This creates a new list populated with `False` elements.
See my answer to [Python dictionary clear](http://stackoverflow.com/questions/369898/python-dictionary-clear#369925) for a similar example. |
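One caveat on list multiplication (my note, not part of the answer): it repeats *references*, which is harmless for immutable values like `False` but bites with mutable elements:

```python
flags = [False] * 3          # fine: False is immutable
flags[0] = True
print(flags)                 # [True, False, False]

rows = [[]] * 3              # trap: three references to ONE list
rows[0].append(1)
print(rows)                  # [[1], [1], [1]]
```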
Python: Alter elements of a list | 409,732 | 15 | 2009-01-03T20:07:28Z | 409,764 | 13 | 2009-01-03T20:30:42Z | [
"python",
"coding-style"
] | I have a list of booleans where occasionally I reset them all to false. After first writing the reset as:
```
for b in bool_list:
b = False
```
I found it doesn't work. I spent a moment scratching my head, then remembered that of course it won't work since I'm only changing a reference to the bool, not its value. So I rewrote as:
```
for i in xrange(len(bool_list)):
bool_list[i] = False
```
and everything works fine. But I found myself asking, "Is that really the most pythonic way to alter all elements of a list?" Are there other ways that manage to be either more efficient or clearer? | Here's another version:
```
bool_list = [False for item in bool_list]
``` |
Python: Alter elements of a list | 409,732 | 15 | 2009-01-03T20:07:28Z | 410,067 | 11 | 2009-01-03T23:11:59Z | [
"python",
"coding-style"
] | I have a list of booleans where occasionally I reset them all to false. After first writing the reset as:
```
for b in bool_list:
b = False
```
I found it doesn't work. I spent a moment scratching my head, then remembered that of course it won't work since I'm only changing a reference to the bool, not its value. So I rewrote as:
```
for i in xrange(len(bool_list)):
bool_list[i] = False
```
and everything works fine. But I found myself asking, "Is that really the most pythonic way to alter all elements of a list?" Are there other ways that manage to be either more efficient or clearer? | ```
bool_list[:] = [False] * len(bool_list)
```
or
```
bool_list[:] = [False for item in bool_list]
``` |
Python: Alter elements of a list | 409,732 | 15 | 2009-01-03T20:07:28Z | 410,213 | 12 | 2009-01-04T01:09:25Z | [
"python",
"coding-style"
] | I have a list of booleans where occasionally I reset them all to false. After first writing the reset as:
```
for b in bool_list:
b = False
```
I found it doesn't work. I spent a moment scratching my head, then remembered that of course it won't work since I'm only changing a reference to the bool, not its value. So I rewrote as:
```
for i in xrange(len(bool_list)):
bool_list[i] = False
```
and everything works fine. But I found myself asking, "Is that really the most pythonic way to alter all elements of a list?" Are there other ways that manage to be either more efficient or clearer? | **Summary**
Performance-wise, numpy or a list multiplication are clear winners, as they are 10-20x faster than other approaches.
I did some performance testing on the various options proposed. I used Python 2.5.2, on Linux (Ubuntu 8.10), with a 1.5 Ghz Pentium M.
**Original:**
```
python timeit.py -s 'bool_list = [True] * 1000' 'for x in xrange(len(bool_list)): bool_list[x] = False'
```
1000 loops, best of 3: 280 usec per loop
**Slice-based replacement with a list comprehension:**
```
python timeit.py -s 'bool_list = [True] * 1000' 'bool_list[:] = [False for element in bool_list]'
```
1000 loops, best of 3: 215 usec per loop
**Slice-based replacement with a generator comprehension:**
```
python timeit.py -s 'bool_list = [True] * 1000' 'bool_list[:] = (False for element in bool_list)'
```
1000 loops, best of 3: 265 usec per loop
**Enumerate**:
```
python timeit.py -s 'bool_list = [True] * 1000' 'for i, v in enumerate(bool_list): bool_list[i] = False'
```
1000 loops, best of 3: 385 usec per loop
**Numpy**:
```
python timeit.py -s 'import numpy' -s 'bool_list = numpy.zeros((1000,), dtype=numpy.bool)' 'bool_list[:] = False'
```
10000 loops, best of 3: 15.9 usec per loop
**Slice-based replacement with list multiplication:**
```
python timeit.py -s 'bool_list = [True] * 1000' 'bool_list[:] = [False] * len(bool_list)'
```
10000 loops, best of 3: 23.3 usec per loop
**Reference replacement with list multiplication**
```
python timeit.py -s 'bool_list = [True] * 1000' 'bool_list = [False] * len(bool_list)'
```
10000 loops, best of 3: 11.3 usec per loop |
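The same comparison can also be reproduced from inside Python with `timeit.timeit` (a sketch only — absolute numbers will differ by machine and interpreter; the candidate names here are made up for the demo):

```python
import timeit

setup = "bool_list = [True] * 1000"
candidates = {
    "slice + multiply": "bool_list[:] = [False] * len(bool_list)",
    "index loop": "for i in range(len(bool_list)): bool_list[i] = False",
}
# Run each statement 1000 times and collect total seconds.
results = {name: timeit.timeit(stmt, setup=setup, number=1000)
           for name, stmt in candidates.items()}
for name, seconds in sorted(results.items(), key=lambda kv: kv[1]):
    print(name, seconds)
```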
socket.shutdown vs socket.close | 409,783 | 87 | 2009-01-03T20:37:55Z | 409,843 | 21 | 2009-01-03T21:10:08Z | [
"python",
"sockets",
"asynchronous"
] | I recently saw a bit of code that looked like this (with sock being a socket object of course):
```
sock.shutdown(socket.SHUT_RDWR)
sock.close()
```
What exactly is the purpose of calling shutdown on the socket and then closing it? If it makes a difference, this socket is being used for non-blocking IO. | Here's one [explanation](http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/com.ibm.aix.progcomm/doc/progcomc/skt_shutdn.htm):
> Once a socket is no longer required,
> the calling program can discard the
> socket by applying a close subroutine
> to the socket descriptor. If a
> reliable delivery socket has data
> associated with it when a close takes
> place, the system continues to attempt
> data transfer. However, if the data is
> still undelivered, the system discards
> the data. Should the application
> program have no use for any pending
> data, it can use the shutdown
> subroutine on the socket prior to
> closing it. |
socket.shutdown vs socket.close | 409,783 | 87 | 2009-01-03T20:37:55Z | 410,231 | 16 | 2009-01-04T01:28:29Z | [
"python",
"sockets",
"asynchronous"
] | I recently saw a bit of code that looked like this (with sock being a socket object of course):
```
sock.shutdown(socket.SHUT_RDWR)
sock.close()
```
What exactly is the purpose of calling shutdown on the socket and then closing it? If it makes a difference, this socket is being used for non-blocking IO. | Explanation of shutdown and close: [Graceful shutdown (msdn)](http://msdn.microsoft.com/en-us/library/ms738547(VS.85).aspx)
Shutdown (in your case) indicates to the other end of the connection that there is no further intention to read from or write to the socket. Then close frees up any memory associated with the socket.
Omitting shutdown may cause the socket to linger in the OS's stack until the connection has been closed gracefully.
IMO the names 'shutdown' and 'close' are misleading; 'close' and 'destroy' would emphasise their differences.
socket.shutdown vs socket.close | 409,783 | 87 | 2009-01-03T20:37:55Z | 598,759 | 159 | 2009-02-28T21:58:05Z | [
"python",
"sockets",
"asynchronous"
] | I recently saw a bit of code that looked like this (with sock being a socket object of course):
```
sock.shutdown(socket.SHUT_RDWR)
sock.close()
```
What exactly is the purpose of calling shutdown on the socket and then closing it? If it makes a difference, this socket is being used for non-blocking IO. | Calling `close` and `shutdown` have two different effects on the underlying socket.
The first thing to point out is that the socket is a resource in the underlying OS and **multiple processes can have a handle for the same underlying socket.**
When you call `close` it decrements the handle count by one and if the handle count has reached zero then the socket and associated connection goes through the normal close procedure (effectively sending a FIN / EOF to the peer) and the socket is deallocated.
The thing to pay attention to here is that if the handle count does not reach zero because another process still has a handle to the socket then the connection **is not closed and the socket is not deallocated.**
On the other hand calling `shutdown` for reading and writing closes the underlying connection and sends a FIN / EOF to the peer regardless of how many processes have handles to the socket. However, it **does not** deallocate the socket and you still need to call close afterward. |
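A minimal runnable sketch of the shutdown-then-close pattern described above (using `socket.socketpair()` so it works without a network peer — the flow here is illustrative, not taken from the original answer):

```python
import socket

# A connected pair of sockets keeps the demo self-contained.
a, b = socket.socketpair()
a.sendall(b"hello")
data = b.recv(5)               # normal data transfer

a.shutdown(socket.SHUT_RDWR)   # send FIN/EOF to the peer immediately
eof = b.recv(5)                # empty bytes: the peer observed our EOF
a.close()                      # then release the local handle
b.close()
print(data, eof)
```

`recv` returning an empty bytes object is exactly the FIN/EOF signal the answer describes.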
How do you embed album art into an MP3 using Python? | 409,949 | 21 | 2009-01-03T21:59:30Z | 1,002,814 | 10 | 2009-06-16T17:13:44Z | [
"python",
"mp3",
"metadata",
"id3",
"albumart"
] | I've been using mutagen for reading and writing MP3 tags, but I want to be able to embed album art directly into the file. | I've used the [eyeD3](http://eyed3.nicfit.net/) module to do this exact thing.
```
from time import localtime  # needed below for tag.setDate()
import eyeD3

def update_id3(mp3_file_name, artwork_file_name, artist, item_title):
#edit the ID3 tag to add the title, artist, artwork, date, and genre
tag = eyeD3.Tag()
tag.link(mp3_file_name)
tag.setVersion([2,3,0])
tag.addImage(0x08, artwork_file_name)
tag.setArtist(artist)
tag.setDate(localtime().tm_year)
tag.setTitle(item_title)
tag.setGenre("Trance")
tag.update()
``` |
How do you embed album art into an MP3 using Python? | 409,949 | 21 | 2009-01-03T21:59:30Z | 1,937,425 | 28 | 2009-12-20T23:02:50Z | [
"python",
"mp3",
"metadata",
"id3",
"albumart"
] | I've been using mutagen for reading and writing MP3 tags, but I want to be able to embed album art directly into the file. | Here is how to add example.png as album cover into example.mp3 with mutagen:
```
from mutagen.mp3 import MP3
from mutagen.id3 import ID3, APIC, error
audio = MP3('example.mp3', ID3=ID3)
# add ID3 tag if it doesn't exist
try:
audio.add_tags()
except error:
pass
audio.tags.add(
APIC(
encoding=3, # 3 is for utf-8
mime='image/png', # image/jpeg or image/png
type=3, # 3 is for the cover image
desc=u'Cover',
        data=open('example.png', 'rb').read()  # binary mode for the image bytes
)
)
audio.save()
``` |
For Python programmers, is there anything equivalent to Perl's CPAN? | 410,163 | 32 | 2009-01-04T00:30:37Z | 410,170 | 32 | 2009-01-04T00:34:17Z | [
"python",
"perl"
] | I'm learning Python now because of the Django framework. I have been a Perl programmer for a number of years and I'm so used to Perl's tools. One of the things that I really miss is Perl's CPAN and its tools. Is there anything equivalent in Python? I would like to be able to search, install and maintain Python modules as easy as CPAN. Also, a system that can handle dependencies automatically. I tried to install a module in Python by downloading a zip file from a website, unzipped it, then do:
`sudo python setup.py install`
but it's looking for another module. Now, lazy as I am, I don't like chasing dependencies and such, is there an easy way? | sammy, have a look at [pip](http://pip.openplans.org/), which will let you do "pip install foo", and will download and install its dependencies (as long as they're on [PyPI](http://pypi.python.org/)). There's also [EasyInstall](http://peak.telecommunity.com/DevCenter/EasyInstall), but pip is intended to replace that. |
For Python programmers, is there anything equivalent to Perl's CPAN? | 410,163 | 32 | 2009-01-04T00:30:37Z | 412,211 | 10 | 2009-01-05T03:32:27Z | [
"python",
"perl"
] | I'm learning Python now because of the Django framework. I have been a Perl programmer for a number of years and I'm so used to Perl's tools. One of the things that I really miss is Perl's CPAN and its tools. Is there anything equivalent in Python? I would like to be able to search, install and maintain Python modules as easy as CPAN. Also, a system that can handle dependencies automatically. I tried to install a module in Python by downloading a zip file from a website, unzipped it, then do:
`sudo python setup.py install`
but it's looking for another module. Now, lazy as I am, I don't like chasing dependencies and such, is there an easy way? | It might be useful to note that pip and easy\_install both use the [Python Package Index (PyPI)](http://pypi.python.org/pypi), sometimes called the "Cheeseshop", to search for packages. Easy\_install is currently the most universally supported, as it works with both setuptools and distutils style packaging, completely. See [James Bennett's commentary](http://www.b-list.org/weblog/2008/dec/14/packaging/) on python packaging for good reasons to use pip, and [Ian Bicking's reply](http://blog.ianbicking.org/2008/12/14/a-few-corrections-to-on-packaging/) for some clarifications on the differences. |
Natural/Relative days in Python | 410,221 | 31 | 2009-01-04T01:20:55Z | 410,335 | 7 | 2009-01-04T03:02:03Z | [
"python",
"datetime",
"human-readable",
"datetime-parsing",
"humanize"
] | I'd like a way to show natural times for dated items in Python. Similar to how Twitter will show a message from "a moment ago", "a few minutes ago", "two hours ago", "three days ago", etc.
Django 1.0 has a "humanize" method in django.contrib. I'm not using the Django framework, and even if I were, it's more limited than what I'd like.
Please let me (and generations of future searchers) know if there is a good working solution already. Since this is a common enough task, I imagine there must be something. | Or you could easily adapt [timesince.py](http://code.djangoproject.com/browser/django/trunk/django/utils/timesince.py) from Django which only has 2 other dependencies to itself: one for translation (which you might not need) and one for timezones (which can be easily adapted).
By the way, [Django has a BSD license](http://code.djangoproject.com/browser/django/trunk/LICENSE) which is pretty flexible, you'll be able to use it in whatever project you are currently using. |
Natural/Relative days in Python | 410,221 | 31 | 2009-01-04T01:20:55Z | 410,482 | 16 | 2009-01-04T04:52:36Z | [
"python",
"datetime",
"human-readable",
"datetime-parsing",
"humanize"
] | I'd like a way to show natural times for dated items in Python. Similar to how Twitter will show a message from "a moment ago", "a few minutes ago", "two hours ago", "three days ago", etc.
Django 1.0 has a "humanize" method in django.contrib. I'm not using the Django framework, and even if I were, it's more limited than what I'd like.
Please let me (and generations of future searchers) know if there is a good working solution already. Since this is a common enough task, I imagine there must be something. | While not useful to you at this very moment, it may be so for future searchers:
The babel module, which deals with all sorts of locale stuff, has a function for doing more or less what you want. Currently it's only in their trunk though, not in the latest public release (version 0.9.4). Once the functionality lands in a release, you could do something like:
```
from datetime import timedelta
from babel.dates import format_timedelta
delta = timedelta(days=6)
format_timedelta(delta, locale='en_US')
u'1 week'
```
This is taken straight from [the babel documentation on time delta formatting](http://babel.edgewall.org/wiki/Documentation/dates.html#time-delta-formatting). This will at least get you part of the way. It won't do fuzziness down to the level of "moments ago" and such, but it will correctly pluralize "n minutes" etc.
For what it's worth, the babel module also contains functions for formatting dates and times according to locale, which might be useful when the time delta is large.
Natural/Relative days in Python | 410,221 | 31 | 2009-01-04T01:20:55Z | 5,164,027 | 29 | 2011-03-02T06:09:12Z | [
"python",
"datetime",
"human-readable",
"datetime-parsing",
"humanize"
] | I'd like a way to show natural times for dated items in Python. Similar to how Twitter will show a message from "a moment ago", "a few minutes ago", "two hours ago", "three days ago", etc.
Django 1.0 has a "humanize" method in django.contrib. I'm not using the Django framework, and even if I were, it's more limited than what I'd like.
Please let me (and generations of future searchers) know if there is a good working solution already. Since this is a common enough task, I imagine there must be something. | Twitter dates in specific are interesting because they are relative only for the first day. After 24 hours they just show the month and day. After a year they start showing the last two digits of the year. Here's a sample function that does something more akin to Twitter relative dates, though it always shows the year too after 24 hours. It's US locale only, but you can always alter it as needed.
```
# tested in Python 2.7
import datetime
def prettydate(d):
diff = datetime.datetime.utcnow() - d
s = diff.seconds
if diff.days > 7 or diff.days < 0:
return d.strftime('%d %b %y')
elif diff.days == 1:
return '1 day ago'
elif diff.days > 1:
return '{} days ago'.format(diff.days)
elif s <= 1:
return 'just now'
elif s < 60:
return '{} seconds ago'.format(s)
elif s < 120:
return '1 minute ago'
elif s < 3600:
return '{} minutes ago'.format(s/60)
elif s < 7200:
return '1 hour ago'
else:
return '{} hours ago'.format(s/3600)
``` |
what's the 5 character alphanumeric id in reddit URL? | 410,485 | 12 | 2009-01-04T05:00:52Z | 410,504 | 24 | 2009-01-04T05:21:43Z | [
"python",
"url",
"slug",
"reddit"
] | Whats the `7n5lu` in the reddit URL
`http://www.reddit.com/r/reddit.com/comments/7n5lu/man_can_fly_if_you_watch_one_video_in_2`
how is it generated?
update:
@Gerald, thanks for the code. I initially thought this was some obfuscation of the id.
But it is just converting the integer to a more compact representation. So I am wondering: why is this being done? Why not use the original integer itself?
```
>>> to36(4000)
'334'
>>> to36(4001)
'335'
``` | The reddit source code [is available](https://github.com/reddit/reddit)! Here is what I found for generating that string:
```
def to_base(q, alphabet):
if q < 0: raise ValueError, "must supply a positive integer"
l = len(alphabet)
converted = []
while q != 0:
q, r = divmod(q, l)
converted.insert(0, alphabet[r])
return "".join(converted) or '0'
def to36(q):
return to_base(q, '0123456789abcdefghijklmnopqrstuvwxyz')
```
and elsewhere, under the "Link" class:
```
@property
def _id36(self):
return to36(self._id)
``` |
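For the round trip back, Python's built-in `int` accepts a base argument, so no custom decoder is needed. The sketch below (a condensed copy of the conversion above, repeated so it runs standalone) also shows why base 36 is used at all — the id `7n5lu` is the integer 12837666, three characters shorter in the URL:

```python
def to36(q):
    # Same algorithm as the reddit code above, condensed for this sketch.
    alphabet = '0123456789abcdefghijklmnopqrstuvwxyz'
    converted = []
    while q:
        q, r = divmod(q, 36)
        converted.insert(0, alphabet[r])
    return ''.join(converted) or '0'

def from36(s):
    return int(s, 36)       # Python decodes base 36 natively

print(to36(12837666))       # '7n5lu' -- 5 characters instead of 8
print(from36('7n5lu'))      # 12837666
print(to36(4000))           # '334', matching the question's example
```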
How do I parse XML from a Google app engine app? | 410,954 | 14 | 2009-01-04T13:05:41Z | 410,971 | 8 | 2009-01-04T13:27:43Z | [
"python",
"xml",
"google-app-engine",
"parsing"
] | How do I parse XML from a Google app engine app? Any examples? | Take a look at [existing answers on XML and Python](http://stackoverflow.com/search?q=iterparse).
Something like this could work:
```
from cStringIO import StringIO
from xml.etree import cElementTree as etree
xml = "<a>aaa<b>bbb</b></a>"
for event, elem in etree.iterparse(StringIO(xml)):
print elem.text
```
It prints:
```
bbb
aaa
``` |
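The elements print in end-tag order (inner elements complete first), which is why `bbb` comes before `aaa`. If document order is needed instead, `iterparse` can be asked for `start` events — a sketch, using the Python 3 module names rather than the `cElementTree`/`cStringIO` names in the answer:

```python
from io import BytesIO
from xml.etree import ElementTree as etree

xml = b"<a>aaa<b>bbb</b></a>"
order = []
# Request 'start' events to see elements in document order.
for event, elem in etree.iterparse(BytesIO(xml), events=("start",)):
    order.append(elem.tag)
print(order)   # ['a', 'b']
```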
How do I parse XML from a Google app engine app? | 410,954 | 14 | 2009-01-04T13:05:41Z | 711,167 | 20 | 2009-04-02T19:09:49Z | [
"python",
"xml",
"google-app-engine",
"parsing"
] | How do I parse XML from a Google app engine app? Any examples? | Since the question was asked, Google has whitelisted pyexpat, which includes minidom, so you can use the following code without having to upload any libraries:
```
from xml.dom import minidom
dom = minidom.parseString('<eg>example text</eg>')
```
More information:
<http://docs.python.org/library/xml.dom.minidom.html> |
Variable number of inputs with Django forms possible? | 411,761 | 17 | 2009-01-04T22:00:33Z | 411,852 | 7 | 2009-01-04T22:57:03Z | [
"python",
"django",
"django-forms"
] | Is it possible to have a variable number of fields using django forms?
The specific application is this:
A user can upload as many pictures as they want on the image upload form. Once the pictures are uploaded they are taken to a page where they can give the pictures a name and description. The number of pictures will depend on how many the user has chosen to upload.
So **how do I get django to generate a form using a variable number of input fields** (which could be passed as an argument if necessary)?
**edit:** a few things have changed since the [article mentioned in jeff bauer's answer](http://www.b-list.org/weblog/2008/nov/09/dynamic-forms/) was written.
Namely this line of code which doesn't seem to work:
```
# BAD CODE DO NOT USE!!!
return type('ContactForm', [forms.BaseForm], { 'base_fields': fields })
```
So here is what I came up with...
# The Answer I used:
```
from tagging.forms import TagField
from django import forms
def make_tagPhotos_form(photoIdList):
"Expects a LIST of photo objects (ie. photo_sharing.models.photo)"
fields = {}
for id in photoIdList:
id = str(id)
fields[id+'_name'] = forms.CharField()
fields[id+'_tags'] = TagField()
fields[id+'_description'] = forms.CharField(widget=forms.Textarea)
return type('tagPhotos', (forms.BaseForm,), { 'base_fields': fields })
```
note tagging is not part of django, but it is free and very useful. check it out: [django-tagging](http://code.google.com/p/django-tagging/) | If you run
```
python manage.py shell
```
and type:
```
from app.forms import PictureForm
p = PictureForm()
p.fields
type(p.fields)
```
you'll see that `p.fields` is a SortedDict. You just have to insert a new field, something like:
```
p.fields.insert(len(p.fields)-2, 'fieldname', Field())
```
In this case it would insert before the last field, a new field. You should now adapt to your code.
Another alternative is to write a for/while loop in your template and build the form in HTML, but Django forms rock for a reason, right?
Variable number of inputs with Django forms possible? | 411,761 | 17 | 2009-01-04T22:00:33Z | 411,862 | 7 | 2009-01-04T23:00:48Z | [
"python",
"django",
"django-forms"
] | Is it possible to have a variable number of fields using django forms?
The specific application is this:
A user can upload as many pictures as they want on the image upload form. Once the pictures are uploaded they are taken to a page where they can give the pictures a name and description. The number of pictures will depend on how many the user has chosen to upload.
So **how do I get django to generate a form using a variable number of input fields** (which could be passed as an argument if necessary)?
**edit:** a few things have changed since the [article mentioned in jeff bauer's answer](http://www.b-list.org/weblog/2008/nov/09/dynamic-forms/) was written.
Namely this line of code which doesn't seem to work:
```
# BAD CODE DO NOT USE!!!
return type('ContactForm', [forms.BaseForm], { 'base_fields': fields })
```
So here is what I came up with...
# The Answer I used:
```
from tagging.forms import TagField
from django import forms
def make_tagPhotos_form(photoIdList):
"Expects a LIST of photo objects (ie. photo_sharing.models.photo)"
fields = {}
for id in photoIdList:
id = str(id)
fields[id+'_name'] = forms.CharField()
fields[id+'_tags'] = TagField()
fields[id+'_description'] = forms.CharField(widget=forms.Textarea)
return type('tagPhotos', (forms.BaseForm,), { 'base_fields': fields })
```
note tagging is not part of django, but it is free and very useful. check it out: [django-tagging](http://code.google.com/p/django-tagging/) | Use either multiple forms (`django.forms.Form` instances, not the HTML `<form>` tag)
```
class Foo(forms.Form):
    field = forms.CharField()
forms = [Foo(prefix=i) for i in xrange(x)]
```
or add multiple fields to the form dynamically using self.fields.
```
class Bar(forms.Form):
def __init__(self, fields, *args, **kwargs):
super(Bar, self).__init__(*args, **kwargs)
for i in xrange(fields):
            self.fields['my_field_%i' % i] = forms.CharField()
``` |
Variable number of inputs with Django forms possible? | 411,761 | 17 | 2009-01-04T22:00:33Z | 412,149 | 13 | 2009-01-05T02:35:12Z | [
"python",
"django",
"django-forms"
] | Is it possible to have a variable number of fields using django forms?
The specific application is this:
A user can upload as many pictures as they want on the image upload form. Once the pictures are uploaded they are taken to a page where they can give the pictures a name and description. The number of pictures will depend on how many the user has chosen to upload.
So **how do I get django to generate a form using a variable number of input fields** (which could be passed as an argument if necessary)?
**edit:** a few things have changed since the [article mentioned in jeff bauer's answer](http://www.b-list.org/weblog/2008/nov/09/dynamic-forms/) was written.
Namely this line of code which doesn't seem to work:
```
# BAD CODE DO NOT USE!!!
return type('ContactForm', [forms.BaseForm], { 'base_fields': fields })
```
So here is what I came up with...
# The Answer I used:
```
from tagging.forms import TagField
from django import forms
def make_tagPhotos_form(photoIdList):
"Expects a LIST of photo objects (ie. photo_sharing.models.photo)"
fields = {}
for id in photoIdList:
id = str(id)
fields[id+'_name'] = forms.CharField()
fields[id+'_tags'] = TagField()
fields[id+'_description'] = forms.CharField(widget=forms.Textarea)
return type('tagPhotos', (forms.BaseForm,), { 'base_fields': fields })
```
note tagging is not part of django, but it is free and very useful. check it out: [django-tagging](http://code.google.com/p/django-tagging/) | Yes, it's possible to create forms dynamically in Django. You can even mix and match dynamic fields with normal fields.
```
class EligibilityForm(forms.Form):
def __init__(self, *args, **kwargs):
super(EligibilityForm, self).__init__(*args, **kwargs)
# dynamic fields here ...
self.fields['plan_id'] = CharField()
# normal fields here ...
date_requested = DateField()
```
For a better elaboration of this technique, see James Bennett's article: **So you want a dynamic form?**
<http://www.b-list.org/weblog/2008/nov/09/dynamic-forms/> |
How do I retrieve a Django model class dynamically? | 411,810 | 15 | 2009-01-04T22:32:50Z | 411,880 | 32 | 2009-01-04T23:11:28Z | [
"python",
"django",
"django-models",
"django-queryset"
] | Without having the full module path of a Django model, is it possible to do something like:
```
model = 'User' [in Django namespace]
model.objects.all()
```
...as opposed to:
```
User.objects.all().
```
EDIT: I am trying to make this call based on command-line input. Is it possible to avoid the import statement, e.g.,
```
model = django.authx.models.User
```
Without Django returning the error:
```
"global name django is not defined."
``` | I think you're looking for this:
```
from django.db.models.loading import get_model
model = get_model('app_name', 'model_name')
```
There are other methods, of course, but this is the way I'd handle it if you don't know what models file you need to import into your namespace. (Note there's really no way to safely get a model without first knowing what app it belongs to. Look at the source code to loading.py if you want to test your luck at iterating over all the apps' models.)
**Update:** According to Django's [deprecation timeline](https://docs.djangoproject.com/en/dev/internals/deprecation/#deprecation-removed-in-1-9), `django.db.models.loading` has been deprecated in Django 1.7 and will be removed in Django 1.9. As pointed out in [Alasdair's answer](http://stackoverflow.com/a/28380435/996114), a new API for dynamically loading models was added to Django 1.7. |
How do I retrieve a Django model class dynamically? | 411,810 | 15 | 2009-01-04T22:32:50Z | 28,380,435 | 11 | 2015-02-07T09:19:49Z | [
"python",
"django",
"django-models",
"django-queryset"
] | Without having the full module path of a Django model, is it possible to do something like:
```
model = 'User' [in Django namespace]
model.objects.all()
```
...as opposed to:
```
User.objects.all().
```
EDIT: I am trying to make this call based on command-line input. Is it possible to avoid the import statement, e.g.,
```
model = django.authx.models.User
```
Without Django returning the error:
```
"global name django is not defined."
``` | For Django 1.7+, there is an [applications registry](https://docs.djangoproject.com/en/1.8/ref/applications/#module-django.apps). You can use the [`apps.get_model`](https://docs.djangoproject.com/en/1.8/ref/applications/#django.apps.apps.get_model) method to dynamically get a model.
```
from django.apps import apps
MyModel = apps.get_model('app_label', 'MyModel')
``` |
Django file upload failing occasionally | 411,902 | 8 | 2009-01-04T23:28:35Z | 416,189 | 8 | 2009-01-06T11:33:53Z | [
"python",
"django",
"apache"
] | I am trying to port my first Django 1.0.2 application to run on OSX/Leopard with Apache + mod\_python 3.3.1 + python 2.6.1 (all running in 64-bit mode) and I am experiencing an occasional error when uploading a file that was not present when testing with the Django development server.
The code for the upload is similar to what described in the Django documentation:
```
class UploadFileForm(forms.Form):
file = forms.FileField()
description = forms.CharField(max_length=100)
notifygroup = forms.BooleanField(label='Notify Group?', required=False)
def upload_file(request, date, meetingid ):
print date, meetingid
if request.method == 'POST':
print 'before reloading the form...'
form = UploadFileForm(request.POST, request.FILES)
print 'after reloading the form'
if form.is_valid():
try:
handle_uploaded_file(request.FILES['file'], request.REQUEST['date'], request.REQUEST['description'], form.cleaned_data['notifygroup'], meetingid )
except:
return render_to_response('uploaded.html', { 'message': 'Error! File not uploaded!' })
return HttpResponseRedirect('/myapp/uploaded/')
else:
form = UploadFileForm()
return render_to_response('upload.html', {'form': form, 'date':date, 'meetingid':meetingid})
```
This code normally works correctly, but sometimes (say, once every 10 uploads) and after a fairly long waiting time, it fails with the following error:
```
IOError at /myapp/upload/2009-01-03/1
Client read error (Timeout?)
Request Method: POST
Request URL: http://192.168.0.164/myapp/upload/2009-01-03/1
Exception Type: IOError
Exception Value:
Client read error (Timeout?)
Exception Location: /Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py in read, line 406
Python Executable: /usr/sbin/httpd
Python Version: 2.6.1
Python Path: ['/djangoapps/myapp/', '/djangoapps/', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python26.zip', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/plat-darwin', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/plat-mac', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/lib-tk', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/lib-old', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/lib-dynload', '/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages']
Server time: Sun, 4 Jan 2009 22:42:04 +0100
Environment:
Request Method: POST
Request URL: http://192.168.0.164/myapp/upload/2009-01-03/1
Django Version: 1.0.2 final
Python Version: 2.6.1
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.admin',
'myapp.application1']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware')
Traceback:
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response
86. response = callback(request, *callback_args, **callback_kwargs)
File "/djangoapps/myapp/../myapp/application1/views.py" in upload_file
137. form = UploadFileForm(request.POST, request.FILES)
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/core/handlers/modpython.py" in _get_post
113. self._load_post_and_files()
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/core/handlers/modpython.py" in _load_post_and_files
87. self._post, self._files = self.parse_file_upload(self.META, self._req)
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/__init__.py" in parse_file_upload
124. return parser.parse()
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in parse
134. for item_type, meta_data, field_stream in Parser(stream, self._boundary):
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in __iter__
607. for sub_stream in boundarystream:
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in next
421. return LazyStream(BoundaryIter(self._stream, self._boundary))
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in __init__
447. unused_char = self._stream.read(1)
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in read
300. out = ''.join(parts())
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in parts
293. chunk = self.next()
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in next
315. output = self._producer.next()
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in next
376. data = self.flo.read(self.chunk_size)
File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/django/http/multipartparser.py" in read
406. return self._file.read(num_bytes)
Exception Type: IOError at /myapp/upload/2009-01-03/1
Exception Value: Client read error (Timeout?)
```
I tried to run everything using mod\_wsgi and no difference.
Does anybody know what am I doing wrong?
Thanks in advance for your help!
ppdo
=====
Updated:
Though I succeeded uploading large files (60+ MB), when it fails it fails with no evident relationship with the size of the upload, i.e. it fails also with 10kB files that have successfully been uploaded before. | Using mod\_wsgi made the problem go away for Firefox.
Limiting my research to an interaction problem between Apache and Safari, I stumbled upon this WebKit bug report <https://bugs.webkit.org/show_bug.cgi?id=5760> that describes something very similar to what is happening, and it is apparently still open. Reading this gave me the idea to try disabling keepalive and, though I need to test it more extensively, it seems the problem is gone.
A simple:
```
BrowserMatch "Safari" nokeepalive
```
in the Apache configuration did the trick.
How to (simply) connect Python to my web site? | 412,368 | 3 | 2009-01-05T05:57:25Z | 412,371 | 11 | 2009-01-05T06:00:41Z | [
"python",
"website"
] | I've been playing with Python for a while and wrote a little program to make a database to keep track of some info (its really basic, and hand written). I want to add the ability to create a website from the data that I will then pass to my special little place on the internet. What should I use to build up the website? After dabbling with Django, I've found it overkill and over my head, but if that's the only option I'll learn to use it.
Does anyone know an easy way to output a database of arbitrary format to one or more HTML (or different format) files? | I would generate a page or two of HTML using a template engine ([Jinja](http://jinja.pocoo.org/2/) is my personal choice) and just stick them in your `public_html` directory or wherever the webserver's root is. |
Algorithm to keep a list of percentages to add up to 100% | 412,943 | 4 | 2009-01-05T12:18:09Z | 412,979 | 9 | 2009-01-05T12:33:25Z | [
"python",
"algorithm"
] | (code examples are python)
Let's assume we have a list of percentages that add up to 100:
```
mylist = [2.0, 7.0, 12.0, 35.0, 21.0, 23.0]
```
Some values of mylist may be changed, others must stay fixed.
Let's assume the first three (2.0, 7.0, 12.0) must stay fixed and the last three (35.0, 21.0, 23.0) may be changed.
```
fix = mylist[:3]
vari = mylist[3:]
```
The goal is to add a new item to mylist, while sum(mylist) stays 100.0 and the vari
items keep their relations to each other. For that we need to subtract a CERTAIN
PERCENTAGE from each vari item. Example: let's assume we want to add 4.0 to mylist.
Using an ugly approximation loop I found out that I need to subtract ca. 5.0634%
from each vari item (CERTAIN PERCENTAGE = 5.0634):
```
adjusted = []
for number in vari:
    adjusted.append(number - (number * (5.0634 / 100.0)))
adjusted.extend(fix)
adjusted.append(4.0)
```
adjusted now contains my desired result.
# My question is how to calculate CERTAIN PERCENTAGE ;.) | How's this?
```
def adjustAppend(v, n):
    weight = -n / sum(v)
    return [i + i * weight for i in v] + [n]
```
Given a list of numbers *v*, append a new number, *n*.
Weight the existing numbers to keep the sum the same.
```
sum(v) == sum( v + [n] )
```
Each element of *v*, *i*, must be reduced by some function of *i*, *r*(*i*) such that
```
sum(r(i)) == -n
```
or
```
sum( map( r, v ) ) == -n
```
Therefore, the weighting function is `-(n*i)/sum(v)` |
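As a numerical sanity check of this derivation (my code, using the numbers from the question - `adjust_append` is a renamed sketch of the answer's function, assuming Python 3 true division):

```python
# Verify that subtracting weight*i from each adjustable item keeps the
# total at 100 after appending the new value.
def adjust_append(v, n):
    weight = -n / sum(v)                  # true division assumed (Python 3)
    return [i + i * weight for i in v] + [n]

fix = [2.0, 7.0, 12.0]
vari = [35.0, 21.0, 23.0]

adjusted = adjust_append(vari, 4.0)
print(sum(fix) + sum(adjusted))           # total stays at 100.0 (up to rounding)
print(100.0 * 4.0 / sum(vari))            # ~5.0633, matching the hand-found 5.0634%
```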
How to implement property() with dynamic name (in python) | 412,951 | 9 | 2009-01-05T12:19:45Z | 412,997 | 10 | 2009-01-05T12:42:31Z | [
"python",
"oop",
"parameters",
"properties"
] | I am programming a simulation of single neurons. Therefore I have to handle a lot of parameters. The idea is that I have two classes, one for a SingleParameter and one for a Collection of parameters. I use property() to access the parameter value easily and to make the code more readable. This works perfectly for a single parameter, but I don't know how to implement it for the collection, as I want to name the property in Collection after the SingleParameter. Here is an example:
```
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)

    def get(self):
        return self._v

    def set(self, value):
        self._v = value

    v = property(fget=get, fset=set, doc='value of parameter')

par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')
# par1 and par2 I can access perfectly via 'p1.v = ...'
# or get its value with 'p1.v'
class Collection(object):
    def __init__(self):
        self.dict = {}

    def __getitem__(self, name):
        return self.dict[name]  # get the whole object
        # to get the value instead:
        # return self.dict[name].v

    def add(self, parameter):
        self.dict[parameter.name] = parameter
        # now comes the part that I don't know how to implement with property():
        # It should be something like
        # self.__dict__[parameter.name] = property(...) ?
col = Collection()
col.add(par1)
col.add(par2)
col['par1'] # gives the whole object
# Now here is what I would like to get:
# col.par1 -> should result like col['par1'].v
# col.par1 = 5 -> should result like col['par1'].v = 5
```
---
Other questions that I asked to understand property():
* [Why do managed attributes just work for class attributes and not for instance attributes in python?](http://stackoverflow.com/questions/428264 "Why do managed attributes just work for class attributes and not for instance attributes in python?")
* [How can I assign a new class attribute via \_\_dict\_\_ in python?](http://stackoverflow.com/questions/432786/ "setting class attributes with setattr()") | Look at built-in functions `getattr` and `setattr`. You'll probably be a lot happier. |
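One way to apply that hint (my sketch, not the answerer's code; class names are borrowed from the question) is to route unknown attribute access through `__getattr__`/`__setattr__`:

```python
# Dynamic attribute access for the parameter collection: __getattr__ reads
# a stored parameter's value, __setattr__ writes it.
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.v = default_value
        self.unit = unit

class Collection(object):
    def __init__(self):
        # bypass our own __setattr__ while bootstrapping the storage dict
        object.__setattr__(self, 'params', {})

    def add(self, parameter):
        self.params[parameter.name] = parameter

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            return self.params[name].v
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name in self.params:
            self.params[name].v = value
        else:
            object.__setattr__(self, name, value)

col = Collection()
col.add(SingleParameter('par1', 10, 'mV'))
col.par1 = 5
print(col.par1)   # 5
```

Unlike class-level properties, this keeps each Collection instance's parameters independent.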
How to implement property() with dynamic name (in python) | 412,951 | 9 | 2009-01-05T12:19:45Z | 439,708 | 7 | 2009-01-13T16:33:49Z | [
"python",
"oop",
"parameters",
"properties"
] | I am programming a simulation of single neurons. Therefore I have to handle a lot of parameters. The idea is that I have two classes, one for a SingleParameter and one for a Collection of parameters. I use property() to access the parameter value easily and to make the code more readable. This works perfectly for a single parameter, but I don't know how to implement it for the collection, as I want to name the property in Collection after the SingleParameter. Here is an example:
```
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)

    def get(self):
        return self._v

    def set(self, value):
        self._v = value

    v = property(fget=get, fset=set, doc='value of parameter')

par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')
# par1 and par2 I can access perfectly via 'p1.v = ...'
# or get its value with 'p1.v'
class Collection(object):
    def __init__(self):
        self.dict = {}

    def __getitem__(self, name):
        return self.dict[name]  # get the whole object
        # to get the value instead:
        # return self.dict[name].v

    def add(self, parameter):
        self.dict[parameter.name] = parameter
        # now comes the part that I don't know how to implement with property():
        # It should be something like
        # self.__dict__[parameter.name] = property(...) ?
col = Collection()
col.add(par1)
col.add(par2)
col['par1'] # gives the whole object
# Now here is what I would like to get:
# col.par1 -> should result like col['par1'].v
# col.par1 = 5 -> should result like col['par1'].v = 5
```
---
Other questions that I asked to understand property():
* [Why do managed attributes just work for class attributes and not for instance attributes in python?](http://stackoverflow.com/questions/428264 "Why do managed attributes just work for class attributes and not for instance attributes in python?")
* [How can I assign a new class attribute via \_\_dict\_\_ in python?](http://stackoverflow.com/questions/432786/ "setting class attributes with setattr()") | Using the same get/set functions for both classes forces you into an ugly hack with the argument list. Very sketchy, this is how I would do it:
In class SingleParameter, define get and set as usual:
```
def get(self):
    return self._s

def set(self, value):
    self._s = value
```
In class Collection, you cannot know the information until you create the property, so you define the metaset/metaget function and particularize them only later with a lambda function:
```
def metaget(self, par):
    return par.s

def metaset(self, value, par):
    par.s = value

def add(self, par):
    self[par.name] = par
    setattr(Collection, par.name,
            property(fget=lambda x: Collection.metaget(x, par),
                     fset=lambda x, y: Collection.metaset(x, y, par)))
``` |
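A self-contained variant of the same idea (my sketch, not part of the answer): bind `par` as a default argument so the capture is explicit, and attach each property in one expression. The same caveat as the answer's code applies - the properties live on the class, so they are shared by every Collection instance.

```python
# Attach one property per parameter with setattr; par=par freezes the
# current parameter object into each lambda's signature.
class Par(object):
    def __init__(self, name, v):
        self.name, self.v = name, v

class Collection(dict):
    def add(self, par):
        self[par.name] = par
        setattr(Collection, par.name,
                property(fget=lambda self, par=par: par.v,
                         fset=lambda self, value, par=par: setattr(par, 'v', value)))

col = Collection()
col.add(Par('par1', 10))
col.add(Par('par2', 20))
col.par1 = 5
print(col.par1, col.par2)   # 5 20
```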
Is there a Python library that can simulate network traffic from different addresses | 414,025 | 16 | 2009-01-05T18:20:39Z | 414,078 | 17 | 2009-01-05T18:44:27Z | [
"python",
"networking"
] | Is there a python library out there that can allow me to send UDP packets to a machine (sending to localhost is ok) from different source addresses and ports? I remember that one existed, but can't find it anymore. | You can spoof an IP address using the [Scapy](http://www.secdev.org/projects/scapy/) library.
Here's an example from [Packet Wizardry: Ruling the Network with Python](http://web.archive.org/web/20120401161821/http://packetstorm.linuxsecurity.com/papers/general/blackmagic.txt):
```
#!/usr/bin/env python
import sys
from scapy import *
conf.verb=0
if len(sys.argv) != 4:
    print "Usage: ./spoof.py <target> <spoofed_ip> <port>"
    sys.exit(1)
target = sys.argv[1]
spoofed_ip = sys.argv[2]
port = int(sys.argv[3])
p1=IP(dst=target,src=spoofed_ip)/TCP(dport=port,sport=5000,flags='S')
send(p1)
print "Okay, SYN sent. Enter the sniffed sequence number now: "
seq=sys.stdin.readline()
print "Okay, using sequence number " + seq
seq=int(seq[:-1])
p2=IP(dst=target,src=spoofed_ip)/TCP(dport=port,sport=5000,flags='A',
     ack=seq+1,seq=1)
send(p2)
print "Okay, final ACK sent. Check netstat on your target :-)"
``` |
python "'NoneType' object has no attribute 'encode'" | 414,230 | 5 | 2009-01-05T19:35:09Z | 414,239 | 12 | 2009-01-05T19:39:27Z | [
"python",
"urlencode"
] | I wrote this tiny Python snippet that scrapes a feed and prints it out. When I run the code, something in the feed triggers the error message you see here as my question. Here's the complete console output on error:
```
Traceback (most recent call last):
  File "/home/vijay/ffour/ffour5.py", line 20, in <module>
    myfeed()
  File "/home/vijay/ffour/ffour5.py", line 15, in myfeed
    sys.stdout.write(entry["title"]).encode('utf-8')
AttributeError: 'NoneType' object has no attribute 'encode'
``` | > ```
> > sys.stdout.write(entry["title"]).encode('utf-8')
> ```
This is the culprit. You probably mean:
```
sys.stdout.write(entry["title"].encode('utf-8'))
```
(Notice the position of the last closing bracket.) |
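To see why the misplaced parenthesis produces exactly this error, here is a minimal reproduction (my sketch, with an explicit `None` standing in for Python 2's `write()` return value, and a hypothetical title string):

```python
# The misplaced closing parenthesis made .encode() run on write()'s
# return value (None in Python 2) instead of on the string itself.
title = "caf\u00e9"

wrong_receiver = None            # stand-in for sys.stdout.write()'s return value
try:
    wrong_receiver.encode('utf-8')
except AttributeError as exc:
    error = str(exc)
    print(error)                 # 'NoneType' object has no attribute 'encode'

fixed = title.encode('utf-8')    # the fix: encode the string itself
print(fixed)
```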
Django serialize to JSON | 414,543 | 15 | 2009-01-05T21:10:09Z | 414,696 | 10 | 2009-01-05T22:11:28Z | [
"python",
"django",
"json"
] | I have a Django model (schedule) with the class of entity, that is the parent of `Activity`, that is the parent of `Event`.
```
class Entity(models.Model):
    <...>

class Activity(models.Model):
    <...>
    team_entity = models.ForeignKey(Entity)
    <...>

class Event(models.Model):
    <...>
    activity = models.ForeignKey(Activity)
    <...>
```
How do I serialize and get both the child object and grandchildren as part of the JSON file? | Before you do the serialization, when retrieving your objects, use select\_related() to preserve the relationships and fetch the children, grandchildren, etc.
see <http://docs.djangoproject.com/en/dev/ref/models/querysets/> |
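For illustration only (made-up field names, plain dicts rather than Django models - this is not Django code), the nested JSON shape being targeted looks like:

```python
# The entity -> activities -> events nesting, serialized with the
# standard library's json module.
import json

entity = {
    "name": "Team A",
    "activities": [
        {"name": "Practice",
         "events": [{"name": "Tuesday session"}]},
    ],
}
payload = json.dumps(entity, indent=2)
print(payload)
```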
Django serialize to JSON | 414,543 | 15 | 2009-01-05T21:10:09Z | 918,010 | 7 | 2009-05-27T21:05:55Z | [
"python",
"django",
"json"
] | I have a Django model (schedule) with the class of entity, that is the parent of `Activity`, that is the parent of `Event`.
```
class Entity(models.Model):
    <...>

class Activity(models.Model):
    <...>
    team_entity = models.ForeignKey(Entity)
    <...>

class Event(models.Model):
    <...>
    activity = models.ForeignKey(Activity)
    <...>
```
How do I serialize and get both the child object and grandchildren as part of the JSON file? | I now use django-piston. This does the trick.
Add class to Django label_tag() output | 414,679 | 16 | 2009-01-05T22:05:15Z | 415,770 | 8 | 2009-01-06T08:00:00Z | [
"python",
"django",
"forms",
"newforms"
] | I need some way to add a class attribute to the output of the label\_tag() method for a forms field.
I see that there is the ability to pass in an attrs dictionary and I have tested it in the shell and I can do something like:
```
for field in form:
    print field.label_tag(attrs={'class': 'Foo'})
```
I will see the class='Foo' in my output, but I don't see a way to add an attrs argument from the template - in fact, templates are designed specifically against that, no?
Is there a way in my form definition to define the class to be displayed in the label?
In the form, I can do the following to give the inputs a class
```
self.fields['some_field'].widget.attrs['class'] = 'Foo'
```
I just need to have it output the class for the `<label>` as well. | A [custom template tag](http://docs.djangoproject.com/en/dev/howto/custom-template-tags/#howto-custom-template-tags) seems to be the solution. A custom filter would also do, although it can be less elegant. But you would need to fall back to custom form rendering in both cases.
If this is a task of high importance; I'd create a Mixin that allows me to annotate the form fields with label classes and supplies form rendering methods using those classes. So that the following code works:
```
{{ form.as_table_with_label_classes }}
```
But I'd like to ask; Do you really need a class on the label tag? I mean HTML design-wise. It is absolutely necessary to *add* a class in there? Couldn't it be solved with some CSS like:
```
encapsulating_selector label {
    some-attr: some-value;
}
```
I sometimes use [jQuery](http://jquery.com) for such cases where; *it will improve the page if it works, but it won't be a disaster if it doesn't*. And keep the HTML source as lean as possible. |
Add class to Django label_tag() output | 414,679 | 16 | 2009-01-05T22:05:15Z | 1,933,711 | 10 | 2009-12-19T18:16:02Z | [
"python",
"django",
"forms",
"newforms"
] | I need some way to add a class attribute to the output of the label\_tag() method for a forms field.
I see that there is the ability to pass in an attrs dictionary and I have tested it in the shell and I can do something like:
```
for field in form:
    print field.label_tag(attrs={'class': 'Foo'})
```
I will see the class='Foo' in my output, but I don't see a way to add an attrs argument from the template - in fact, templates are designed specifically against that, no?
Is there a way in my form definition to define the class to be displayed in the label?
In the form, I can do the following to give the inputs a class
```
self.fields['some_field'].widget.attrs['class'] = 'Foo'
```
I just need to have it output the class for the `<label>` as well. | How about adding the CSS class to the form field in the forms.py, like:
```
class MyForm(forms.Form):
title = forms.CharField(widget=forms.TextInput(attrs={'class': 'foo'}))
```
then I just do the following in the template:
```
<label for="id_{{form.title.name}}" class="bar">
{{ form.title }}
</label>
```
Of course this can easily be modified to work within a for loop tag in the template. |
Add class to Django label_tag() output | 414,679 | 16 | 2009-01-05T22:05:15Z | 11,584,458 | 7 | 2012-07-20T17:49:31Z | [
"python",
"django",
"forms",
"newforms"
] | I need some way to add a class attribute to the output of the label\_tag() method for a forms field.
I see that there is the ability to pass in an attrs dictionary and I have tested it in the shell and I can do something like:
```
for field in form:
    print field.label_tag(attrs={'class': 'Foo'})
```
I will see the class='Foo' in my output, but I don't see a way to add an attrs argument from the template - in fact, templates are designed specifically against that, no?
Is there a way in my form definition to define the class to be displayed in the label?
In the form, I can do the following to give the inputs a class
```
self.fields['some_field'].widget.attrs['class'] = 'Foo'
```
I just need to have it output the class for the `<label>` as well. | **Technique 1**
I take issue with another answer's assertion that a filter would be "less elegant." As you can see, it's very elegant indeed.
```
@register.filter(is_safe=True)
def label_with_classes(value, arg):
    return value.label_tag(attrs={'class': arg})
```
Using this in a template is just as elegant:
```
{{ form.my_field|label_with_classes:"class1 class2"}}
```
**Technique 2**
Alternatively, one of the more interesting technique I've found is: [Adding \* to required fields](http://www.thebitguru.com/blog/view/299-Adding%20%2a%20to%20required%20fields).
You create a decorator for BoundField.label\_tag that will call it with *attrs* set appropriately. Then you monkey patch BoundField so that calling BoundField.label\_tag calls the decorated function.
```
from django.forms.forms import BoundField

def add_control_label(f):
    def control_label_tag(self, contents=None, attrs=None):
        if attrs is None:
            attrs = {}
        attrs['class'] = 'control-label'
        return f(self, contents, attrs)
    return control_label_tag

BoundField.label_tag = add_control_label(BoundField.label_tag)
``` |
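The decorator-plus-reassignment mechanics can be shown without Django at all; here is the same pattern on a hypothetical `Widget` class (my sketch, not Django's API), which makes the monkey patch easy to reason about and test:

```python
# Wrap an existing method so every call gets attrs['class'] injected,
# then reassign the wrapped version onto the class.
class Widget(object):
    def label_tag(self, contents=None, attrs=None):
        attrs = attrs or {}
        cls = attrs.get('class', '')
        return '<label class="%s">%s</label>' % (cls, contents or '')

def add_control_label(f):
    def control_label_tag(self, contents=None, attrs=None):
        if attrs is None:
            attrs = {}
        attrs['class'] = 'control-label'
        return f(self, contents, attrs)
    return control_label_tag

Widget.label_tag = add_control_label(Widget.label_tag)

print(Widget().label_tag('Name'))   # <label class="control-label">Name</label>
```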
SQLAlchemy DateTime timezone | 414,952 | 24 | 2009-01-05T23:46:02Z | 462,028 | 14 | 2009-01-20T16:24:22Z | [
"python",
"datetime",
"timezone",
"sqlalchemy"
] | SQLAlchemy's `DateTime` type allows for a `timezone=True` argument to save a non-naive datetime object to the database, and to return it as such. Is there any way to modify the timezone of the `tzinfo` that SQLAlchemy passes in so it could be, for instance, UTC? I realize that I could just use `default=datetime.datetime.utcnow`; however, this is a naive time that would happily accept someone passing in a naive localtime-based datetime, even if I used `timezone=True` with it, because it makes local or UTC time non-naive without having a base timezone to normalize it with. I have tried (using [pytz](http://pytz.sourceforge.net/)) to make the datetime object non-naive, but when I save this to the db it comes back as naive.
Note how datetime.datetime.utcnow does not work with timezone=True so well:
```
import sqlalchemy as sa
from sqlalchemy.sql import select
import datetime
metadata = sa.MetaData('postgres://user:pass@machine/db')
data_table = sa.Table('data', metadata,
    sa.Column('id', sa.types.Integer, primary_key=True),
    sa.Column('date', sa.types.DateTime(timezone=True), default=datetime.datetime.utcnow)
)
metadata.create_all()
engine = metadata.bind
conn = engine.connect()
result = conn.execute(data_table.insert().values(id=1))
s = select([data_table])
result = conn.execute(s)
row = result.fetchone()
```
> (1, datetime.datetime(2009, 1, 6, 0, 9, 36, 891887))
```
row[1].utcoffset()
```
> datetime.timedelta(-1, 64800) # that's my localtime offset!!
```
datetime.datetime.now(tz=pytz.timezone("US/Central")).utcoffset()
```
> datetime.timedelta(-1, 64800)
```
datetime.datetime.now(tz=pytz.timezone("UTC")).utcoffset()
```
> datetime.timedelta(0) #UTC
Even if I change it to explicitly use UTC:
...
```
data_table = sa.Table('data', metadata,
    sa.Column('id', sa.types.Integer, primary_key=True),
    sa.Column('date', sa.types.DateTime(timezone=True), default=datetime.datetime.now(tz=pytz.timezone('UTC')))
)
row[1].utcoffset()
```
...
> datetime.timedelta(-1, 64800) # it did not use the timezone I explicitly added
Or if I drop the timezone=True:
...
```
data_table = sa.Table('data', metadata,
    sa.Column('id', sa.types.Integer, primary_key=True),
    sa.Column('date', sa.types.DateTime(), default=datetime.datetime.now(tz=pytz.timezone('UTC')))
)
row[1].utcoffset() is None
```
...
> True # it didn't even save a timezone to the db this time | <http://www.postgresql.org/docs/8.3/interactive/datatype-datetime.html#DATATYPE-TIMEZONES>
> All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the zone specified by the timezone configuration parameter before being displayed to the client.
The only way to store it with postgresql is to store it separately. |
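One standard-library sketch of "storing it separately" (my illustration, not SQLAlchemy-specific): keep a naive UTC value for the `DateTime` column and the UTC offset in a second column. Storing a zone *name* instead of a fixed offset preserves DST rules better, but needs pytz or zoneinfo.

```python
# Round-trip an aware datetime through two "columns": a naive UTC
# datetime plus the original offset in minutes.
from datetime import datetime, timedelta, timezone

def to_storage(dt):
    naive_utc = dt.astimezone(timezone.utc).replace(tzinfo=None)
    offset_minutes = int(dt.utcoffset().total_seconds() // 60)
    return naive_utc, offset_minutes

def from_storage(naive_utc, offset_minutes):
    tz = timezone(timedelta(minutes=offset_minutes))
    return naive_utc.replace(tzinfo=timezone.utc).astimezone(tz)

central = timezone(timedelta(hours=-6))  # fixed-offset stand-in for US/Central
local = datetime(2009, 1, 6, 9, 30, tzinfo=central)

dt_col, tz_col = to_storage(local)
restored = from_storage(dt_col, tz_col)
print(dt_col, tz_col)   # 2009-01-06 15:30:00 -360
```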
Why does gethostbyaddr(gethostname()) return my IPv6 IP? | 415,407 | 3 | 2009-01-06T04:01:41Z | 415,417 | 11 | 2009-01-06T04:07:16Z | [
"python",
"sockets",
"ip-address",
"ipv6",
"ipv4"
] | I'm working on making a simple server application with python, and I'm trying to get the IP to bind the listening socket to. An example I looked at uses this:
```
HOST = gethostbyaddr(gethostname())
```
With a little more processing after this, it should give me just the host IP as a string. This should return the IPv4 address. But when I run this code, it returns my IPv6 address. Why does it do this and how can I get my IPv4 address?
If it's relevant, I'm using Windows Vista and Python 2.5 | Getting your IP address is harder than you might think.
Check [this answer](http://stackoverflow.com/questions/270745/how-do-i-determine-all-of-my-ip-addresses-when-i-have-multiple-nics#274644) I gave for the one reliable way I've found.
Here's what the answer says in case you don't like clicking on things:
Use the [`netifaces`](http://alastairs-place.net/netifaces/) module. Because networking is complex, using netifaces can be a little tricky, but here's how to do what you want:
```
>>> import netifaces
>>> netifaces.interfaces()
['lo', 'eth0']
>>> netifaces.ifaddresses('eth0')
{17: [{'broadcast': 'ff:ff:ff:ff:ff:ff', 'addr': '00:11:2f:32:63:45'}], 2: [{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}], 10: [{'netmask': 'ffff:ffff:ffff:ffff::', 'addr': 'fe80::211:2fff:fe32:6345%eth0'}]}
>>> for interface in netifaces.interfaces():
...     print netifaces.ifaddresses(interface)[netifaces.AF_INET]
...
[{'peer': '127.0.0.1', 'netmask': '255.0.0.0', 'addr': '127.0.0.1'}]
[{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}]
>>> for interface in netifaces.interfaces():
...     for link in netifaces.ifaddresses(interface)[netifaces.AF_INET]:
...         print link['addr']
...
127.0.0.1
10.0.0.2
```
This can be made a little more readable like this:
```
from netifaces import interfaces, ifaddresses, AF_INET
def ip4_addresses():
    ip_list = []
    for interface in interfaces():
        for link in ifaddresses(interface)[AF_INET]:
            ip_list.append(link['addr'])
    return ip_list
```
If you want IPv6 addresses, use `AF_INET6` instead of `AF_INET`. If you're wondering why `netifaces` uses lists and dictionaries all over the place, it's because a single computer can have multiple NICs, and each NIC can have multiple addresses, and each address has its own set of options. |
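If installing `netifaces` is not an option, the standard library can at least list the IPv4 addresses your hostname resolves to - a much weaker guarantee, since it misses interfaces that are not present in DNS or the hosts file (my sketch):

```python
# Resolve a hostname to its distinct IPv4 addresses using only the
# standard library's socket module.
import socket

def ipv4_addresses(host):
    infos = socket.getaddrinfo(host, None, socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(ipv4_addresses("localhost"))
```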
How to get current time in Python | 415,511 | 1,153 | 2009-01-06T04:54:23Z | 415,519 | 1,151 | 2009-01-06T04:57:05Z | [
"python",
"datetime",
"time"
] | What is the module/method used to get current time? | ```
>>> import datetime
>>> datetime.datetime.now()
datetime.datetime(2009, 1, 6, 15, 8, 24, 78915)
```
And just the time:
```
>>> datetime.datetime.time(datetime.datetime.now())
datetime.time(15, 8, 24, 78915)
```
The same but slightly more compact:
```
>>> datetime.datetime.now().time()
```
See the [**documentation**](https://docs.python.org/3/library/datetime.html) for more info.
To save typing, you can import the `datetime` object from the `datetime` module:
```
>>> from datetime import datetime
```
Then remove the leading `datetime.` from all the above. |
How to get current time in Python | 415,511 | 1,153 | 2009-01-06T04:54:23Z | 415,525 | 475 | 2009-01-06T04:59:52Z | [
"python",
"datetime",
"time"
] | What is the module/method used to get current time? | You can use [`time.strftime()`](http://docs.python.org/3.3/library/time.html?highlight=time.strftime#time.strftime):
```
>>> from time import gmtime, strftime
>>> strftime("%Y-%m-%d %H:%M:%S", gmtime())
'2009-01-05 22:14:39'
``` |
How to get current time in Python | 415,511 | 1,153 | 2009-01-06T04:54:23Z | 415,527 | 66 | 2009-01-06T05:02:43Z | [
"python",
"datetime",
"time"
] | What is the module/method used to get current time? | ```
>>> from time import gmtime, strftime
>>> strftime("%a, %d %b %Y %X +0000", gmtime())
'Tue, 06 Jan 2009 04:54:56 +0000'
```
That outputs the current GMT in the specified format. There is also a localtime() method.
This [page](http://docs.python.org/library/time.html#module-time) has more details. |
How to get current time in Python | 415,511 | 1,153 | 2009-01-06T04:54:23Z | 416,605 | 74 | 2009-01-06T13:55:23Z | [
"python",
"datetime",
"time"
] | What is the module/method used to get current time? | Do
```
from time import time
t = time()
```
* `t` is a float, good for time-interval measurement.
There is some difference for Unix and Windows platforms. |
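For example (my sketch of the interval-measurement use; in modern Python, `time.monotonic()` is the safer choice for intervals because it never jumps backwards):

```python
# Time a piece of work by differencing two time() readings.
from time import time, sleep

start = time()
sleep(0.05)              # stand-in for the work being timed
elapsed = time() - start
print("took %.3f s" % elapsed)
```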
How to get current time in Python | 415,511 | 1,153 | 2009-01-06T04:54:23Z | 4,538,034 | 26 | 2010-12-27T10:24:12Z | [
"python",
"datetime",
"time"
] | What is the module/method used to get current time? | If you need current time as a `time` object:
```
>>> import datetime
>>> now = datetime.datetime.now()
>>> datetime.time(now.hour, now.minute, now.second)
datetime.time(11, 23, 44)
``` |