title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
How do I check if a string is a number (float) in Python? | 354,038 | 872 | 2008-12-09T20:03:42Z | 26,336,546 | 8 | 2014-10-13T09:17:53Z | [
"python",
"casting",
"floating-point",
"type-conversion"
] | What is the best possible way to check if a string can be represented as a number in Python?
The function I currently have is:
```
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False
```
Not only is this ugly and slow, it also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse. | Let's say you have digits in a string:
s = "100949"
and you would like to check if it contains only digits:
```
if s.isdigit():
    # isdigit() returns True or False
    print("only digits")
```
[isdigit docs](http://docs.python.org/2/library/stdtypes.html#str.isdigit)
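To make the behaviour concrete, here is a short sketch (my own illustration, not from the answer) of what `str.isdigit()` accepts and rejects, compared with the `float()`-based check from the question:

```python
# str.isdigit() only accepts pure digit strings -- it rejects decimal
# points and signs, which float() would happily parse.
print("100949".isdigit())  # True  (digits only)
print("12.5".isdigit())    # False (the dot is not a digit)
print("-3".isdigit())      # False (neither is the minus sign)
```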
Otherwise, your `float`-based method works great for checking whether a string represents a number. |
How do I check if a string is a number (float) in Python? | 354,038 | 872 | 2008-12-09T20:03:42Z | 32,453,110 | 10 | 2015-09-08T08:42:14Z | [
"python",
"casting",
"floating-point",
"type-conversion"
] | What is the best possible way to check if a string can be represented as a number in Python?
The function I currently have is:
```
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False
```
Not only is this ugly and slow, it also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse. | For `int` use this:
```
>>> "1221323".isdigit()
True
```
But for `float` we need some tricks ;-). Every float literal contains at most one decimal point...
```
>>> "12.34".isdigit()
False
>>> "12.34".replace('.','',1).isdigit()
True
>>> "12.3.4".replace('.','',1).isdigit()
False
```
For negative numbers, just add `lstrip('-')`:
```
>>> '-12'.lstrip('-')
'12'
```
And now we get a universal way:
```
>>> '-12.34'.lstrip('-').replace('.','',1).isdigit()
True
>>> '.-234'.lstrip('-').replace('.','',1).isdigit()
False
``` |
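Wrapped up as a helper (a sketch of my own; the name `looks_like_float` is not from the answer), with one caveat worth knowing: `lstrip('-')` strips *all* leading minus signs, so a malformed string like `'--12'` slips through even though `float('--12')` raises:

```python
def looks_like_float(s):
    # Combine the tricks above: drop leading '-', drop one '.', then isdigit().
    # Caveat: lstrip('-') removes every leading minus, so '--12' passes here
    # although float('--12') would raise ValueError.
    return s.lstrip('-').replace('.', '', 1).isdigit()

print(looks_like_float('-12.34'))  # True
print(looks_like_float('.-234'))   # False
```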
How do I check if a string is a number (float) in Python? | 354,038 | 872 | 2008-12-09T20:03:42Z | 34,615,173 | 7 | 2016-01-05T15:21:27Z | [
"python",
"casting",
"floating-point",
"type-conversion"
] | What is the best possible way to check if a string can be represented as a number in Python?
The function I currently have is:
```
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False
```
Not only is this ugly and slow, it also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse. | I know this question is quite old, but I would add an answer I believe covers the information missing from the highest-voted answer and that could be very valuable to anyone who finds this:
Combine the following checks as needed for the kinds of input you want to accept. (Assuming we are using everyday definitions of integers rather than 0-255, etc.)
`x.isdigit()`
works well for checking if x is a non-negative integer.
`x.replace('-','').isdigit()`
works well for checking if x is a negative integer (check that `-` is in the first position).
`x.replace('.','').isdigit()`
works well for checking if x is a decimal.
`x.replace(':','').isdigit()`
works well for checking if x is a ratio.
`x.replace('/','').isdigit()`
works well for checking if x is a fraction. |
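One caveat with these `replace`-based checks (my own illustration, not from the answer): `str.replace` removes the character *everywhere*, not just in the single position where it would be legal, so malformed strings can slip through:

```python
# The replace-then-isdigit pattern accepts some non-numbers,
# because replace() strips the character from every position.
print("-42".replace('-', '').isdigit())   # True -- a genuine negative integer
print("3.14".replace('.', '').isdigit())  # True -- a genuine decimal
print("1-2".replace('-', '').isdigit())   # True as well, although "1-2" is not a number
```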
How to configure vim to not put comments at the beginning of lines while editing python files | 354,097 | 40 | 2008-12-09T20:21:40Z | 354,422 | 26 | 2008-12-09T22:07:18Z | [
"python",
"vim"
] | When I add a # in insert mode on an empty line in Vim while editing python files, vim moves the # to the beginning of the line, but I would like the # to be inserted at the tab level where I entered it.
For example, when writing this in vim
```
for i in range(10):
    #
```
the # does not stay where I entered it.
It is moved by vim, like so.
```
for i in range(10):
#
```
Does anyone know of a configuration item in vim that would change this?
If it helps, I am using Ubuntu 8.10. | I found an answer here <http://vim.wikia.com/wiki/Restoring_indent_after_typing_hash>
It seems that the vim smartindent option is the cause of the problem.
The referenced page above describes workarounds, but after reading the help for smartindent in vim itself (:help smartindent), I decided to try cindent instead of smartindent.
I replaced
```
set smartindent
```
with
```
set cindent
```
in my .vimrc file
and so far it is working perfectly.
This change also fixed the behavior of '<<' and '>>' for indenting visual blocks that include python comments.
There are more configuration options for and information on indentation in the vim help for smartindent and cindent (:help smartindent and :help cindent). |
How to configure vim to not put comments at the beginning of lines while editing python files | 354,097 | 40 | 2008-12-09T20:21:40Z | 385,724 | 8 | 2008-12-22T07:27:50Z | [
"python",
"vim"
] | When I add a # in insert mode on an empty line in Vim while editing python files, vim moves the # to the beginning of the line, but I would like the # to be inserted at the tab level where I entered it.
For example, when writing this in vim
```
for i in range(10):
    #
```
the # does not stay where I entered it.
It is moved by vim, like so.
```
for i in range(10):
#
```
Does anyone know of a configuration item in vim that would change this?
If it helps, I am using Ubuntu 8.10. | I have the following lines in my .vimrc, seems to be installed by default with my Ubuntu 8.10
```
set smartindent
inoremap # X^H#
set autoindent
```
And I don't observe the problem. Maybe you can try this. (Note that ^H should be entered by Ctrl-V Ctrl-H) |
How to configure vim to not put comments at the beginning of lines while editing python files | 354,097 | 40 | 2008-12-09T20:21:40Z | 777,385 | 15 | 2009-04-22T14:07:40Z | [
"python",
"vim"
] | When I add a # in insert mode on an empty line in Vim while editing python files, vim moves the # to the beginning of the line, but I would like the # to be inserted at the tab level where I entered it.
For example, when writing this in vim
```
for i in range(10):
    #
```
the # does not stay where I entered it.
It is moved by vim, like so.
```
for i in range(10):
#
```
Does anyone know of a configuration item in vim that would change this?
If it helps, I am using Ubuntu 8.10. | @PolyThinker Though I see that response a lot to this question, in my opinion it's not a good solution. The editor still thinks it should be indented all the way to left - check this by pushing == on a line that starts with a hash, or pushing = while a block of code with comments in it is highlighted to reindent.
I would strongly recommend `filetype indent on`, and remove the `set smartindent` and `set autoindent` (or `set cindent`) lines from your vimrc. Someone else (apparently David Bustos) was kind enough to write a full indentation parser for us; it's located at $VIMDIRECTORY/indent/python.vim.
(Paul's `cindent` solution probably works for python, but `filetype indent on` is much more generally useful.) |
Are there statistical studies that indicates that Python is "more productive"? | 354,124 | 44 | 2008-12-09T20:29:40Z | 354,249 | 16 | 2008-12-09T21:09:45Z | [
"python",
"productivity"
If I do a google search with the string "python productive", the first result is a page <http://www.ferg.org/projects/python_java_side-by-side.html> claiming that "Python is more productive than Java". Many Python programmers that I have talked with claim that Python is "more productive", and most of them report the arguments listed in the above-cited article.
The article could be summarized in these statements:
1. Python allows you to write terser code
2. Terser code is written in less time
3. **Therefore** Python is more productive
But the article does not report any statistical evidence supporting the hypothesis that terser code can be developed (not merely written) in less time.
Do you know of any article that reports statistical evidence that Python is more productive than something else?
## Update:
I'm not interested in advocacy arguments. Please do not tell me why something *should* be more productive than something else. I'm interested in studies that measure whether the use of something is related to productivity.
I'm interested in statistical evidence. If you claim that productivity depends on many other factors, then there should be a statistical study proving that language choice is not statistically correlated with productivity. | All evidence is anecdotal.
You can't ever find published studies that show the general superiority of one language over another because there are too many confounds:
* Individual programmers differ greatly in ability
* Some tasks are more amenable to a given language/library than others (what constitutes a representative set of tasks?)
* Different languages have different core libraries
* Different languages have different tool chains
* Interaction of two or more of the above (e.g. familiarity of programmer X with tools Y)
* etc. the list goes on and on and on
Even though you can design experiments to control for some of these, the variability still requires a huge amount of statistical power to get any meaningful result, and no one ever does studies where like 1000 programmers do the exact same task in different languages, so there's never anything definitive.
The upshot is, each of us knows what the best languages/tools are, and so we can advocate them without fear of being shot down by a published study. :) |
Are there statistical studies that indicates that Python is "more productive"? | 354,124 | 44 | 2008-12-09T20:29:40Z | 354,250 | 19 | 2008-12-09T21:09:52Z | [
"python",
"productivity"
If I do a google search with the string "python productive", the first result is a page <http://www.ferg.org/projects/python_java_side-by-side.html> claiming that "Python is more productive than Java". Many Python programmers that I have talked with claim that Python is "more productive", and most of them report the arguments listed in the above-cited article.
The article could be summarized in these statements:
1. Python allows you to write terser code
2. Terser code is written in less time
3. **Therefore** Python is more productive
But the article does not report any statistical evidence supporting the hypothesis that terser code can be developed (not merely written) in less time.
Do you know of any article that reports statistical evidence that Python is more productive than something else?
## Update:
I'm not interested in advocacy arguments. Please do not tell me why something *should* be more productive than something else. I'm interested in studies that measure whether the use of something is related to productivity.
I'm interested in statistical evidence. If you claim that productivity depends on many other factors, then there should be a statistical study proving that language choice is not statistically correlated with productivity. | Yes, there's an excellent paper by Lutz Prechelt on this subject:
[An Empirical Comparison of Seven Programming Languages](http://page.mi.fu-berlin.de/prechelt/Biblio//jccpprt_computer2000.pdf)
Of course, this paper doesn't "prove" the superiority of any particular language. But it probably comes as close as any scientific study *can* come to the truth. On the other hand, the data of this study is very dated, and in the fast-developing world of software engineering this actually plays an important role, since both the languages and the tools have vastly improved over time. |
Are there statistical studies that indicates that Python is "more productive"? | 354,124 | 44 | 2008-12-09T20:29:40Z | 354,361 | 28 | 2008-12-09T21:50:45Z | [
"python",
"productivity"
If I do a google search with the string "python productive", the first result is a page <http://www.ferg.org/projects/python_java_side-by-side.html> claiming that "Python is more productive than Java". Many Python programmers that I have talked with claim that Python is "more productive", and most of them report the arguments listed in the above-cited article.
The article could be summarized in these statements:
1. Python allows you to write terser code
2. Terser code is written in less time
3. **Therefore** Python is more productive
But the article does not report any statistical evidence supporting the hypothesis that terser code can be developed (not merely written) in less time.
Do you know of any article that reports statistical evidence that Python is more productive than something else?
## Update:
I'm not interested in advocacy arguments. Please do not tell me why something *should* be more productive than something else. I'm interested in studies that measure whether the use of something is related to productivity.
I'm interested in statistical evidence. If you claim that productivity depends on many other factors, then there should be a statistical study proving that language choice is not statistically correlated with productivity. | Yes, and there are also statistical studies that prove that dogs are more productive than cats. Both are equally valid. ;-)
by popular demand, here are a couple of "studies" - take them with a block of salt!
1. [An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl](http://page.mi.fu-berlin.de/prechelt/Biblio/jccpprt_computer2000.pdf) PDF warning!
2. [Programming Language Productivity](http://www.connellybarnes.com/documents/language_productivity.pdf) PDF warning! Note that these statistics are for a "string processing problem", so one might expect the winner to be...Perl of course!
and [Jeff Atwood's musings](http://blog.codinghorror.com/are-all-programming-languages-the-same/) are interesting as well
the issues of programmer productivity are far more complex than what language is being used. Productivity among programmers can vary wildly, and is affected by the problem domain plus many other factors. Thus no "study" can ever be "definitive". See [Understanding Software Productivity](http://www.usc.edu/dept/ATRIUM/Papers/Software_Productivity.html) and [Productivity Variations Among Software Developers and Teams](http://forums.construx.com/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx) for additional information.
Finally, **the right tool for the right job** is still the rule. No exceptions. |
How is ** implemented in Python? | 354,421 | 9 | 2008-12-09T22:07:10Z | 354,626 | 23 | 2008-12-09T23:33:18Z | [
"python"
] | I'm wondering where I find the source to show how the operator \*\* is implemented in Python. Can someone point me in the right direction? | The python grammar definition (from which the parser is generated using [pgen](http://www.python.org/dev/peps/pep-0269/)), look for 'power': [Gramar/Gramar](http://svn.python.org/view/python/trunk/Grammar/Grammar?rev=65872&view=markup)
The python ast, look for 'ast\_for\_power': [Python/ast.c](http://svn.python.org/view/python/trunk/Python/ast.c?rev=67590&view=markup)
The python eval loop, look for 'BINARY\_POWER': [Python/ceval.c](http://svn.python.org/view/python/trunk/Python/ceval.c?rev=67666&view=markup)
Which calls PyNumber\_Power (implemented in [Objects/abstract.c](http://svn.python.org/view/python/trunk/Objects/abstract.c?rev=66043&view=markup)):
```
PyObject *
PyNumber_Power(PyObject *v, PyObject *w, PyObject *z)
{
    return ternary_op(v, w, z, NB_SLOT(nb_power), "** or pow()");
}
```
Essentially, invoke the **pow** slot. For long objects (the only default integer type in 3.0) this is implemented in the long\_pow function [Objects/longobject.c](http://svn.python.org/view/python/trunk/Objects/longobject.c?rev=65518&view=markup), for int objects (in the 2.x branches) it is implemented in the int\_pow function [Object/intobject.c](http://svn.python.org/view/python/trunk/Objects/intobject.c?rev=64753&view=markup)
If you dig into long\_pow, you can see that after vetting the arguments and doing a bit of setup, the heart of the exponentiation can be seen here:
```
if (Py_SIZE(b) <= FIVEARY_CUTOFF) {
    /* Left-to-right binary exponentiation (HAC Algorithm 14.79) */
    /* http://www.cacr.math.uwaterloo.ca/hac/about/chap14.pdf */
    for (i = Py_SIZE(b) - 1; i >= 0; --i) {
        digit bi = b->ob_digit[i];
        for (j = 1 << (PyLong_SHIFT-1); j != 0; j >>= 1) {
            MULT(z, z, z)
            if (bi & j)
                MULT(z, a, z)
        }
    }
}
else {
    /* Left-to-right 5-ary exponentiation (HAC Algorithm 14.82) */
    Py_INCREF(z); /* still holds 1L */
    table[0] = z;
    for (i = 1; i < 32; ++i)
        MULT(table[i-1], a, table[i])
    for (i = Py_SIZE(b) - 1; i >= 0; --i) {
        const digit bi = b->ob_digit[i];
        for (j = PyLong_SHIFT - 5; j >= 0; j -= 5) {
            const int index = (bi >> j) & 0x1f;
            for (k = 0; k < 5; ++k)
                MULT(z, z, z)
            if (index)
                MULT(z, table[index], z)
        }
    }
}
```
Which uses algorithms discussed in [Chapter 14.6](http://www.cacr.math.uwaterloo.ca/hac/about/chap14.pdf) of the [Handbook of Applied Cryptography](http://www.cacr.math.uwaterloo.ca/hac/) which describes efficient exponentiation algorithms for arbitrary precision arithmetic. |
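The left-to-right binary method (HAC Algorithm 14.79) is easy to mirror in Python; this is an illustrative sketch of the algorithm, not CPython's actual code:

```python
def binary_pow(a, b):
    # Scan the exponent's bits from most to least significant:
    # square at every bit, and multiply by the base when the bit is set.
    z = 1
    for bit in bin(b)[2:]:
        z = z * z
        if bit == '1':
            z = z * a
    return z

print(binary_pow(3, 13))  # 1594323, same as 3 ** 13
```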
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 354,892 | 9 | 2008-12-10T02:00:49Z | [
"python",
"coding-style",
"return",
"return-value"
] | The canonical way to return multiple values in languages that support it is often [tupling](http://stackoverflow.com/questions/38508/whats-the-best-way-to-return-multiple-values-from-a-function-in-python).
### Option: Using a tuple
Consider this trivial example:
```
def f(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return (y0, y1, y2)
```
However, this quickly gets problematic as the number of values returned increases. What if you want to return four or five values? Sure, you could keep tupling them, but it gets easy to forget which value is where. It's also rather ugly to unpack them wherever you want to receive them.
### Option: Using a dictionary
The next logical step seems to be to introduce some sort of 'record notation'. In python, the obvious way to do this is by means of a `dict`.
Consider the following:
```
def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return {'y0': y0, 'y1': y1, 'y2': y2}
```
(edit- Just to be clear, y0, y1 and y2 are just meant as abstract identifiers. As pointed out, in practice you'd use meaningful identifiers)
Now, we have a mechanism whereby we can project out a particular member of the returned object. For example,
```
result['y0']
```
### Option: Using a class
However, there is another option. We could instead return a specialized structure. I've framed this in the context of Python, but I'm sure it applies to other languages as well. Indeed, if you were working in C this might very well be your only option. Here goes:
```
class ReturnValue(object):
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2

def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return ReturnValue(y0, y1, y2)
```
In python the previous two are perhaps very similar in terms of plumbing- After all `{ y0, y1, y2 }` just end up being entries in the internal `__dict__` of the `ReturnValue`.
There is one additional feature provided by Python though for tiny objects, the `__slots__` attribute. The class could be expressed as:
```
class ReturnValue(object):
    __slots__ = ["y0", "y1", "y2"]
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2
```
From the [Python Reference Manual](http://www.network-theory.co.uk/docs/pylang/__slots__.html):
> The `__slots__` declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because `__dict__` is not created for each instance.
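A small sketch illustrating the quoted behaviour (the class names here are hypothetical): an instance of a slotted class has no per-instance `__dict__` at all, which is where the space saving comes from:

```python
class Slotted(object):
    __slots__ = ["y0"]

class Unslotted(object):
    pass

print(hasattr(Slotted(), '__dict__'))    # False -- attributes live in fixed slots
print(hasattr(Unslotted(), '__dict__'))  # True  -- attributes live in a per-instance dict
```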
### Option: Using a list
Another suggestion which I'd overlooked comes from Bill the Lizard:
```
def h(x):
    result = [x + 1]
    result.append(x * 3)
    result.append((x + 1) ** 2)
    return result
```
This is my least favorite method though. I suppose I'm tainted by exposure to Haskell, but the idea of mixed-type lists has always felt uncomfortable to me. In this particular example the list is -not- mixed type, but it conceivably could be. A list used in this way really doesn't gain anything with respect to the tuple as far as I can tell. The only real difference between lists and tuples in Python is that lists are [mutable](http://docs.python.org/library/stdtypes.html#typesseq-mutable), whereas tuples are not. I personally tend to carry over the conventions from functional programming: use lists for any number of elements of the same type, and tuples for a fixed number of elements of predetermined types.
# Question
After the lengthy preamble, comes the inevitable question. Which method (do you think) is best?
I've typically found myself going the dictionary route because it involves less set-up work. From a types perspective however, you might be better off going the class route, since that may help you avoid confusing what a dictionary represents. On the other hand, there are some in the Python community that feel [implied interfaces should be preferred to explicit interfaces](http://www.canonical.org/~kragen/isinstance/), at which point the type of the object really isn't relevant, since you're basically relying on the convention that the same attribute will always have the same meaning.
So, how do -you- return multiple values in Python? | I prefer
```
def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return {'y0': y0, 'y1': y1, 'y2': y2}
```
it seems everything else is just extra code to do the same thing. |
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 354,918 | 7 | 2008-12-10T02:15:34Z | [
"python",
"coding-style",
"return",
"return-value"
] | The canonical way to return multiple values in languages that support it is often [tupling](http://stackoverflow.com/questions/38508/whats-the-best-way-to-return-multiple-values-from-a-function-in-python).
### Option: Using a tuple
Consider this trivial example:
```
def f(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return (y0, y1, y2)
```
However, this quickly gets problematic as the number of values returned increases. What if you want to return four or five values? Sure, you could keep tupling them, but it gets easy to forget which value is where. It's also rather ugly to unpack them wherever you want to receive them.
### Option: Using a dictionary
The next logical step seems to be to introduce some sort of 'record notation'. In python, the obvious way to do this is by means of a `dict`.
Consider the following:
```
def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return {'y0': y0, 'y1': y1, 'y2': y2}
```
(edit- Just to be clear, y0, y1 and y2 are just meant as abstract identifiers. As pointed out, in practice you'd use meaningful identifiers)
Now, we have a mechanism whereby we can project out a particular member of the returned object. For example,
```
result['y0']
```
### Option: Using a class
However, there is another option. We could instead return a specialized structure. I've framed this in the context of Python, but I'm sure it applies to other languages as well. Indeed, if you were working in C this might very well be your only option. Here goes:
```
class ReturnValue(object):
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2

def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return ReturnValue(y0, y1, y2)
```
In python the previous two are perhaps very similar in terms of plumbing- After all `{ y0, y1, y2 }` just end up being entries in the internal `__dict__` of the `ReturnValue`.
There is one additional feature provided by Python though for tiny objects, the `__slots__` attribute. The class could be expressed as:
```
class ReturnValue(object):
    __slots__ = ["y0", "y1", "y2"]
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2
```
From the [Python Reference Manual](http://www.network-theory.co.uk/docs/pylang/__slots__.html):
> The `__slots__` declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because `__dict__` is not created for each instance.
### Option: Using a list
Another suggestion which I'd overlooked comes from Bill the Lizard:
```
def h(x):
    result = [x + 1]
    result.append(x * 3)
    result.append((x + 1) ** 2)
    return result
```
This is my least favorite method though. I suppose I'm tainted by exposure to Haskell, but the idea of mixed-type lists has always felt uncomfortable to me. In this particular example the list is -not- mixed type, but it conceivably could be. A list used in this way really doesn't gain anything with respect to the tuple as far as I can tell. The only real difference between lists and tuples in Python is that lists are [mutable](http://docs.python.org/library/stdtypes.html#typesseq-mutable), whereas tuples are not. I personally tend to carry over the conventions from functional programming: use lists for any number of elements of the same type, and tuples for a fixed number of elements of predetermined types.
# Question
After the lengthy preamble, comes the inevitable question. Which method (do you think) is best?
I've typically found myself going the dictionary route because it involves less set-up work. From a types perspective however, you might be better off going the class route, since that may help you avoid confusing what a dictionary represents. On the other hand, there are some in the Python community that feel [implied interfaces should be preferred to explicit interfaces](http://www.canonical.org/~kragen/isinstance/), at which point the type of the object really isn't relevant, since you're basically relying on the convention that the same attribute will always have the same meaning.
So, how do -you- return multiple values in Python? | Generally, the "specialized structure" actually IS a sensible current state of an object, with its own methods.
```
class Some3SpaceThing(object):
    def __init__(self, x):
        self.g(x)
    def g(self, x):
        self.y0 = x + 1
        self.y1 = x * 3
        self.y2 = self.y0 ** 2

r = Some3SpaceThing(2)
r.y0
r.y1
r.y2
```
I like to find names for anonymous structures where possible. Meaningful names make things more clear. |
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 354,929 | 84 | 2008-12-10T02:22:28Z | [
"python",
"coding-style",
"return",
"return-value"
] | The canonical way to return multiple values in languages that support it is often [tupling](http://stackoverflow.com/questions/38508/whats-the-best-way-to-return-multiple-values-from-a-function-in-python).
### Option: Using a tuple
Consider this trivial example:
```
def f(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return (y0, y1, y2)
```
However, this quickly gets problematic as the number of values returned increases. What if you want to return four or five values? Sure, you could keep tupling them, but it gets easy to forget which value is where. It's also rather ugly to unpack them wherever you want to receive them.
### Option: Using a dictionary
The next logical step seems to be to introduce some sort of 'record notation'. In python, the obvious way to do this is by means of a `dict`.
Consider the following:
```
def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return {'y0': y0, 'y1': y1, 'y2': y2}
```
(edit- Just to be clear, y0, y1 and y2 are just meant as abstract identifiers. As pointed out, in practice you'd use meaningful identifiers)
Now, we have a mechanism whereby we can project out a particular member of the returned object. For example,
```
result['y0']
```
### Option: Using a class
However, there is another option. We could instead return a specialized structure. I've framed this in the context of Python, but I'm sure it applies to other languages as well. Indeed, if you were working in C this might very well be your only option. Here goes:
```
class ReturnValue(object):
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2

def g(x):
    y0 = x + 1
    y1 = x * 3
    y2 = y0 ** 2
    return ReturnValue(y0, y1, y2)
```
In python the previous two are perhaps very similar in terms of plumbing- After all `{ y0, y1, y2 }` just end up being entries in the internal `__dict__` of the `ReturnValue`.
There is one additional feature provided by Python though for tiny objects, the `__slots__` attribute. The class could be expressed as:
```
class ReturnValue(object):
    __slots__ = ["y0", "y1", "y2"]
    def __init__(self, y0, y1, y2):
        self.y0 = y0
        self.y1 = y1
        self.y2 = y2
```
From the [Python Reference Manual](http://www.network-theory.co.uk/docs/pylang/__slots__.html):
> The `__slots__` declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because `__dict__` is not created for each instance.
### Option: Using a list
Another suggestion which I'd overlooked comes from Bill the Lizard:
```
def h(x):
    result = [x + 1]
    result.append(x * 3)
    result.append((x + 1) ** 2)
    return result
```
This is my least favorite method though. I suppose I'm tainted by exposure to Haskell, but the idea of mixed-type lists has always felt uncomfortable to me. In this particular example the list is -not- mixed type, but it conceivably could be. A list used in this way really doesn't gain anything with respect to the tuple as far as I can tell. The only real difference between lists and tuples in Python is that lists are [mutable](http://docs.python.org/library/stdtypes.html#typesseq-mutable), whereas tuples are not. I personally tend to carry over the conventions from functional programming: use lists for any number of elements of the same type, and tuples for a fixed number of elements of predetermined types.
# Question
After the lengthy preamble, comes the inevitable question. Which method (do you think) is best?
I've typically found myself going the dictionary route because it involves less set-up work. From a types perspective however, you might be better off going the class route, since that may help you avoid confusing what a dictionary represents. On the other hand, there are some in the Python community that feel [implied interfaces should be preferred to explicit interfaces](http://www.canonical.org/~kragen/isinstance/), at which point the type of the object really isn't relevant, since you're basically relying on the convention that the same attribute will always have the same meaning.
So, how do -you- return multiple values in Python? | For small projects I find it easiest to work with tuples. When that gets too hard to manage (and not before) I start grouping things into logical structures, however I think your suggested use of dictionaries and ReturnValue objects is wrong (or too simplistic).
Returning a dictionary with keys y0, y1, y2 etc doesn't offer any advantage over tuples. Returning a ReturnValue instance with properties .y0 .y1 .y2 etc doesn't offer any advantage over tuples either. You need to start naming things if you want to get anywhere, and you can do that using tuples anyway:
```
def getImageData(filename):
    [snip]
    return size, (format, version, compression), (width, height)

size, type, dimensions = getImageData(x)
```
IMHO, the only good technique beyond tuples is to return real objects with proper methods and properties, like you get from `re.match()` or `open(file)`. |
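In the same spirit, `collections.namedtuple` gives you named fields while keeping plain tuple unpacking (this is my own sketch, not from the thread; the names `Result` and `g` are illustrative):

```python
from collections import namedtuple

# Named attribute access plus ordinary tuple unpacking, with no
# hand-written class.
Result = namedtuple('Result', ['y0', 'y1', 'y2'])

def g(x):
    return Result(y0=x + 1, y1=x * 3, y2=(x + 1) ** 2)

r = g(2)
print(r.y0, r.y1, r.y2)    # 3 6 9
y0, y1, y2 = g(2)          # still unpacks like an ordinary tuple
```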
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 354,955 | 14 | 2008-12-10T02:40:43Z | [
"python",
"coding-style",
"return",
"return-value"
] | The canonical way to return multiple values in languages that support it is often [tupling](http://stackoverflow.com/questions/38508/whats-the-best-way-to-return-multiple-values-from-a-function-in-python).
### Option: Using a tuple
Consider this trivial example:
```
def f(x):
y0 = x + 1
y1 = x * 3
    y2 = y0 ** 2
    return (y0, y1, y2)
```
However, this quickly gets problematic as the number of values returned increases. What if you want to return four or five values? Sure, you could keep tupling them, but it gets easy to forget which value is where. It's also rather ugly to unpack them wherever you want to receive them.
### Option: Using a dictionary
The next logical step seems to be to introduce some sort of 'record notation'. In Python, the obvious way to do this is by means of a `dict`.
Consider the following:
```
def g(x):
y0 = x + 1
y1 = x * 3
    y2 = y0 ** 2
    return {'y0': y0, 'y1': y1, 'y2': y2}
```
(edit: just to be clear, y0, y1 and y2 are only meant as abstract identifiers; as pointed out, in practice you'd use meaningful names)
Now, we have a mechanism whereby we can project out a particular member of the returned object. For example,
```
result['y0']
```
### Option: Using a class
However, there is another option. We could instead return a specialized structure. I've framed this in the context of Python, but I'm sure it applies to other languages as well. Indeed, if you were working in C this might very well be your only option. Here goes:
```
class ReturnValue(object):
def __init__(self, y0, y1, y2):
self.y0 = y0
self.y1 = y1
self.y2 = y2
def g(x):
y0 = x + 1
y1 = x * 3
    y2 = y0 ** 2
return ReturnValue(y0, y1, y2)
```
In Python the previous two are perhaps very similar in terms of plumbing: after all, `{ y0, y1, y2 }` just end up being entries in the internal `__dict__` of the `ReturnValue` instance.
Python does provide one additional feature for tiny objects, though: the `__slots__` attribute. The class could be expressed as:
```
class ReturnValue(object):
__slots__ = ["y0", "y1", "y2"]
def __init__(self, y0, y1, y2):
self.y0 = y0
self.y1 = y1
self.y2 = y2
```
From the [Python Reference Manual](http://www.network-theory.co.uk/docs/pylang/__slots__.html):
> The `__slots__` declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because `__dict__` is not created for each instance.
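A quick sketch of what `__slots__` buys (the class names here are invented for illustration): instances of a slotted class reject undeclared attributes and carry no per-instance `__dict__`.

```python
class Slotted(object):
    __slots__ = ["y0", "y1", "y2"]

class Unslotted(object):
    pass

s = Slotted()
s.y0 = 1                      # fine: "y0" is declared in __slots__
try:
    s.extra = 2               # AttributeError: not declared in __slots__
except AttributeError:
    print("slotted instances reject undeclared attributes")

# The space saving comes from the missing per-instance __dict__:
print(hasattr(Slotted(), '__dict__'))    # False
print(hasattr(Unslotted(), '__dict__'))  # True
```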
### Option: Using a list
Another suggestion which I'd overlooked comes from Bill the Lizard:
```
def h(x):
    y0 = x + 1
    result = [y0]
    result.append(x * 3)
    result.append(y0 ** 2)
    return result
```
This is my least favorite method though. I suppose I'm tainted by exposure to Haskell, but the idea of mixed-type lists has always felt uncomfortable to me. In this particular example the list is -not- mixed type, but it conceivably could be. A list used in this way really doesn't gain anything with respect to the tuple as far as I can tell. The only real difference between lists and tuples in Python is that lists are [mutable](http://docs.python.org/library/stdtypes.html#typesseq-mutable), whereas tuples are not. I personally tend to carry over the conventions from functional programming: use lists for any number of elements of the same type, and tuples for a fixed number of elements of predetermined types.
# Question
After the lengthy preamble, comes the inevitable question. Which method (do you think) is best?
I've typically found myself going the dictionary route because it involves less set-up work. From a types perspective however, you might be better off going the class route, since that may help you avoid confusing what a dictionary represents. On the other hand, there are some in the Python community that feel [implied interfaces should be preferred to explicit interfaces](http://www.canonical.org/~kragen/isinstance/), at which point the type of the object really isn't relevant, since you're basically relying on the convention that the same attribute will always have the same meaning.
So, how do -you- return multiple values in Python? | I prefer to use tuples whenever a tuple feels "natural"; coordinates are a typical example, where the separate objects can stand on their own, e.g. in one-axis only scaling calculations.
I use dictionaries as a return value only when the grouped objects aren't always the same. Think optional email headers.
For the rest of the cases, where the grouped objects have inherent meaning inside the group or a fully-fledged object with its own methods is needed, I use a class. |
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 354,958 | 33 | 2008-12-10T02:42:05Z | [
"python",
"coding-style",
"return",
"return-value"
] | *(question body repeated, see first occurrence above)* | I vote for the dictionary.
I find that if I make a function that returns anything more than 2-3 variables I'll fold them up in a dictionary. Otherwise I tend to forget the order and content of what I'm returning.
Also, introducing a 'special' structure makes your code more difficult to follow. (Someone else will have to search through the code to find out what it is)
If you're concerned about type lookup, use descriptive dictionary keys, for example 'x-values list'.
```
def g(x):
y0 = x + 1
y1 = x * 3
    y2 = y0 ** 2
    return {'y0': y0, 'y1': y1, 'y2': y2}
``` |
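A self-contained sketch of consuming such a dictionary return value (redefining the `g()` from the answer above, with its undefined `y3` replaced by a concrete exponent for the example):

```python
def g(x):
    y0 = x + 1
    return {'y0': y0, 'y1': x * 3, 'y2': y0 ** 2}

result = g(2)
print(result['y1'])           # 6
print(result.get('y3', 0))    # 0: .get() supplies a default for absent keys
```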
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 355,036 | 9 | 2008-12-10T03:51:47Z | [
"python",
"coding-style",
"return",
"return-value"
] | *(question body repeated, see first occurrence above)* | +1 on S.Lott's suggestion of a named container class.
For Python 2.6 and up, a [named tuple](http://docs.python.org/library/collections.html#collections.namedtuple) provides a useful way of easily creating these container classes, and the results are "lightweight and require no more memory than regular tuples".
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 356,695 | 278 | 2008-12-10T16:36:01Z | [
"python",
"coding-style",
"return",
"return-value"
] | *(question body repeated, see first occurrence above)* | [Named tuples](http://docs.python.org/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields) were added in 2.6 for this purpose. Also see [os.stat](http://docs.python.org/library/os.html#os.stat) for a similar builtin example.
```
>>> import collections
>>> point = collections.namedtuple('Point', ['x', 'y'])
>>> p = point(1, y=2)
>>> p.x, p.y
(1, 2)
>>> p[0], p[1]
(1, 2)
``` |
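Named tuples also unpack exactly like plain tuples, and can be converted to a dict view when key-based access is wanted. A small sketch (the `Point` example mirrors the one above):

```python
import collections

Point = collections.namedtuple('Point', ['x', 'y'])
p = Point(1, 2)

x, y = p                      # unpacks exactly like a plain tuple
print(x, y)                   # 1 2
print(p._asdict()['x'])       # 1: dict-style access when needed
```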
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 21,970,184 | 13 | 2014-02-23T15:26:30Z | [
"python",
"coding-style",
"return",
"return-value"
] | *(question body repeated, see first occurrence above)* | Another option would be using generators:
```
>>> def f(x):
y0 = x + 1
yield y0
yield x * 3
yield y0 ** 4
>>> a, b, c = f(5)
>>> a
6
>>> b
15
>>> c
1296
```
Although IMHO tuples are usually best, except in cases where the values being returned are candidates for encapsulation in a class. |
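One caveat worth noting about the generator approach: the values are produced lazily and the generator is good for a single pass only. A minimal sketch (reusing the `f` defined above):

```python
def f(x):
    y0 = x + 1
    yield y0
    yield x * 3
    yield y0 ** 4

gen = f(5)
print(next(gen))   # 6  -- values are computed lazily, one at a time
print(next(gen))   # 15
print(next(gen))   # 1296
try:
    next(gen)      # the generator is exhausted after a single pass
except StopIteration:
    print("exhausted")
```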
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 28,076,666 | 11 | 2015-01-21T20:57:30Z | [
"python",
"coding-style",
"return",
"return-value"
] | *(question body repeated, see first occurrence above)* | ```
>>> def func():
... return [1,2,3]
...
>>> a,b,c = func()
>>> a
1
>>> b
2
>>> c
3
``` |
How do you return multiple values in Python? | 354,883 | 435 | 2008-12-10T01:55:50Z | 36,632,980 | 9 | 2016-04-14T20:08:13Z | [
"python",
"coding-style",
"return",
"return-value"
] | *(question body repeated, see first occurrence above)* | A lot of the answers seem to suggest you need to return a collection of some sort, like a dictionary or a list. You could also leave off the extra syntax and just write out the return values, comma-separated. Note: this technically returns a tuple.
```
def f():
return True, False
x, y = f()
print x
print y
```
gives:
```
True
False
``` |
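To underline the "technically returns a tuple" note: it is the comma, not any parentheses, that builds the tuple. A small sketch:

```python
def f():
    return True, False        # the comma builds the tuple; parentheses are optional

result = f()
print(type(result) is tuple)  # True
x, y = result                 # unpacks just like an explicit tuple
print(x, y)                   # True False
```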
A get() like method for checking for Python attributes | 355,539 | 37 | 2008-12-10T09:49:39Z | 355,562 | 15 | 2008-12-10T09:59:36Z | [
"python",
"attributes"
] | If I had a dictionary `dict` and I wanted to check for `dict['key']` I could either do so in a `try` block (bleh!) or use the `get()` method, with `False` as a default value.
I'd like to do the same thing for `object.attribute`. That is, I already have object to return `False` if it hasn't been set, but then that gives me errors like
> AttributeError: 'bool' object has no attribute 'attribute' | Do you mean `hasattr()` perhaps?
```
hasattr(object, "attribute name") #Returns True or False
```
[Python.org doc - Built in functions - hasattr()](http://docs.python.org/library/functions.html#hasattr)
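As a quick sketch (the class and attribute names here are invented for illustration), note that `hasattr` works for plain attributes and methods alike:

```python
class Settings(object):
    verbose = True  # a plain attribute

    def reload(self):  # a method -- methods are attributes too
        pass

s = Settings()
print(hasattr(s, "verbose"))  # True: plain attribute
print(hasattr(s, "reload"))   # True: methods count as attributes
print(hasattr(s, "missing"))  # False: was never set
```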
You can also do this, which is a bit more cluttered and doesn't work for methods.
```
"attribute" in obj.__dict__
``` |
A get() like method for checking for Python attributes | 355,539 | 37 | 2008-12-10T09:49:39Z | 356,227 | 58 | 2008-12-10T14:27:29Z | [
"python",
"attributes"
] | If I had a dictionary `dict` and I wanted to check for `dict['key']` I could either do so in a `try` block (bleh!) or use the `get()` method, with `False` as a default value.
I'd like to do the same thing for `object.attribute`. That is, I already have object to return `False` if it hasn't been set, but then that gives me errors like
> AttributeError: 'bool' object has no attribute 'attribute' | A more direct analogue to `dict.get(key, default)` than `hasattr` is `getattr`.
```
val = getattr(obj, 'attr_to_check', default_value)
```
(Where `default_value` is optional; if it is omitted and the attribute does not exist, `getattr` raises `AttributeError`.)
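For example (the class and attribute names are made up for illustration):

```python
class Config(object):
    debug = True

cfg = Config()
print(getattr(cfg, 'debug', False))    # True: attribute exists, default ignored
print(getattr(cfg, 'verbose', False))  # False: attribute missing, default returned

# Without a default, a missing attribute raises AttributeError:
try:
    getattr(cfg, 'verbose')
except AttributeError:
    print('no such attribute')
```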
For your example, you would pass `False`. |
How to compute the nth root of a very big integer | 356,090 | 17 | 2008-12-10T13:49:19Z | 356,187 | 14 | 2008-12-10T14:17:08Z | [
"python",
"math",
"nth-root"
] | I need a way to compute the nth root of a long integer in Python.
I tried `pow(m, 1.0/n)`, but it doesn't work:
> OverflowError: long int too large to convert to float
Any ideas?
By long integer I mean REALLY long integers like:
> 11968003966030964356885611480383408833172346450467339251
> 196093144141045683463085291115677488411620264826942334897996389
> 485046262847265769280883237649461122479734279424416861834396522
> 819159219215308460065265520143082728303864638821979329804885526
> 557893649662037092457130509980883789368448042961108430809620626
> 059287437887495827369474189818588006905358793385574832590121472
> 680866521970802708379837148646191567765584039175249171110593159
> 305029014037881475265618958103073425958633163441030267478942720
> 703134493880117805010891574606323700178176718412858948243785754
> 898788359757528163558061136758276299059029113119763557411729353
> 915848889261125855717014320045292143759177464380434854573300054
> 940683350937992500211758727939459249163046465047204851616590276
> 724564411037216844005877918224201569391107769029955591465502737
> 961776799311859881060956465198859727495735498887960494256488224
> 613682478900505821893815926193600121890632 | [Gmpy](http://code.google.com/p/gmpy/) is a C-coded Python extension module that wraps the GMP library to provide to Python code fast multiprecision arithmetic (integer, rational, and float), random number generation, advanced number-theoretical functions, and more.
Includes a `root` function:
> x.root(n): returns a 2-element tuple (y,m), such that y is the
> (possibly truncated) n-th root of x; m, an ordinary Python int,
> is 1 if the root is exact (x==y\*\*n), else 0. n must be an ordinary
> Python int, >=0.
For example, 20th root:
```
>>> import gmpy
>>> i0=11968003966030964356885611480383408833172346450467339251
>>> m0=gmpy.mpz(i0)
>>> m0
mpz(11968003966030964356885611480383408833172346450467339251L)
>>> m0.root(20)
(mpz(567), 0)
``` |
How to compute the nth root of a very big integer | 356,090 | 17 | 2008-12-10T13:49:19Z | 356,206 | 16 | 2008-12-10T14:22:21Z | [
"python",
"math",
"nth-root"
] | I need a way to compute the nth root of a long integer in Python.
I tried `pow(m, 1.0/n)`, but it doesn't work:
> OverflowError: long int too large to convert to float
Any ideas?
By long integer I mean REALLY long integers like:
> 11968003966030964356885611480383408833172346450467339251
> 196093144141045683463085291115677488411620264826942334897996389
> 485046262847265769280883237649461122479734279424416861834396522
> 819159219215308460065265520143082728303864638821979329804885526
> 557893649662037092457130509980883789368448042961108430809620626
> 059287437887495827369474189818588006905358793385574832590121472
> 680866521970802708379837148646191567765584039175249171110593159
> 305029014037881475265618958103073425958633163441030267478942720
> 703134493880117805010891574606323700178176718412858948243785754
> 898788359757528163558061136758276299059029113119763557411729353
> 915848889261125855717014320045292143759177464380434854573300054
> 940683350937992500211758727939459249163046465047204851616590276
> 724564411037216844005877918224201569391107769029955591465502737
> 961776799311859881060956465198859727495735498887960494256488224
> 613682478900505821893815926193600121890632 | If it's a REALLY big number, you could use a binary search.
```
def find_invpow(x,n):
"""Finds the integer component of the n'th root of x,
an integer such that y ** n <= x < (y + 1) ** n.
"""
high = 1
while high ** n <= x:
high *= 2
    low = high // 2
while low < high:
mid = (low + high) // 2
if low < mid and mid**n < x:
low = mid
elif high > mid and mid**n > x:
high = mid
else:
return mid
return mid + 1
```
For example:
```
>>> x = 237734537465873465
>>> n = 5
>>> y = find_invpow(x,n)
>>> y
2986
>>> y**n <= x <= (y+1)**n
True
>>>
>>> x = 119680039660309643568856114803834088331723464504673392511960931441>
>>> n = 45
>>> y = find_invpow(x,n)
>>> y
227661383982863143360L
>>> y**n <= x < (y+1)**n
True
>>> find_invpow(y**n,n) == y
True
>>>
``` |
Minimal Python build for my application's scripting needs? | 356,452 | 2 | 2008-12-10T15:22:41Z | 356,579 | 7 | 2008-12-10T16:00:10Z | [
"python",
"scripting",
"python-embedding"
] | What is your advice on building a very minimalistic version of Python (2.x) for my application's scripting needs?
My main motive here is to keep the footprint (both memory- and disk-wise) as low as possible so that my native application won't suffer any major performance hit. Even the Python DLL size is a consideration, because of the possibility of increasing my application's boot-up time.
Can we go as low as [Lua](http://www.lua.org/) or other lightweight solutions? | Have you tried [Tiny Python](http://www.tinypy.org/)? |
Python regex findall numbers and dots | 356,483 | 6 | 2008-12-10T15:33:09Z | 356,494 | 12 | 2008-12-10T15:36:07Z | [
"python",
"regex",
"findall"
] | I'm using re.findall() to extract some version numbers from an HTML file:
```
>>> import re
>>> text = "<table><td><a href=\"url\">Test0.2.1.zip</a></td><td>Test0.2.1</td></table> Test0.2.1"
>>> re.findall("Test([\.0-9]*)", text)
['0.2.1.', '0.2.1', '0.2.1']
```
but I would like to only get the ones that do not end in a dot.
The filename might not always be .zip so I can't just stick .zip in the regex.
I wanna end up with:
```
['0.2.1', '0.2.1']
```
Can anyone suggest a better regex to use? :) | ```
re.findall(r"Test([0-9.]*[0-9]+)", text)
```
or, a bit shorter:
```
re.findall(r"Test([\d.]*\d+)", text)
```
By the way - you don't need to escape the dot inside a character class; in Python's `re`, both forms match exactly the same characters:
```
[\.0-9] // matches: 0 1 2 3 4 5 6 7 8 9 .
[.0-9]  // matches: 0 1 2 3 4 5 6 7 8 9 .
``` |
How would you parse indentation (python style)? | 356,638 | 9 | 2008-12-10T16:17:25Z | 356,954 | 10 | 2008-12-10T17:48:55Z | [
"python",
"parsing",
"indentation",
"lexer"
] | How would you define your parser and lexer rules to parse a language that uses indentation for defining scope.
I have already googled and found a clever approach for parsing it by generating INDENT and DEDENT tokens in the lexer.
I will go deeper on this problem and post an answer if I come to something interesting, but I would like to see other approaches to the problem.
EDIT:
As Charlie pointed out, [there is already another thread very similar if not the same.](http://stackoverflow.com/questions/232682/how-would-you-go-about-implementing-off-side-rule) Should my post be deleted? | This is kind of hypothetical, as it would depend on what technology you have for your lexer and parser, but the easiest way would seem to be to have BEGINBLOCK and ENDBLOCK tokens analogous to braces in C. Using the ["offsides rule"](http://en.wikipedia.org/wiki/Off-side_rule) your lexer needs to keep track of a stack of indendtation levels. When the indent level increases, emit a BEGINBLOCK for the parser; when the indentation level decreases, emit ENDBLOCK and pop levels off the stack.
[Here's another discussion](http://stackoverflow.com/questions/232682/how-would-you-go-about-implementing-off-side-rule) of this on SO, btw. |
How to handle constructors or methods with a different set (or type) of arguments in Python? | 356,718 | 14 | 2008-12-10T16:42:08Z | 356,820 | 10 | 2008-12-10T17:14:02Z | [
"python"
] | Is there a way in Python, to have more than one constructor or more than one method with the *same name*, who differ in the *number of arguments* they accept or the *type(s) of one or more argument(s)*?
If not, what would be the best way to handle such situations?
As an example I made up a color class. *This class should only serve as a basic example to discuss this*; there is lots of unnecessary and/or redundant stuff in there.
It would be nice if I could call the constructor with different objects (a list, another color object or three integers...) and have the constructor handle them accordingly. In this basic example it works in some cases with \* args and \* \* kwargs, but using class methods is the only general way I came up with. What would be a "**best practice**" like solution for this?
The constructor aside, if I would like to implement an \_ \_ **add** \_ \_ method too, how can I get this method to accept all of this: A plain integer (which is added to all values), three integers (where the first is added to the red value and so forth) or another color object (where both red values are added together, etc.)?
**EDIT**
* I added an *alternative* constructor (initializer, \_ \_ **init** \_ \_) that basically does all the stuff I wanted.
* But I stick with the first one and the factory methods. Seems clearer.
* I also added an \_ \_ **add** \_ \_, which does all the things mentioned above but I'm not sure if it's *good style*. I try to use the iteration protocol and fall back to "single value mode" instead of checking for specific types. Maybe still ugly tho.
* I have taken a look at \_ \_ **new** \_ \_, thanks for the links.
* My first quick try with it fails: I filter the rgb values from the \* args and \* \* kwargs (is it a class, a list, etc.) then call the superclass's \_ \_ new \_ \_ with the right args (just r,g,b) to pass it along to init. The call to the 'Super(cls, self).\_ \_ new \_ \_ (....)' works, but since I generate and return the same object as the one I call from (as intended), all the original args get passed to \_ \_ init \_ \_ (working as intended), so it bails.
* I could get rid of the \_ \_ init \_ \_ completely and set the values in the \_ \_ new \_ \_ but I don't know... feels like I'm abusing stuff here ;-) I should take a good look at metaclasses and new first I guess.
Source:
```
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
class Color (object):
# It's strict on what to accept, but I kinda like it that way.
def __init__(self, r=0, g=0, b=0):
self.r = r
self.g = g
self.b = b
# Maybe this would be a better __init__?
# The first may be more clear but this could handle way more cases...
# I like the first more though. What do you think?
#
#def __init__(self, obj):
# self.r, self.g, self.b = list(obj)[:3]
    # This method allows using lists longer than 3 items (e.g. rgba), where
# 'Color(*alist)' would bail
@classmethod
def from_List(cls, alist):
r, g, b = alist[:3]
return cls(r, g, b)
# So we could use dicts with more keys than rgb keys, where
# 'Color(**adict)' would bail
@classmethod
def from_Dict(cls, adict):
return cls(adict['r'], adict['g'], adict['b'])
    # This should theoretically work with every object that's iterable.
# Maybe that's more intuitive duck typing than to rely on an object
    # to have an as_List() method or similar.
@classmethod
def from_Object(cls, obj):
return cls.from_List(list(obj))
def __str__(self):
return "<Color(%s, %s, %s)>" % (self.r, self.g, self.b)
def _set_rgb(self, r, g, b):
self.r = r
self.g = g
self.b = b
def _get_rgb(self):
return (self.r, self.g, self.b)
rgb = property(_get_rgb, _set_rgb)
def as_List(self):
return [self.r, self.g, self.b]
def __iter__(self):
return (c for c in (self.r, self.g, self.b))
# We could add a single value (to all colorvalues) or a list of three
    # (or more) values (from any object supporting the iterator protocol)
# one for each colorvalue
def __add__(self, obj):
r, g, b = self.r, self.g, self.b
try:
ra, ga, ba = list(obj)[:3]
except TypeError:
ra = ga = ba = obj
r += ra
g += ga
b += ba
return Color(*Color.check_rgb(r, g, b))
@staticmethod
def check_rgb(*vals):
ret = []
for c in vals:
c = int(c)
c = min(c, 255)
c = max(c, 0)
ret.append(c)
return ret
class ColorAlpha(Color):
def __init__(self, r=0, g=0, b=0, alpha=255):
Color.__init__(self, r, g, b)
self.alpha = alpha
def __str__(self):
return "<Color(%s, %s, %s, %s)>" % (self.r, self.g, self.b, self.alpha)
# ...
if __name__ == '__main__':
l = (220, 0, 70)
la = (57, 58, 61, 255)
d = {'r': 220, 'g': 0, 'b':70}
da = {'r': 57, 'g': 58, 'b':61, 'a':255}
c = Color(); print c # <Color(0, 0, 0)>
ca = ColorAlpha(*la); print ca # <Color(57, 58, 61, 255)>
print '---'
c = Color(220, 0, 70); print c # <Color(220, 0, 70)>
c = Color(*l); print c # <Color(220, 0, 70)>
#c = Color(*la); print c # -> Fail
c = Color(**d); print c # <Color(220, 0, 70)>
#c = Color(**da); print c # -> Fail
print '---'
c = Color.from_Object(c); print c # <Color(220, 0, 70)>
c = Color.from_Object(ca); print c # <Color(57, 58, 61, 255)>
c = Color.from_List(l); print c # <Color(220, 0, 70)>
c = Color.from_List(la); print c # <Color(57, 58, 61, 255)>
c = Color.from_Dict(d); print c # <Color(220, 0, 70)>
c = Color.from_Dict(da); print c # <Color(57, 58, 61, 255)>
print '---'
print 'Check =', Color.check_rgb('1', 0x29a, -23, 40)
# Check = [1, 255, 0, 40]
print '%s + %s = %s' % (c, 10, c + 10)
# <Color(57, 58, 61)> + 10 = <Color(67, 68, 71)>
print '%s + %s = %s' % (c, ca, c + ca)
# <Color(57, 58, 61)> + <Color(57, 58, 61, 255)> = <Color(114, 116, 122)>
``` | You can have the factory methods, it is fine. But why not just call it as it is?
```
Color(r, g, b)
Color(*[r, g, b])
Color(**{'r': r, 'g': g, 'b': b})
```
This is the python way. As for the from object constructor, I would prefer something like:
```
Color(*Color2.as_list())
```
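To see that all of these spellings reach the same constructor, here is a throwaway sketch (the minimal `Color` class below is my own, not the full one from the question):

```python
class Color(object):
    def __init__(self, r=0, g=0, b=0):
        self.r, self.g, self.b = r, g, b

    def as_list(self):
        return [self.r, self.g, self.b]

c1 = Color(220, 0, 70)                      # plain positional arguments
c2 = Color(*[220, 0, 70])                   # unpack a list with *
c3 = Color(**{'r': 220, 'g': 0, 'b': 70})   # unpack a dict with **
c4 = Color(*c1.as_list())                   # "copy" another color
print([c.as_list() for c in (c1, c2, c3, c4)])
# [[220, 0, 70], [220, 0, 70], [220, 0, 70], [220, 0, 70]]
```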
*Explicit is better than implicit* - Python Zen |
How to handle constructors or methods with a different set (or type) of arguments in Python? | 356,718 | 14 | 2008-12-10T16:42:08Z | 357,004 | 7 | 2008-12-10T18:02:06Z | [
"python"
] | Is there a way in Python, to have more than one constructor or more than one method with the *same name*, who differ in the *number of arguments* they accept or the *type(s) of one or more argument(s)*?
If not, what would be the best way to handle such situations?
As an example I made up a color class. *This class should only serve as a basic example to discuss this*; there is lots of unnecessary and/or redundant stuff in there.
It would be nice if I could call the constructor with different objects (a list, another color object or three integers...) and have the constructor handle them accordingly. In this basic example it works in some cases with \* args and \* \* kwargs, but using class methods is the only general way I came up with. What would be a "**best practice**" like solution for this?
The constructor aside, if I would like to implement an \_ \_ **add** \_ \_ method too, how can I get this method to accept all of this: A plain integer (which is added to all values), three integers (where the first is added to the red value and so forth) or another color object (where both red values are added together, etc.)?
**EDIT**
* I added an *alternative* constructor (initializer, \_ \_ **init** \_ \_) that basically does all the stuff I wanted.
* But I stick with the first one and the factory methods. Seems clearer.
* I also added an \_ \_ **add** \_ \_, which does all the things mentioned above but I'm not sure if it's *good style*. I try to use the iteration protocol and fall back to "single value mode" instead of checking for specific types. Maybe still ugly tho.
* I have taken a look at \_ \_ **new** \_ \_, thanks for the links.
* My first quick try with it fails: I filter the rgb values from the \* args and \* \* kwargs (is it a class, a list, etc.) then call the superclass's \_ \_ new \_ \_ with the right args (just r,g,b) to pass it along to init. The call to the 'Super(cls, self).\_ \_ new \_ \_ (....)' works, but since I generate and return the same object as the one I call from (as intended), all the original args get passed to \_ \_ init \_ \_ (working as intended), so it bails.
* I could get rid of the \_ \_ init \_ \_ completely and set the values in the \_ \_ new \_ \_ but I don't know... feels like I'm abusing stuff here ;-) I should take a good look at metaclasses and new first I guess.
Source:
```
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
class Color (object):
# It's strict on what to accept, but I kinda like it that way.
def __init__(self, r=0, g=0, b=0):
self.r = r
self.g = g
self.b = b
# Maybe this would be a better __init__?
# The first may be more clear but this could handle way more cases...
# I like the first more though. What do you think?
#
#def __init__(self, obj):
# self.r, self.g, self.b = list(obj)[:3]
    # This method allows using lists longer than 3 items (e.g. rgba), where
# 'Color(*alist)' would bail
@classmethod
def from_List(cls, alist):
r, g, b = alist[:3]
return cls(r, g, b)
# So we could use dicts with more keys than rgb keys, where
# 'Color(**adict)' would bail
@classmethod
def from_Dict(cls, adict):
return cls(adict['r'], adict['g'], adict['b'])
    # This should theoretically work with every object that's iterable.
# Maybe that's more intuitive duck typing than to rely on an object
    # to have an as_List() method or similar.
@classmethod
def from_Object(cls, obj):
return cls.from_List(list(obj))
def __str__(self):
return "<Color(%s, %s, %s)>" % (self.r, self.g, self.b)
def _set_rgb(self, r, g, b):
self.r = r
self.g = g
self.b = b
def _get_rgb(self):
return (self.r, self.g, self.b)
rgb = property(_get_rgb, _set_rgb)
def as_List(self):
return [self.r, self.g, self.b]
def __iter__(self):
return (c for c in (self.r, self.g, self.b))
# We could add a single value (to all colorvalues) or a list of three
    # (or more) values (from any object supporting the iterator protocol)
# one for each colorvalue
def __add__(self, obj):
r, g, b = self.r, self.g, self.b
try:
ra, ga, ba = list(obj)[:3]
except TypeError:
ra = ga = ba = obj
r += ra
g += ga
b += ba
return Color(*Color.check_rgb(r, g, b))
@staticmethod
def check_rgb(*vals):
ret = []
for c in vals:
c = int(c)
c = min(c, 255)
c = max(c, 0)
ret.append(c)
return ret
class ColorAlpha(Color):
def __init__(self, r=0, g=0, b=0, alpha=255):
Color.__init__(self, r, g, b)
self.alpha = alpha
def __str__(self):
return "<Color(%s, %s, %s, %s)>" % (self.r, self.g, self.b, self.alpha)
# ...
if __name__ == '__main__':
l = (220, 0, 70)
la = (57, 58, 61, 255)
d = {'r': 220, 'g': 0, 'b':70}
da = {'r': 57, 'g': 58, 'b':61, 'a':255}
c = Color(); print c # <Color(0, 0, 0)>
ca = ColorAlpha(*la); print ca # <Color(57, 58, 61, 255)>
print '---'
c = Color(220, 0, 70); print c # <Color(220, 0, 70)>
c = Color(*l); print c # <Color(220, 0, 70)>
#c = Color(*la); print c # -> Fail
c = Color(**d); print c # <Color(220, 0, 70)>
#c = Color(**da); print c # -> Fail
print '---'
c = Color.from_Object(c); print c # <Color(220, 0, 70)>
c = Color.from_Object(ca); print c # <Color(57, 58, 61, 255)>
c = Color.from_List(l); print c # <Color(220, 0, 70)>
c = Color.from_List(la); print c # <Color(57, 58, 61, 255)>
c = Color.from_Dict(d); print c # <Color(220, 0, 70)>
c = Color.from_Dict(da); print c # <Color(57, 58, 61, 255)>
print '---'
print 'Check =', Color.check_rgb('1', 0x29a, -23, 40)
# Check = [1, 255, 0, 40]
print '%s + %s = %s' % (c, 10, c + 10)
# <Color(57, 58, 61)> + 10 = <Color(67, 68, 71)>
print '%s + %s = %s' % (c, ca, c + ca)
# <Color(57, 58, 61)> + <Color(57, 58, 61, 255)> = <Color(114, 116, 122)>
``` | Python doesn't accept multiple methods with the same name, period. One method does one thing.
I've seen different approaches recommended on how to handle this ... classmethods (like you outlined above) or factory functions. I like keyword arguments the most.
```
class Color (object):
def __init__(self, **parms):
if parms.get('list'):
self.r, self.g, self.b = parms['list']
elif parms.get('color'):
color = parms['color']
self.r = color.r
self.g = color.g
self.b = color.b
else:
self.r = parms['red']
self.g = parms['green']
self.b = parms['blue']
c1 = Color(red=220, green=0, blue=270)
c2 = Color(list=[220, 0, 70])
c3 = Color(color=c1)
```
This fits the Python way of being explicit and readable, plus it easily allows you to add new arguments if needed.
EDIT: Plus I don't have to look at the actual constructor code to understand the arguments. The explanation is supplied by the keyword. |
How to handle constructors or methods with a different set (or type) of arguments in Python? | 356,718 | 14 | 2008-12-10T16:42:08Z | 357,256 | 9 | 2008-12-10T19:22:09Z | [
"python"
] | Is there a way in Python, to have more than one constructor or more than one method with the *same name*, who differ in the *number of arguments* they accept or the *type(s) of one or more argument(s)*?
If not, what would be the best way to handle such situations?
As an example I made up a color class. *This class should only serve as a basic example to discuss this*; there is lots of unnecessary and/or redundant stuff in there.
It would be nice if I could call the constructor with different objects (a list, another color object or three integers...) and have the constructor handle them accordingly. In this basic example it works in some cases with \* args and \* \* kwargs, but using class methods is the only general way I came up with. What would be a "**best practice**" like solution for this?
The constructor aside, if I would like to implement an \_ \_ **add** \_ \_ method too, how can I get this method to accept all of this: A plain integer (which is added to all values), three integers (where the first is added to the red value and so forth) or another color object (where both red values are added together, etc.)?
**EDIT**
* I added an *alternative* constructor (initializer, \_ \_ **init** \_ \_) that basically does all the stuff I wanted.
* But I stick with the first one and the factory methods. Seems clearer.
* I also added an \_ \_ **add** \_ \_, which does all the things mentioned above but I'm not sure if it's *good style*. I try to use the iteration protocol and fall back to "single value mode" instead of checking for specific types. Maybe still ugly tho.
* I have taken a look at \_ \_ **new** \_ \_, thanks for the links.
* My first quick try with it fails: I filter the rgb values from the \* args and \* \* kwargs (is it a class, a list, etc.) then call the superclass's \_ \_ new \_ \_ with the right args (just r,g,b) to pass it along to init. The call to the 'Super(cls, self).\_ \_ new \_ \_ (....)' works, but since I generate and return the same object as the one I call from (as intended), all the original args get passed to \_ \_ init \_ \_ (working as intended), so it bails.
* I could get rid of the \_ \_ init \_ \_ completely and set the values in the \_ \_ new \_ \_ but I don't know... feels like I'm abusing stuff here ;-) I should take a good look at metaclasses and new first I guess.
Source:
```
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
class Color (object):
# It's strict on what to accept, but I kinda like it that way.
def __init__(self, r=0, g=0, b=0):
self.r = r
self.g = g
self.b = b
# Maybe this would be a better __init__?
# The first may be more clear but this could handle way more cases...
# I like the first more though. What do you think?
#
#def __init__(self, obj):
# self.r, self.g, self.b = list(obj)[:3]
    # This method allows using lists longer than 3 items (e.g. rgba), where
# 'Color(*alist)' would bail
@classmethod
def from_List(cls, alist):
r, g, b = alist[:3]
return cls(r, g, b)
# So we could use dicts with more keys than rgb keys, where
# 'Color(**adict)' would bail
@classmethod
def from_Dict(cls, adict):
return cls(adict['r'], adict['g'], adict['b'])
    # This should theoretically work with every object that's iterable.
# Maybe that's more intuitive duck typing than to rely on an object
    # to have an as_List() method or similar.
@classmethod
def from_Object(cls, obj):
return cls.from_List(list(obj))
def __str__(self):
return "<Color(%s, %s, %s)>" % (self.r, self.g, self.b)
def _set_rgb(self, r, g, b):
self.r = r
self.g = g
self.b = b
def _get_rgb(self):
return (self.r, self.g, self.b)
rgb = property(_get_rgb, _set_rgb)
def as_List(self):
return [self.r, self.g, self.b]
def __iter__(self):
return (c for c in (self.r, self.g, self.b))
# We could add a single value (to all colorvalues) or a list of three
    # (or more) values (from any object supporting the iterator protocol)
# one for each colorvalue
def __add__(self, obj):
r, g, b = self.r, self.g, self.b
try:
ra, ga, ba = list(obj)[:3]
except TypeError:
ra = ga = ba = obj
r += ra
g += ga
b += ba
return Color(*Color.check_rgb(r, g, b))
@staticmethod
def check_rgb(*vals):
ret = []
for c in vals:
c = int(c)
c = min(c, 255)
c = max(c, 0)
ret.append(c)
return ret
class ColorAlpha(Color):
def __init__(self, r=0, g=0, b=0, alpha=255):
Color.__init__(self, r, g, b)
self.alpha = alpha
def __str__(self):
return "<Color(%s, %s, %s, %s)>" % (self.r, self.g, self.b, self.alpha)
# ...
if __name__ == '__main__':
l = (220, 0, 70)
la = (57, 58, 61, 255)
d = {'r': 220, 'g': 0, 'b':70}
da = {'r': 57, 'g': 58, 'b':61, 'a':255}
c = Color(); print c # <Color(0, 0, 0)>
ca = ColorAlpha(*la); print ca # <Color(57, 58, 61, 255)>
print '---'
c = Color(220, 0, 70); print c # <Color(220, 0, 70)>
c = Color(*l); print c # <Color(220, 0, 70)>
#c = Color(*la); print c # -> Fail
c = Color(**d); print c # <Color(220, 0, 70)>
#c = Color(**da); print c # -> Fail
print '---'
c = Color.from_Object(c); print c # <Color(220, 0, 70)>
c = Color.from_Object(ca); print c # <Color(57, 58, 61, 255)>
c = Color.from_List(l); print c # <Color(220, 0, 70)>
c = Color.from_List(la); print c # <Color(57, 58, 61, 255)>
c = Color.from_Dict(d); print c # <Color(220, 0, 70)>
c = Color.from_Dict(da); print c # <Color(57, 58, 61, 255)>
print '---'
print 'Check =', Color.check_rgb('1', 0x29a, -23, 40)
# Check = [1, 255, 0, 40]
print '%s + %s = %s' % (c, 10, c + 10)
# <Color(57, 58, 61)> + 10 = <Color(67, 68, 71)>
print '%s + %s = %s' % (c, ca, c + ca)
# <Color(57, 58, 61)> + <Color(57, 58, 61, 255)> = <Color(114, 116, 122)>
``` | In general, use factory methods, marked up as `@classmethod`s. They'll also work correctly on subclasses. From a design perspective, they are more explicit, especially when given a good name.
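A minimal sketch of such a classmethod factory (the names here are invented for illustration); the key point is that using `cls` rather than the base class name makes subclasses get instances of themselves:

```python
class Color(object):
    def __init__(self, r=0, g=0, b=0):
        self.r, self.g, self.b = r, g, b

    @classmethod
    def from_dict(cls, d):
        # cls, not Color: called on a subclass, this builds that subclass
        return cls(d['r'], d['g'], d['b'])

class AlphaColor(Color):
    pass

c = AlphaColor.from_dict({'r': 57, 'g': 58, 'b': 61})
print(type(c).__name__)  # AlphaColor, not Color
```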
In this case, mixing everything together is probably more convenient, but it also makes the contract for your constructor more difficult. |
What is the recommended way to use Vim folding for Python code | 357,785 | 83 | 2008-12-10T22:10:45Z | 360,634 | 83 | 2008-12-11T19:47:43Z | [
"python",
"vim",
"folding"
] | I am interested in enabling code folding in Vim for Python code. I have noticed multiple ways to do so.
Does anyone have a preferred way to do Python code folding in Vim? I.e,
* Do you have a particular Vim plugin that you use and like?
* Do you use manual folding or do you place markers in comments?
* Any other recommended ways to do code folding for Python in Vim? | Personally I can't convince myself to litter my code with the markers. I've become pretty used to (and efficient at) using indent-folding. Together with my mapping of the space bar (see below) to open/close folds and the zR and zM commands, I'm right at home. Perfect for Python!
> `nnoremap <space> za`
>
> `vnoremap <space> zf` |
What is the recommended way to use Vim folding for Python code | 357,785 | 83 | 2008-12-10T22:10:45Z | 361,548 | 21 | 2008-12-12T00:32:01Z | [
"python",
"vim",
"folding"
] | I am interested in enabling code folding in Vim for Python code. I have noticed multiple ways to do so.
Does anyone have a preferred way to do Python code folding in Vim? I.e,
* Do you have a particular Vim plugin that you use and like?
* Do you use manual folding or do you place markers in comments?
* Any other recommended ways to do code folding for Python in Vim? | I use [this](http://www.vim.org/scripts/script.php?script_id=2462 "this") syntax file for Python. It sets the folding method to syntax and folds all classes and functions, but nothing else. |
Does Python have something like anonymous inner classes of Java? | 357,997 | 28 | 2008-12-10T23:26:57Z | 358,012 | 13 | 2008-12-10T23:35:43Z | [
"python",
"class",
"anonymous-class"
] | In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want to create a subclass of `OptionParser` that overrides only a single method (for example `exit()`). In Java you can write something like this:
```
new OptionParser () {
public void exit() {
// body of the method
}
};
```
This piece of code creates an anonymous class that extends `OptionParser` and overrides only the `exit()` method.
Is there a similar idiom in Python? Which idiom is used in these circumstances? | Java uses anonymous classes mostly to imitate closures or simply code blocks. Since in Python you can easily pass around methods, there's no need for a construct as clunky as anonymous inner classes:
```
def printStuff():
print "hello"
def doit(what):
what()
doit(printStuff)
```
Edit: I'm aware that this is not what is needed in this special case. I just described the most common python solution to the problem most commonly by anonymous inner classes in Java. |
Does Python have something like anonymous inner classes of Java? | 357,997 | 28 | 2008-12-10T23:26:57Z | 358,055 | 11 | 2008-12-11T00:01:54Z | [
"python",
"class",
"anonymous-class"
] | In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want to create a subclass of `OptionParser` that overrides only a single method (for example `exit()`). In Java you can write something like this:
```
new OptionParser () {
public void exit() {
// body of the method
}
};
```
This piece of code creates an anonymous class that extends `OptionParser` and overrides only the `exit()` method.
Is there a similar idiom in Python? Which idiom is used in these circumstances? | You can accomplish this in three ways:
1. Proper subclass (of course)
2. a custom method that you invoke with the object as an argument
3. (what you probably want) -- adding a new method to an object (or replacing an existing one).
Example of option 3 (edited to remove use of the "new" module -- it's deprecated, I did not know):
```
import types
class someclass(object):
val = "Value"
def some_method(self):
print self.val
def some_method_upper(self):
print self.val.upper()
obj = someclass()
obj.some_method()
obj.some_method = types.MethodType(some_method_upper, obj)
obj.some_method()
``` |
Does Python have something like anonymous inner classes of Java? | 357,997 | 28 | 2008-12-10T23:26:57Z | 358,062 | 10 | 2008-12-11T00:05:13Z | [
"python",
"class",
"anonymous-class"
] | In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want to create a subclass of `OptionParser` that overrides only a single method (for example `exit()`). In Java you can write something like this:
```
new OptionParser () {
public void exit() {
// body of the method
}
};
```
This piece of code creates an anonymous class that extends `OptionParser` and overrides only the `exit()` method.
Is there a similar idiom in Python? Which idiom is used in these circumstances? | Well, classes are first class objects, so you can create them in methods if you want. e.g.
```
from optparse import OptionParser
def make_custom_op(i):
class MyOP(OptionParser):
def exit(self):
print 'custom exit called', i
return MyOP
custom_op_class = make_custom_op(3)
custom_op = custom_op_class()
custom_op.exit() # prints 'custom exit called 3'
dir(custom_op) # shows all the regular attributes of an OptionParser
```
But, really, why not just define the class at the normal level? If you need to customise it, put the customisation in as arguments to `__init__`.
(edit: fixed typing errors in code) |
Does Python have something like anonymous inner classes of Java? | 357,997 | 28 | 2008-12-10T23:26:57Z | 3,915,434 | 22 | 2010-10-12T13:53:02Z | [
"python",
"class",
"anonymous-class"
] | In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want to create a subclass of `OptionParser` that overrides only a single method (for example `exit()`). In Java you can write something like this:
```
new OptionParser () {
public void exit() {
// body of the method
}
};
```
This piece of code creates an anonymous class that extends `OptionParser` and overrides only the `exit()` method.
Is there a similar idiom in Python? Which idiom is used in these circumstances? | You can use the [`type(name, bases, dict)`](http://docs.python.org/library/functions.html#type) builtin function to create classes on the fly. For example:
```
op = type("MyOptionParser", (OptionParser,object), {"foo": lambda self: "foo" })
op().foo()
```
Since OptionParser isn't a new-style class, you have to explicitly include `object` in the list of base classes. |
log4j with timestamp per log entry | 358,225 | 41 | 2008-12-11T01:44:39Z | 358,247 | 77 | 2008-12-11T01:59:03Z | [
"java",
"python",
"log4j",
"jython"
] | this is my log output
```
INFO main digestemails - process inbox
INFO main digestemails - checking for emails in c:\development\DCMail\email\KN-Source
INFO main digestemails - digesting [email protected]
INFO main digestemails - extracting attachments
INFO main digestemails - no attachments or no attachments supported
INFO main digestemails - updating database
INFO main digestemails - email -> COMPLETED folder
INFO main digestemails -
```
I would like a time stamp per log message ie
```
INFO 2008-12-25 13:14:00 digestemails - email -> COMPLETED folder
```
here is my log4j config file
```
log4j.rootLogger=debug, stdout, R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern to output the caller's file name and line number.
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=c:\\development\\DCMail\\logs\\digestlogfolder\\digest-logfile.log
log4j.appender.R.MaxFileSize=100KB
# Keep one backup file
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
```
How do I do it? | Use %d in your PatternLayout.
Also, %d can take a format pattern, as in %d{dd MMM yyyy HH:mm:ss,SSS}; you can pick and choose the elements that you want. When the format pattern is omitted the date will be in ISO8601 format.
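For example, the file appender's pattern could become (one possible format, matching the desired output shown in the question):

```
log4j.appender.R.layout.ConversionPattern=%p %d{yyyy-MM-dd HH:mm:ss} %t %c - %m%n
```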
log4j with timestamp per log entry | 358,225 | 41 | 2008-12-11T01:44:39Z | 358,641 | 13 | 2008-12-11T07:45:42Z | [
"java",
"python",
"log4j",
"jython"
] | this is my log output
```
INFO main digestemails - process inbox
INFO main digestemails - checking for emails in c:\development\DCMail\email\KN-Source
INFO main digestemails - digesting [email protected]
INFO main digestemails - extracting attachments
INFO main digestemails - no attachments or no attachments supported
INFO main digestemails - updating database
INFO main digestemails - email -> COMPLETED folder
INFO main digestemails -
```
I would like a time stamp per log message ie
```
INFO 2008-12-25 13:14:00 digestemails - email -> COMPLETED folder
```
here is my log4j config file
```
log4j.rootLogger=debug, stdout, R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern to output the caller's file name and line number.
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=c:\\development\\DCMail\\logs\\digestlogfolder\\digest-logfile.log
log4j.appender.R.MaxFileSize=100KB
# Keep one backup file
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
```
How do I do it? | A extract from my properties file
```
log4j.rootLogger=INFO, stdout, logfile
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p (%t) [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.RollingFileAppender
log4j.appender.logfile.File=C:/log/client.log
log4j.appender.logfile.MaxFileSize=5MB
log4j.appender.logfile.MaxBackupIndex=0
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
``` |
python, set terminal type in pexpect | 358,783 | 4 | 2008-12-11T09:32:43Z | 359,241 | 7 | 2008-12-11T12:52:34Z | [
"python",
"pexpect"
] | I have a script which uses pexpect to start a CLI program. It works a bit like a shell where you get a prompt where you can enter some commands.
The problem I have, I think, is that this program uses a coloured prompt.
This is what I do
```
import pprint
import pexpect
1 a = pexpect.spawn('program')
2 a.expect("prompt>")
3 print "---------start------------"
4 print(a.before)
5 a.sendline("command")
6 a.expect("prompt>")
7 print "---------before------------"
8 pprint.pprint(a.before)
9 print "---------after------------"
10 pprint.pprint(a.after)
```
This is the output:
```
> python borken.py
---------start------------
A lot of text here from the enjoying programs start-up, lorem ipsum ...
---------before------------
' \x1b[0m\x1b[8D\x1b[K\x1b[1m\x1b[34m'
---------after------------
'prompt>'
```
For some reason the first prompt colour coding borkens up things and a.before at line 8 is garbled, normal print does not work, even if I see that the command at line 5 actually produced a lot of output.
Does someone know what the problem could be, or is it possible to set the terminal type in pexpect to avoid the colours?
I am using tcsh shell | Ok, I found the answer. csl's answer set me on the right path.
pexpect has an `env` option which I thought I could use, like this:
```
a = pexpect.spawn('program', env = {"TERM": "dumb"})
```
But this spawns the program with a fresh environment, which does not work for me; our development environment
depends on a lot of environment variables :/
But if I do this before spawning a shell:
```
import os
os.environ["TERM"] = "dumb"
```
I change the current "TERM"; the "dumb" terminal type does not support colours, which fixed my issue. |
Execute Commands Sequentially in Python | 359,347 | 13 | 2008-12-11T13:30:23Z | 359,506 | 17 | 2008-12-11T14:23:17Z | [
"python",
"windows",
"subprocess"
] | I would like to execute multiple commands in a row:
ie (just to illustrate my need):
**cmd** (the shell)
then
**cd dir**
and
**ls**
and read the result of the ls.
Any idea with subprocess module ?
UPDATE:
cd dir and ls are just an example. I need to run complex commands (following a particular order, without any pipelining). In fact, i would like one subprocess shell and the ability to launch many commands on it. | There is an easy way to execute a sequence of commands.
Use the following in `subprocess.Popen`
```
"command1; command2; command3"
```
Or, if you're stuck with windows, you have several choices.
* Create a temporary ".BAT" file, and provide this to `subprocess.Popen`
* Create a sequence of commands with "\n" separators in a single long string.
Use """s, like this.
```
"""
command1
command2
command3
"""
```
Or, if you must do things piecemeal, you have to do something like this.
```
class Command( object ):
def __init__( self, text ):
self.text = text
def execute( self ):
self.proc= subprocess.Popen( ... self.text ... )
self.proc.wait()
class CommandSequence( Command ):
def __init__( self, *steps ):
self.steps = steps
def execute( self ):
for s in self.steps:
s.execute()
```
That will allow you to build a sequence of commands. |
Execute Commands Sequentially in Python | 359,347 | 13 | 2008-12-11T13:30:23Z | 359,737 | 18 | 2008-12-11T15:35:32Z | [
"python",
"windows",
"subprocess"
] | I would like to execute multiple commands in a row:
ie (just to illustrate my need):
**cmd** (the shell)
then
**cd dir**
and
**ls**
and read the result of the ls.
Any idea with subprocess module ?
UPDATE:
cd dir and ls are just an example. I need to run complex commands (following a particular order, without any pipelining). In fact, i would like one subprocess shell and the ability to launch many commands on it. | To do that, you would have to:
* supply the `shell=True` argument in the `subprocess.Popen` call, and
* separate the commands with:
+ `;` if running under a \*nix shell (bash, ash, sh, ksh, csh, tcsh, zsh etc)
+ `&` if running under the `cmd.exe` of Windows |
How can I unload a DLL using ctypes in Python? | 359,498 | 10 | 2008-12-11T14:21:06Z | 359,570 | 10 | 2008-12-11T14:41:16Z | [
"python",
"dll",
"ctypes"
] | I'm using ctypes to load a DLL in Python. This works great.
Now we'd like to be able to reload that DLL at runtime.
The straightforward approach would seem to be:
1. Unload DLL
2. Load DLL
Unfortunately I'm not sure what the correct way to unload the DLL is.
\_ctypes.FreeLibrary is available, but private.
Is there some other way to unload the DLL? | you should be able to do it by disposing the object
```
mydll = ctypes.CDLL('...')
del mydll
mydll = ctypes.CDLL('...')
```
**EDIT:** Hop's comment is right, this unbinds the name, but garbage collection doesn't happen that quickly, in fact I even doubt it even releases the loaded library.
Ctypes doesn't seem to provide a clean way to release resources, it does only provide a `_handle` field to the dlopen handle...
So the only way I see, a really, **really non-clean way**, is to system dependently dlclose the handle, but it is very very unclean, as moreover ctypes keeps internally references to this handle. So unloading takes something of the form:
```
mydll = ctypes.CDLL('./mylib.so')
handle = mydll._handle
del mydll
while isLoaded('./mylib.so'):
dlclose(handle)
```
It's so unclean that I only checked it works using:
```
def isLoaded(lib):
libp = os.path.abspath(lib)
ret = os.system("lsof -p %d | grep %s > /dev/null" % (os.getpid(), libp))
return (ret == 0)
def dlclose(handle):
libdl = ctypes.CDLL("libdl.so")
libdl.dlclose(handle)
``` |
Lambda function for classes in python? | 360,368 | 6 | 2008-12-11T18:27:21Z | 360,425 | 11 | 2008-12-11T18:44:54Z | [
"python",
"lambda"
] | There must be an easy way to do this, but somehow I can wrap my head around it. The best way I can describe what I want is a lambda function for a class. I have a library that expects as an argument an uninstantiated version of a class to work with. It then instantiates the class itself to work on. The problem is that I'd like to be able to dynamically create versions of the class, to pass to the library, but I can't figure out how to do it since the library expects an uninstantiated version. The code below describes the problem:
```
class Double:
def run(self,x):
return x*2
class Triple:
def run(self,x):
return x*3
class Multiply:
def __init__(self,mult):
self.mult = mult
def run(self,x):
return x*self.mult
class Library:
def __init__(self,c):
self.c = c()
def Op(self,val):
return self.c.run(val)
op1 = Double
op2 = Triple
#op3 = Multiply(5)
lib1 = Library(op1)
lib2 = Library(op2)
#lib3 = Library(op3)
print lib1.Op(2)
print lib2.Op(2)
#print lib3.Op(2)
```
I can't use the generic Multiply class, because I must instantiate it first which breaks the library "AttributeError: Multiply instance has no **call** method". Without changing the Library class, is there a way I can do this? | Does the library really specify that it wants an "uninitialized version" (i.e. a class reference)?
It looks to me as if the library actually wants an object factory. In that case, it's acceptable to type:
```
lib3 = Library(lambda: Multiply(5))
```
To understand how the lambda works, consider the following:
```
Multiply5 = lambda: Multiply(5)
assert Multiply5().run(3) == Multiply(5).run(3)
``` |
Lambda function for classes in python? | 360,368 | 6 | 2008-12-11T18:27:21Z | 360,456 | 8 | 2008-12-11T18:53:30Z | [
"python",
"lambda"
] | There must be an easy way to do this, but somehow I can wrap my head around it. The best way I can describe what I want is a lambda function for a class. I have a library that expects as an argument an uninstantiated version of a class to work with. It then instantiates the class itself to work on. The problem is that I'd like to be able to dynamically create versions of the class, to pass to the library, but I can't figure out how to do it since the library expects an uninstantiated version. The code below describes the problem:
```
class Double:
def run(self,x):
return x*2
class Triple:
def run(self,x):
return x*3
class Multiply:
def __init__(self,mult):
self.mult = mult
def run(self,x):
return x*self.mult
class Library:
def __init__(self,c):
self.c = c()
def Op(self,val):
return self.c.run(val)
op1 = Double
op2 = Triple
#op3 = Multiply(5)
lib1 = Library(op1)
lib2 = Library(op2)
#lib3 = Library(op3)
print lib1.Op(2)
print lib2.Op(2)
#print lib3.Op(2)
```
I can't use the generic Multiply class, because I must instantiate it first which breaks the library "AttributeError: Multiply instance has no **call** method". Without changing the Library class, is there a way I can do this? | There's no need for lambda at all. lambda is just syntatic sugar to define a function and use it at the same time. Just like any lambda call can be replaced with an explicit def, we can solve your problem by creating a real class that meets your needs and returning it.
```
class Double:
def run(self,x):
return x*2
class Triple:
def run(self,x):
return x*3
def createMultiplier(n):
class Multiply:
def run(self,x):
return x*n
return Multiply
class Library:
def __init__(self,c):
self.c = c()
def Op(self,val):
return self.c.run(val)
op1 = Double
op2 = Triple
op3 = createMultiplier(5)
lib1 = Library(op1)
lib2 = Library(op2)
lib3 = Library(op3)
print lib1.Op(2)
print lib2.Op(2)
print lib3.Op(2)
``` |
What GUI toolkit looks best for a native LAF for Python in Windows and Linux? | 360,602 | 4 | 2008-12-11T19:38:02Z | 361,672 | 8 | 2008-12-12T01:48:15Z | [
"python",
"windows",
"linux",
"native",
"gui-toolkit"
] | I need to decide on a GUI/Widget toolkit to use with Python for a new project. The target platforms will be Linux with KDE and Windows XP (and probably Vista). What Python GUI toolkit looks best and consistent with the native look and feel of the run time platform?
If possible, cite strengths and weaknesses of the suggested toolkit.
Thank you,
Luis | Python binding of Wx is very strong since at least one of the core developer is a python guy itself. WxWdgets is robust, time proven stable, mature, but also bit more than just GUI. Even is a lot is left out in WxPython - because Python itself offers that already - you might find that extra convenient for your project. Wx is the fastest especially on Win, because it lets render the OS and yes WxLicense is de facto LGPL. With XRC you have also a way like Glade to click you to a UI that you can reuse by different projects and languages. What is one major reason for me to use Wx is the fast and helping mailing list, never seen a flamewar, you get even often answers from core developers there, like the notorious vadim zeitlin++. The only thing con to Wx is the API that once grew out of MS MFC and has still its darker(unelegant) corners, but with every version you have some improvements on that as well.
QT done some nice stuff, especially warping the language but under python that don't count. They invented also a lot of extra widgets. In wx you have also combined, more complex widgets like e.g. for config dialog, but that goes not that far as in QT.
And you could of course use GTK. almost no difference under linux to Wx but a bit alien and slower under win. but also free. |
Implementing a "[command] [action] [parameter]" style command-line interfaces? | 362,426 | 9 | 2008-12-12T10:35:03Z | 362,700 | 9 | 2008-12-12T13:17:11Z | [
"python",
"command-line",
"user-interface"
] | What is the "cleanest" way to implement an command-line UI, similar to git's, for example:
```
git push origin/master
git remote add origin git://example.com master
```
Ideally also allowing the more flexible parsing, for example,
```
jump_to_folder app theappname v2
jump_to_folder app theappname source
jump_to_folder app theappname source v2
jump_to_folder app theappname build v1
jump_to_folder app theappname build 1
jump_to_folder app theappname v2 build
```
`jump_to_folder` is the scripts name, `app` is the command, `theappname` is a "fixed-location" parameter, "build" and "v2" etc are arguments (For example, possible arguments would be any number/any number prefixed with a v, or build/source/tmp/config)
I could just manually parse the arguments with a series of `if`/`else`/`elifs`, but there must be a more elegant way to do this?
As an entirely theoretically example, I could describe the UI schema..
```
app:
fixed: application_name
optional params:
arg subsection:
"build"
"source"
"tmp"
"config"
arg version:
integer
"v" + integer
```
Then parse the supplied arguments though the above schema, and get a dictionary:
```
>>> print schema.parse(["app", "theappname", "v1", "source"])
{
"application_name": "theappname",
"params":{
"subsection": "source",
"version":"v1"
}
}
```
Does such a system exist? If not, how would I go about implementing something along these lines? | The `cmd` module would probably work well for this.
Example:
```
import cmd
class Calc(cmd.Cmd):
def do_add(self, arg):
print sum(map(int, arg.split()))
if __name__ == '__main__':
Calc().cmdloop()
```
Run it:
```
$python calc.py
(Cmd) add 4 5
9
(Cmd) help
Undocumented commands:
======================
add help
(Cmd)
```
See the [Python docs](http://docs.python.org/library/cmd.html) or [PyMOTW site](http://blog.doughellmann.com/2008/05/pymotw-cmd.html) for more info. |
Implementing a "[command] [action] [parameter]" style command-line interfaces? | 362,426 | 9 | 2008-12-12T10:35:03Z | 10,913,734 | 7 | 2012-06-06T11:56:29Z | [
"python",
"command-line",
"user-interface"
] | What is the "cleanest" way to implement an command-line UI, similar to git's, for example:
```
git push origin/master
git remote add origin git://example.com master
```
Ideally also allowing the more flexible parsing, for example,
```
jump_to_folder app theappname v2
jump_to_folder app theappname source
jump_to_folder app theappname source v2
jump_to_folder app theappname build v1
jump_to_folder app theappname build 1
jump_to_folder app theappname v2 build
```
`jump_to_folder` is the scripts name, `app` is the command, `theappname` is a "fixed-location" parameter, "build" and "v2" etc are arguments (For example, possible arguments would be any number/any number prefixed with a v, or build/source/tmp/config)
I could just manually parse the arguments with a series of `if`/`else`/`elifs`, but there must be a more elegant way to do this?
As an entirely theoretically example, I could describe the UI schema..
```
app:
fixed: application_name
optional params:
arg subsection:
"build"
"source"
"tmp"
"config"
arg version:
integer
"v" + integer
```
Then parse the supplied arguments though the above schema, and get a dictionary:
```
>>> print schema.parse(["app", "theappname", "v1", "source"])
{
"application_name": "theappname",
"params":{
"subsection": "source",
"version":"v1"
}
}
```
Does such a system exist? If not, how would I go about implementing something along these lines? | [argparse](http://docs.python.org/library/argparse.html#sub-commands) is perfect for this, specifically ["sub-commands"](http://docs.python.org/library/argparse.html#sub-commands) and positional args
```
import argparse
def main():
arger = argparse.ArgumentParser()
# Arguments for top-level, e.g "subcmds.py -v"
arger.add_argument("-v", "--verbose", action="count", default=0)
subparsers = arger.add_subparsers(dest="command")
# Make parser for "subcmds.py info ..."
info_parser = subparsers.add_parser("info")
info_parser.add_argument("-m", "--moo", dest="moo")
# Make parser for "subcmds.py create ..."
create_parser = subparsers.add_parser("create")
create_parser.add_argument("name")
create_parser.add_argument("additional", nargs="*")
# Parse
opts = arger.parse_args()
# Print option object for debug
print opts
if opts.command == "info":
print "Info command"
print "--moo was %s" % opts.moo
elif opts.command == "create":
print "Creating %s" % opts.name
print "Additional: %s" % opts.additional
else:
# argparse will error on unexpected commands, but
# in case we mistype one of the elif statements...
raise ValueError("Unhandled command %s" % opts.command)
if __name__ == '__main__':
main()
```
This can be used like so:
```
$ python subcmds.py create myapp v1 blah
Namespace(additional=['v1', 'blah'], command='create', name='myapp', verbose=0)
Creating myapp
Additional: ['v1', 'blah']
$ python subcmds.py info --moo
usage: subcmds.py info [-h] [-m MOO]
subcmds.py info: error: argument -m/--moo: expected one argument
$ python subcmds.py info --moo 1
Namespace(command='info', moo='1', verbose=0)
Info command
--moo was 1
``` |
django is very slow on my machine | 362,808 | 12 | 2008-12-12T13:53:04Z | 368,583 | 17 | 2008-12-15T14:46:40Z | [
"python",
"django",
"dns"
] | I have a fresh install of django 1.0 and a simple page served from it takes 5 secs to load. On my colleague's computer it takes almost no time.
I start the server using
```
python manage.py testserver
```
I can see each GET request (PNGs and style sheets) take about half a second.
Another weird thing, which I think is related, is that the functional tests for the app run much slower on my machine with MySQL (on order of 100 times slower than on my colleague's machine). When I set the app to use sqlite, they run pretty quickly. I would like to exclaim that sqlite doesn't much change the time it takes to load a page, but it does speed up server startup.
It looks like IO problem, but I don't see general performance problems on my machine, apart from django at least.
Django runs on python2.4, I'm running Vista. I have also checked python2.5.
Thanks ΤΖΩΤΖΙΟΥ, it must totally be a DNS problem, because the page loads up quickly as soon as
instead of <http://localhost:8000/app> I go to <http://127.0.0.1:8000/app>.
But what could it be caused by? My host file has only two entries:
```
127.0.0.1 localhost
::1 localhost
``` | Firefox has a problem browsing to localhost on some Windows machines. You can solve it by switching off ipv6, which isn't really recommended. Using 127.0.0.1 directly is another way round the problem. |
Python idiom to return first item or None | 363,944 | 110 | 2008-12-12T19:56:02Z | 363,955 | 35 | 2008-12-12T20:00:03Z | [
"python",
"idioms",
"python-2.4"
] | I'm sure there's a simpler way of doing this that's just not occurring to me.
I'm calling a bunch of methods that return a list. The list may be empty. If the list is non-empty, I want to return the first item; otherwise, I want to return None. This code works:
```
my_list = get_list()
if len(my_list) > 0: return my_list[0]
return None
```
It seems to me that there should be a simple one-line idiom for doing this, but for the life of me I can't think of it. Is there?
**Edit:**
The reason that I'm looking for a one-line expression here is not that I like incredibly terse code, but because I'm having to write a lot of code like this:
```
x = get_first_list()
if x:
# do something with x[0]
# inevitably forget the [0] part, and have a bug to fix
y = get_second_list()
if y:
# do something with y[0]
# inevitably forget the [0] part AGAIN, and have another bug to fix
```
What I'd like to be doing can certainly be accomplished with a function (and probably will be):
```
def first_item(list_or_none):
if list_or_none: return list_or_none[0]
x = first_item(get_first_list())
if x:
# do something with x
y = first_item(get_second_list())
if y:
# do something with y
```
I posted the question because I'm frequently surprised by what simple expressions in Python can do, and I thought that writing a function was a silly thing to do if there was a simple expression could do the trick. But seeing these answers, it seems like a function *is* the simple solution. | ```
(get_list() or [None])[0]
```
That should work.
BTW I didn't use the variable `list`, because that overwrites the builtin `list()` function.
Edit: I had a slightly simpler, but wrong version here earlier. |
Python idiom to return first item or None | 363,944 | 110 | 2008-12-12T19:56:02Z | 363,995 | 134 | 2008-12-12T20:12:27Z | [
"python",
"idioms",
"python-2.4"
] | I'm sure there's a simpler way of doing this that's just not occurring to me.
I'm calling a bunch of methods that return a list. The list may be empty. If the list is non-empty, I want to return the first item; otherwise, I want to return None. This code works:
```
my_list = get_list()
if len(my_list) > 0: return my_list[0]
return None
```
It seems to me that there should be a simple one-line idiom for doing this, but for the life of me I can't think of it. Is there?
**Edit:**
The reason that I'm looking for a one-line expression here is not that I like incredibly terse code, but because I'm having to write a lot of code like this:
```
x = get_first_list()
if x:
# do something with x[0]
# inevitably forget the [0] part, and have a bug to fix
y = get_second_list()
if y:
# do something with y[0]
# inevitably forget the [0] part AGAIN, and have another bug to fix
```
What I'd like to be doing can certainly be accomplished with a function (and probably will be):
```
def first_item(list_or_none):
if list_or_none: return list_or_none[0]
x = first_item(get_first_list())
if x:
# do something with x
y = first_item(get_second_list())
if y:
# do something with y
```
I posted the question because I'm frequently surprised by what simple expressions in Python can do, and I thought that writing a function was a silly thing to do if there was a simple expression could do the trick. But seeing these answers, it seems like a function *is* the simple solution. | The best way is this:
```
a = get_list()
return a[0] if a else None
```
You could also do it in one line, but it's much harder for the programmer to read:
```
return (get_list()[:1] or [None])[0]
``` |
Python idiom to return first item or None | 363,944 | 110 | 2008-12-12T19:56:02Z | 364,405 | 9 | 2008-12-12T22:41:48Z | [
"python",
"idioms",
"python-2.4"
] | I'm sure there's a simpler way of doing this that's just not occurring to me.
I'm calling a bunch of methods that return a list. The list may be empty. If the list is non-empty, I want to return the first item; otherwise, I want to return None. This code works:
```
my_list = get_list()
if len(my_list) > 0: return my_list[0]
return None
```
It seems to me that there should be a simple one-line idiom for doing this, but for the life of me I can't think of it. Is there?
**Edit:**
The reason that I'm looking for a one-line expression here is not that I like incredibly terse code, but because I'm having to write a lot of code like this:
```
x = get_first_list()
if x:
# do something with x[0]
# inevitably forget the [0] part, and have a bug to fix
y = get_second_list()
if y:
# do something with y[0]
# inevitably forget the [0] part AGAIN, and have another bug to fix
```
What I'd like to be doing can certainly be accomplished with a function (and probably will be):
```
def first_item(list_or_none):
if list_or_none: return list_or_none[0]
x = first_item(get_first_list())
if x:
# do something with x
y = first_item(get_second_list())
if y:
# do something with y
```
I posted the question because I'm frequently surprised by what simple expressions in Python can do, and I thought that writing a function was a silly thing to do if there was a simple expression could do the trick. But seeing these answers, it seems like a function *is* the simple solution. | The OP's solution is nearly there, there are just a few things to make it more Pythonic.
For one, there's no need to get the length of the list. Empty lists in Python evaluate to False in an if check. Just simply say
```
if list:
```
Additionally, it's a very Bad Idea to assign to variables that shadow builtin names. "list" is not actually a reserved word, but it is the name of a builtin type in Python, and rebinding it hides that builtin.
So let's change that to
```
some_list = get_list()
if some_list:
```
A really important point that a lot of solutions here miss is that **all Python functions/methods return None by default**. Try the following below.
```
def does_nothing():
pass
foo = does_nothing()
print foo
```
Unless you need to return None to terminate a function early, it's unnecessary to explicitly return None. Quite succinctly, just return the first entry, should it exist.
```
some_list = get_list()
if some_list:
    return some_list[0]
```
And finally, perhaps this was implied, but just to be explicit (because [explicit is better than implicit](http://www.python.org/dev/peps/pep-0020/)), you should not have your function get the list from another function; just pass it in as a parameter. So, the final result would be
```
def get_first_item(some_list):
if some_list:
    return some_list[0]
my_list = get_list()
first_item = get_first_item(my_list)
```
As I said, the OP was nearly there, and just a few touches give it the Python flavor you're looking for. |
Python idiom to return first item or None | 363,944 | 110 | 2008-12-12T19:56:02Z | 365,934 | 32 | 2008-12-13T23:31:55Z | [
"python",
"idioms",
"python-2.4"
] | I'm sure there's a simpler way of doing this that's just not occurring to me.
I'm calling a bunch of methods that return a list. The list may be empty. If the list is non-empty, I want to return the first item; otherwise, I want to return None. This code works:
```
my_list = get_list()
if len(my_list) > 0: return my_list[0]
return None
```
It seems to me that there should be a simple one-line idiom for doing this, but for the life of me I can't think of it. Is there?
**Edit:**
The reason that I'm looking for a one-line expression here is not that I like incredibly terse code, but because I'm having to write a lot of code like this:
```
x = get_first_list()
if x:
# do something with x[0]
# inevitably forget the [0] part, and have a bug to fix
y = get_second_list()
if y:
# do something with y[0]
# inevitably forget the [0] part AGAIN, and have another bug to fix
```
What I'd like to be doing can certainly be accomplished with a function (and probably will be):
```
def first_item(list_or_none):
if list_or_none: return list_or_none[0]
x = first_item(get_first_list())
if x:
# do something with x
y = first_item(get_second_list())
if y:
# do something with y
```
I posted the question because I'm frequently surprised by what simple expressions in Python can do, and I thought that writing a function was a silly thing to do if there was a simple expression could do the trick. But seeing these answers, it seems like a function *is* the simple solution. | ```
def get_first(iterable, default=None):
if iterable:
for item in iterable:
return item
return default
```
Example:
```
x = get_first(get_first_list())
if x:
...
y = get_first(get_second_list())
if y:
...
```
Another option is to inline the above function:
```
for x in get_first_list() or []:
# process x
break # process at most one item
for y in get_second_list() or []:
# process y
break
```
To avoid `break` you could write:
```
for x in yield_first(get_first_list()):
x # process x
for y in yield_first(get_second_list()):
y # process y
```
Where:
```
def yield_first(iterable):
for item in iterable or []:
yield item
return
``` |
Python idiom to return first item or None | 363,944 | 110 | 2008-12-12T19:56:02Z | 25,398,201 | 20 | 2014-08-20T06:36:29Z | [
"python",
"idioms",
"python-2.4"
] | I'm sure there's a simpler way of doing this that's just not occurring to me.
I'm calling a bunch of methods that return a list. The list may be empty. If the list is non-empty, I want to return the first item; otherwise, I want to return None. This code works:
```
my_list = get_list()
if len(my_list) > 0: return my_list[0]
return None
```
It seems to me that there should be a simple one-line idiom for doing this, but for the life of me I can't think of it. Is there?
**Edit:**
The reason that I'm looking for a one-line expression here is not that I like incredibly terse code, but because I'm having to write a lot of code like this:
```
x = get_first_list()
if x:
# do something with x[0]
# inevitably forget the [0] part, and have a bug to fix
y = get_second_list()
if y:
# do something with y[0]
# inevitably forget the [0] part AGAIN, and have another bug to fix
```
What I'd like to be doing can certainly be accomplished with a function (and probably will be):
```
def first_item(list_or_none):
if list_or_none: return list_or_none[0]
x = first_item(get_first_list())
if x:
# do something with x
y = first_item(get_second_list())
if y:
# do something with y
```
I posted the question because I'm frequently surprised by what simple expressions in Python can do, and I thought that writing a function was a silly thing to do if there was a simple expression that could do the trick. But seeing these answers, it seems like a function *is* the simple solution. | The most Pythonic way is to use `next()` on an iterator, since a list is *iterable*, just like what @J.F.Sebastian put in the comment on Dec 13, 2011.
`next(iter(the_list), None)` returns `None` if `the_list` is empty; see [next() Python 2.6+](https://docs.python.org/2/library/functions.html#next)
or, if you know for sure that `the_list` is not empty:
`iter(the_list).next()`; see [iterator.next() Python 2.2+](https://docs.python.org/2/library/stdtypes.html#iterator.next)
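A minimal demonstration of the idiom (the wrapper name here is my own, just for illustration):

```python
def first_or_none(seq):
    # next() takes a default value that is returned when the iterator is exhausted
    return next(iter(seq), None)

print(first_or_none([3, 4, 5]))  # 3
print(first_or_none([]))         # None
```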
MVC and django fundamentals | 364,015 | 8 | 2008-12-12T20:22:29Z | 364,074 | 15 | 2008-12-12T20:37:36Z | [
"python",
"django",
"django-models",
"django-templates"
] | Pretty new to this scene and trying to find some documentation to adopt best practices. We're building a fairly large content site which will consist of various media catalogs and I'm trying to find some comparable data / architectural models so that we can get a better idea of the approach we should use using a framework we've never made use of before. Any insight / help would be greatly appreciated! | "data / architectural models so that we can get a better idea of the approach we should use using a framework we've never made use of before"
Django imposes best practices on you. You don't have a lot of choices and can't make a lot of mistakes.
MVC (while a noble aspiration) is implemented as follows:
* Data is defined in "models.py" files using the Django ORM models.
* urls.py file maps URL to view function. Pick your URL's wisely.
* View function does all processing, making use of models and methods in models
* Presentation (via HTML templates) invoked by View function. Essentially no processing can be done in presentation, just lightweight iteration and decision-making
The model is defined for you. Just stick to what Django does naturally and you'll be happy.
Architecturally, you usually have a stack like this.
* Apache does two things.
+ serves static content directly and immediately
+ hands dynamic URL to Django (via mod\_python, mod\_wsgi or mod\_fastcgi). Django apps map URL to view functions (which access to database (via ORM/model) and display via templates.
* Database used by Django view functions.
The architecture is well-defined for you. Just stick to what Django does naturally and you'll be happy.
Feel free to read the [Django documentation](http://docs.djangoproject.com/en/dev/). It's excellent; perhaps the best there is. |
Markup-based GUI for python | 364,327 | 7 | 2008-12-12T22:10:08Z | 768,465 | 7 | 2009-04-20T14:18:21Z | [
"python",
"user-interface",
"markup"
] | I want to get myself into programming some serious GUI based applications, but when I look at things like Swing/SWT from Java, I can't help but HATE programming a GUI interface by creating "widget" objects and populating them and calling methods on them.
I think GUI design should be done in a separate text-based file in some markup format, which is read and rendered (e.g. HTML), so that the design of the interface is not tightly coupled with the rest of the code.
I've seen [HTMLayout](http://www.terrainformatica.com/htmlayout/) and I love the idea, but so far it seems be only in C++.
I'm looking for a python library (or even a WIP project) for doing markup-based gui.
**UPDATE**
The reason I can't accept Qt's XML is the same reason I hate the programmatic approach: you're assembling each widget separately and specifying each of its properties on a separate line. It doesn't provide any advantage over doing it the programmatic way. | You can try Mozilla's XUL. It supports Python via XPCOM.
See this project: [pyxpcomext](http://pyxpcomext.mozdev.org/no_wrap/tutorials/pyxulrunner/python_xulrunner_about.html)
XUL isn't compiled; it is packaged and loaded at runtime. Firefox and many other great applications use it, but most of them use JavaScript for scripting instead of Python. There are one or two that use Python, though.
In Python, how do I iterate over a dictionary in sorted order? | 364,519 | 136 | 2008-12-12T23:57:05Z | 364,521 | 63 | 2008-12-12T23:57:53Z | [
"python",
"dictionary"
] | There's an existing function that ends in:
```
return dict.iteritems()
```
that returns an unsorted iterator for a given dictionary. I would like to return an iterator that goes through the items in sorted order. How do I do that? | Use the [`sorted()`](http://pydoc.org/2.5.1/__builtin__.html#-sorted) function:
```
return sorted(dict.iteritems())
```
If you want an actual iterator over the sorted results, since `sorted()` returns a list, use:
```
return iter(sorted(dict.iteritems()))
``` |
In Python, how do I iterate over a dictionary in sorted order? | 364,519 | 136 | 2008-12-12T23:57:05Z | 364,524 | 25 | 2008-12-13T00:00:20Z | [
"python",
"dictionary"
] | There's an existing function that ends in:
```
return dict.iteritems()
```
that returns an unsorted iterator for a given dictionary. I would like to return an iterator that goes through the items in sorted order. How do I do that? | Greg's answer is right. Note that in Python 3.0 you'll have to do
```
sorted(dict.items())
```
as `iteritems` will be gone. |
In Python, how do I iterate over a dictionary in sorted order? | 364,519 | 136 | 2008-12-12T23:57:05Z | 364,588 | 35 | 2008-12-13T00:44:32Z | [
"python",
"dictionary"
] | There's an existing function that ends in:
```
return dict.iteritems()
```
that returns an unsorted iterator for a given dictionary. I would like to return an iterator that goes through the items in sorted order. How do I do that? | A dict's keys are stored in a hashtable so that is their 'natural order', i.e. psuedo-random. Any other ordering is a concept of the consumer of the dict.
sorted() always returns a list, not a dict. If you pass it a dict.items() (which produces a list of tuples), it will return a list of tuples [(k1,v1), (k2,v2), ...] which can be used in a loop in a way very much like a dict, but *it is not in anyway a dict*!
```
foo = {
'a': 1,
'b': 2,
'c': 3,
}
print foo
>>> {'a': 1, 'c': 3, 'b': 2}
print foo.items()
>>> [('a', 1), ('c', 3), ('b', 2)]
print sorted(foo.items())
>>> [('a', 1), ('b', 2), ('c', 3)]
```
The following feels like a dict in a loop, but it's not, it's a list of tuples being unpacked into k,v:
```
for k,v in sorted(foo.items()):
print k, v
```
Roughly equivalent to:
```
for k in sorted(foo.keys()):
print k, foo[k]
``` |
In Python, how do I iterate over a dictionary in sorted order? | 364,519 | 136 | 2008-12-12T23:57:05Z | 364,599 | 100 | 2008-12-13T00:49:38Z | [
"python",
"dictionary"
] | There's an existing function that ends in:
```
return dict.iteritems()
```
that returns an unsorted iterator for a given dictionary. I would like to return an iterator that goes through the items in sorted order. How do I do that? | Haven't tested this very extensively, but works in Python 2.5.2.
```
>>> d = {"x":2, "h":15, "a":2222}
>>> it = iter(sorted(d.iteritems()))
>>> it.next()
('a', 2222)
>>> it.next()
('h', 15)
>>> it.next()
('x', 2)
>>>
``` |
How to get item's position in a list? | 364,621 | 103 | 2008-12-13T01:20:32Z | 364,638 | 8 | 2008-12-13T01:33:30Z | [
"python",
"list"
] | I am iterating over a list and I want to print out the index of the item if it meets a certain condition. How would I do this?
Example:
```
testlist = [1,2,3,5,3,1,2,1,6]
for item in testlist:
if item == 1:
print position
``` | ```
for i in xrange(len(testlist)):
if testlist[i] == 1:
print i
```
xrange instead of range as requested (see comments). |
How to get item's position in a list? | 364,621 | 103 | 2008-12-13T01:20:32Z | 364,641 | 36 | 2008-12-13T01:37:46Z | [
"python",
"list"
] | I am iterating over a list and I want to print out the index of the item if it meets a certain condition. How would I do this?
Example:
```
testlist = [1,2,3,5,3,1,2,1,6]
for item in testlist:
if item == 1:
print position
``` | Use enumerate:
```
testlist = [1,2,3,5,3,1,2,1,6]
for position, item in enumerate(testlist):
if item == 1:
print position
``` |
How to get item's position in a list? | 364,621 | 103 | 2008-12-13T01:20:32Z | 364,769 | 168 | 2008-12-13T03:28:26Z | [
"python",
"list"
] | I am iterating over a list and I want to print out the index of the item if it meets a certain condition. How would I do this?
Example:
```
testlist = [1,2,3,5,3,1,2,1,6]
for item in testlist:
if item == 1:
print position
``` | Hmmm. There was an answer with a list comprehension here, but it's disappeared.
Here:
```
[i for i,x in enumerate(testlist) if x == 1]
```
Example:
```
>>> testlist
[1, 2, 3, 5, 3, 1, 2, 1, 6]
>>> [i for i,x in enumerate(testlist) if x == 1]
[0, 5, 7]
```
Update:
Okay, you want a generator expression, we'll have a generator expression. Here's the list comprehension again, in a for loop:
```
>>> for i in [i for i,x in enumerate(testlist) if x == 1]:
... print i
...
0
5
7
```
Now we'll construct a generator...
```
>>> (i for i,x in enumerate(testlist) if x == 1)
<generator object at 0x6b508>
>>> for i in (i for i,x in enumerate(testlist) if x == 1):
... print i
...
0
5
7
```
and niftily enough, we can assign that to a variable, and use it from there...
```
>>> gen = (i for i,x in enumerate(testlist) if x == 1)
>>> for i in gen: print i
...
0
5
7
```
And to think I used to write FORTRAN. |
How to get item's position in a list? | 364,621 | 103 | 2008-12-13T01:20:32Z | 10,266,829 | 82 | 2012-04-22T09:35:15Z | [
"python",
"list"
] | I am iterating over a list and I want to print out the index of the item if it meets a certain condition. How would I do this?
Example:
```
testlist = [1,2,3,5,3,1,2,1,6]
for item in testlist:
if item == 1:
print position
``` | What about the following?
```
print testlist.index(element)
```
If you are not sure whether the element to look for is actually in the list, you can add a preliminary check, like
```
if element in testlist:
print testlist.index(element)
```
or
```
print(testlist.index(element) if element in testlist else None)
``` |
generator comprehension | 364,802 | 37 | 2008-12-13T03:55:44Z | 364,816 | 10 | 2008-12-13T04:08:34Z | [
"python"
] | What does generator comprehension do? How does it work? I couldn't find a tutorial about it. | A generator comprehension is the lazy version of a list comprehension.
It is just like a list comprehension, except that it returns an iterator instead of a list, i.e. an object with a `next()` method that will yield the next element.
If you are not familiar with list comprehensions see [here](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) and for generators see [here](http://docs.python.org/tutorial/classes.html#generators). |
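For instance, a quick sketch of the difference between the two:

```python
squares_list = [x * x for x in range(5)]  # all values computed immediately
squares_gen = (x * x for x in range(5))   # values computed lazily, on demand

print(squares_list)       # [0, 1, 4, 9, 16]
print(next(squares_gen))  # 0
print(list(squares_gen))  # [1, 4, 9, 16] -- the remaining items
```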
generator comprehension | 364,802 | 37 | 2008-12-13T03:55:44Z | 364,824 | 76 | 2008-12-13T04:14:04Z | [
"python"
] | What does generator comprehension do? How does it work? I couldn't find a tutorial about it. | Do you understand list comprehensions? If so, a generator expression is like a list comprehension, but instead of finding all the items you're interested and packing them into list, it waits, and yields each item out of the expression, one by one.
```
>>> my_list = [1, 3, 5, 9, 2, 6]
>>> filtered_list = [item for item in my_list if item > 3]
>>> print filtered_list
[5, 9, 6]
>>> len(filtered_list)
3
>>> # compare to generator expression
...
>>> filtered_gen = (item for item in my_list if item > 3)
>>> print filtered_gen # notice it's a generator object
<generator object at 0xb7d5e02c>
>>> len(filtered_gen) # So technically, it has no length
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'generator' has no len()
>>> # We extract each item out individually. We'll do it manually first.
...
>>> filtered_gen.next()
5
>>> filtered_gen.next()
9
>>> filtered_gen.next()
6
>>> filtered_gen.next() # Should be all out of items and give an error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> # Yup, the generator is spent. No values for you!
...
>>> # Let's prove it gives the same results as our list comprehension
...
>>> filtered_gen = (item for item in my_list if item > 3)
>>> gen_to_list = list(filtered_gen)
>>> print gen_to_list
[5, 9, 6]
>>> filtered_list == gen_to_list
True
>>>
```
Because a generator expression only has to yield one item at a time, it can lead to big savings in memory usage. Generator expressions make the most sense in scenarios where you need to take one item at a time, do a lot of calculations based on that item, and then move on to the next item. If you need more than one value, you can also use a generator expression and grab a few at a time. If you need all the values before your program proceeds, use a list comprehension instead. |
Can you change a field label in the Django Admin application? | 365,082 | 19 | 2008-12-13T10:30:41Z | 365,236 | 37 | 2008-12-13T13:59:40Z | [
"python",
"django",
"django-admin",
"django-forms"
] | As the title suggests. I want to be able to change the label of a single field in the admin application. I'm aware of the Form.field attribute, but how do I get my Model or ModelAdmin to pass along that information? | the [verbose name](http://docs.djangoproject.com/en/dev/topics/db/models/#verbose-field-names) of the field is the (optional) first parameter at field construction. |
Can you change a field label in the Django Admin application? | 365,082 | 19 | 2008-12-13T10:30:41Z | 14,743,532 | 12 | 2013-02-07T04:25:28Z | [
"python",
"django",
"django-admin",
"django-forms"
] | As the title suggests. I want to be able to change the label of a single field in the admin application. I'm aware of the Form.field attribute, but how do I get my Model or ModelAdmin to pass along that information? | If your field is a property (a method) then you should use short\_description:
```
class Person(models.Model):
...
def address_report(self, instance):
...
# short_description functions like a model field's verbose_name
address_report.short_description = "Address"
``` |
Cross platform keylogger | 365,110 | 9 | 2008-12-13T11:10:29Z | 365,225 | 10 | 2008-12-13T13:46:12Z | [
"python",
"cross-platform",
"time-management",
"keylogger"
] | I'm looking for ways to watch mouse and keyboard events on Windows, Linux and Mac from Python.
My application is a time tracker. I'm not looking into the event, I just record the time when it happens. If there are no events for a certain time, say 10 minutes, I assume that the user has left and stop the current project.
When the user returns (events come in again), I wait a moment (so this doesn't get triggered by the cleaning crew or your pets or an earthquake). If the events persist over a longer period of time, I assume that the user has returned and I pop up a small, inactive window where she can choose to add the time interval to "break", the current project (meeting, etc) or a different project.
I've solved the keylogger for Windows using the [pyHook](http://sourceforge.net/projects/pyhook/).
On Linux, I have found a solution but I don't like it: I can watch all device nodes in /dev/input and update a timestamp somewhere in /var or /tmp every time I see an event. There are two drawbacks: 1. I can't tell whether the event is from the user who is running the time tracker and 2. this little program needs to be run as root (not good).
On Mac, I have no idea, yet.
Questions:
1. Is there a better way to know whether the user is creating events than watching the event devices on Linux?
2. Any pointers how to do that on a Mac? | There are couple of open source apps that might give you some pointers:
* [PyKeylogger](http://sourceforge.net/p/pykeylogger/wiki/Main_Page/) is python keylogger for windows and linux
* [logKext](http://code.google.com/p/logkext/) is a c++ keylogger for mac |
Cross platform keylogger | 365,110 | 9 | 2008-12-13T11:10:29Z | 2,074,117 | 7 | 2010-01-15T19:16:24Z | [
"python",
"cross-platform",
"time-management",
"keylogger"
] | I'm looking for ways to watch mouse and keyboard events on Windows, Linux and Mac from Python.
My application is a time tracker. I'm not looking into the event, I just record the time when it happens. If there are no events for a certain time, say 10 minutes, I assume that the user has left and stop the current project.
When the user returns (events come in again), I wait a moment (so this doesn't get triggered by the cleaning crew or your pets or an earthquake). If the events persist over a longer period of time, I assume that the user has returned and I pop up a small, inactive window where she can choose to add the time interval to "break", the current project (meeting, etc) or a different project.
I've solved the keylogger for Windows using the [pyHook](http://sourceforge.net/projects/pyhook/).
On Linux, I have found a solution but I don't like it: I can watch all device nodes in /dev/input and update a timestamp somewhere in /var or /tmp every time I see an event. There are two drawbacks: 1. I can't tell whether the event is from the user who is running the time tracker and 2. this little program needs to be run as root (not good).
On Mac, I have no idea, yet.
Questions:
1. Is there a better way to know whether the user is creating events than watching the event devices on Linux?
2. Any pointers how to do that on a Mac? | There's a great article on **Writing Linux Kernel Keyloggers**
<http://www.phrack.com/issues.html?issue=59&id=14#article>
If you are attempting to run a honeypot, then definitely give Sebek a try:
<https://projects.honeynet.org/sebek/>
> Sebek is a data capture tool designed
> to capture attacker's activities on a
> honeypot, without the attacker
> (hopefully) knowing it. It has two
> components. The first is a client that
> runs on the honeypots, its purpose is
> to capture all of the attackers
> activities (keystrokes, file uploads,
> passwords) then covertly send the data
> to the server. The second component is
> the server which collects the data
> from the honeypots. The server
> normally runs on the Honeywall
> gateway, but can also run
> independently. For more information on
> Sebek, please see
> <http://www.honeynet.org/tools/sebek>
But, if you'd rather follow the script kiddie route / not learn, then try out the following apps:
LINUX
<http://sourceforge.net/projects/lkl/>
WINDOWS
<http://www.rohos.com/kid-logger/>
<http://code.google.com/p/freelogger/>
---
ADVICE: You're better off writing your own for learning-and-profit purposes. |
What is the pythonic way to avoid default parameters that are empty lists? | 366,422 | 34 | 2008-12-14T11:23:54Z | 366,430 | 68 | 2008-12-14T11:27:30Z | [
"python"
] | Sometimes it seems natural to have a default parameter which is an empty list. Yet Python gives unexpected behavior in these situations.
If for example, I have a function:
```
def myFunc(working_list = []):
working_list.append("a")
print working_list
```
The first time it is called with the default will work, but calls after that will use a constantly updating list.
So, what is the pythonic way to get the behavior I desire (a fresh list on each call)? | ```
def myFunc(working_list=None):
if working_list is None:
working_list = []
working_list.append("a")
print working_list
```
is how I do it. |
What is the pythonic way to avoid default parameters that are empty lists? | 366,422 | 34 | 2008-12-14T11:23:54Z | 366,446 | 9 | 2008-12-14T11:43:57Z | [
"python"
] | Sometimes it seems natural to have a default parameter which is an empty list. Yet Python gives unexpected behavior in these situations.
If for example, I have a function:
```
def myFunc(working_list = []):
working_list.append("a")
print working_list
```
The first time it is called with the default will work, but calls after that will use a constantly updating list.
So, what is the pythonic way to get the behavior I desire (a fresh list on each call)? | Not that it matters in this case, but you can use object identity to test for None:
```
if working_list is None: working_list = []
```
You could also take advantage of how the boolean operator or is defined in python:
```
working_list = working_list or []
```
Though this will behave unexpectedly if the caller gives you an empty list (which counts as false) as working\_list and expects your function to modify the list he gave it. |
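A short sketch of that edge case (the function name is mine, just for illustration):

```python
def append_a(working_list=None):
    working_list = working_list or []  # replaces ANY falsy argument, including []
    working_list.append("a")
    return working_list

caller_list = []
result = append_a(caller_list)
print(result)       # ['a']
print(caller_list)  # [] -- the caller's empty list was silently left unmodified
```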
What is the pythonic way to avoid default parameters that are empty lists? | 366,422 | 34 | 2008-12-14T11:23:54Z | 2,021,717 | 7 | 2010-01-07T16:09:07Z | [
"python"
] | Sometimes it seems natural to have a default parameter which is an empty list. Yet Python gives unexpected behavior in these situations.
If for example, I have a function:
```
def myFunc(working_list = []):
working_list.append("a")
print working_list
```
The first time it is called with the default will work, but calls after that will use a constantly updating list.
So, what is the pythonic way to get the behavior I desire (a fresh list on each call)? | If the intent of the function is to *modify* the parameter passed as `working_list`, see HenryR's answer (=None, check for None inside).
But if you didn't intend to mutate the argument, and just used it as a starting point for a list, you can simply copy it:
```
def myFunc(starting_list = []):
starting_list = list(starting_list)
starting_list.append("a")
print starting_list
```
(or in this simple case just `print starting_list + ["a"]` but I guess that was just a toy example)
In general, mutating your arguments is bad style in Python. The only functions that are fully expected to mutate an object are methods of the object.
* If you do it from the C habit of "output arguments", that's completely unnecessary - you can always return multiple values as a tuple.
* If you do this to efficiently build a long list of results without building intermediate lists, consider writing it as a generator and using `result_list.extend(myFunc())` when you are calling it. This way your calling conventions remains very clean. |
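A rough sketch of that generator pattern (the names are illustrative):

```python
def collect_items():
    # yield results one at a time; no intermediate list is built
    yield "a"
    yield "b"

result_list = ["existing"]
result_list.extend(collect_items())
print(result_list)  # ['existing', 'a', 'b']
```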
How to limit execution time of a function call in Python | 366,682 | 37 | 2008-12-14T16:20:24Z | 366,763 | 29 | 2008-12-14T17:27:50Z | [
"python",
"multithreading"
] | There is a socket related function call in my code, that function is from another module thus out of my control, the problem is that it blocks for hours occasionally, which is totally unacceptable, How can I limit the function execution time from my code? I guess the solution must utilize another thread. | I'm not sure how cross-platform this might be, but using signals and alarm might be a good way of looking at this. With a little work you could make this completely generic as well and usable in any situation.
<http://docs.python.org/library/signal.html>
So your code is going to look something like this.
```
import signal
def signal_handler(signum, frame):
raise Exception("Timed out!")
signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(10) # Ten seconds
try:
long_function_call()
except Exception, msg:
print "Timed out!"
``` |
How to limit execution time of a function call in Python | 366,682 | 37 | 2008-12-14T16:20:24Z | 601,168 | 46 | 2009-03-02T03:14:25Z | [
"python",
"multithreading"
] | There is a socket related function call in my code, that function is from another module thus out of my control, the problem is that it blocks for hours occasionally, which is totally unacceptable, How can I limit the function execution time from my code? I guess the solution must utilize another thread. | An improvement on @rik.the.vik's answer would be to use the [`with` statement](http://www.python.org/dev/peps/pep-0343/) to give the timeout function some syntactic sugar:
```
from __future__ import with_statement # Required in 2.5
import signal
from contextlib import contextmanager
class TimeoutException(Exception): pass
@contextmanager
def time_limit(seconds):
def signal_handler(signum, frame):
raise TimeoutException, "Timed out!"
signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(seconds)
try:
yield
finally:
signal.alarm(0)
try:
with time_limit(10):
long_function_call()
except TimeoutException, msg:
print "Timed out!"
``` |
How to limit execution time of a function call in Python | 366,682 | 37 | 2008-12-14T16:20:24Z | 1,114,567 | 7 | 2009-07-11T20:30:32Z | [
"python",
"multithreading"
] | There is a socket related function call in my code, that function is from another module thus out of my control, the problem is that it blocks for hours occasionally, which is totally unacceptable, How can I limit the function execution time from my code? I guess the solution must utilize another thread. | Doing this from within a signal handler is dangerous: you might be inside an exception handler at the time the exception is raised, and leave things in a broken state. For example,
```
def function_with_enforced_timeout():
f = open_temporary_file()
try:
...
finally:
here()
unlink(f.filename)
```
If your exception is raised here(), the temporary file will never be deleted.
The solution here is for asynchronous exceptions to be postponed until the code is not inside exception-handling code (an except or finally block), but Python doesn't do that.
Note that this won't interrupt anything while executing native code; it'll only interrupt it when the function returns, so this may not help this particular case. (SIGALRM itself might interrupt the call that's blocking--but socket code typically simply retries after an EINTR.)
Doing this with threads is a better idea, since it's more portable than signals. Since you're starting a worker thread and blocking until it finishes, there are none of the usual concurrency worries. Unfortunately, there's no way to deliver an exception asynchronously to another thread in Python (other thread APIs can do this). It'll also have the same issue with sending an exception during an exception handler, and require the same fix. |
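A sketch of the thread-based approach described above (the helper name is mine; note that the worker thread is not actually killed on timeout, since Python offers no safe way to do that, it is merely abandoned):

```python
import threading

def run_with_timeout(func, args=(), timeout_secs=5.0):
    result = {}

    def worker():
        result["value"] = func(*args)

    t = threading.Thread(target=worker)
    t.daemon = True  # don't keep the process alive because of an abandoned worker
    t.start()
    t.join(timeout_secs)
    if t.is_alive():
        raise RuntimeError("function call timed out")
    return result.get("value")

print(run_with_timeout(lambda: 6 * 7))  # 42
```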
How to limit execution time of a function call in Python | 366,682 | 37 | 2008-12-14T16:20:24Z | 26,664,130 | 13 | 2014-10-30T22:05:02Z | [
"python",
"multithreading"
] | There is a socket related function call in my code, that function is from another module thus out of my control, the problem is that it blocks for hours occasionally, which is totally unacceptable, How can I limit the function execution time from my code? I guess the solution must utilize another thread. | Here's a Linux/OSX way to limit a function's running time. This is in case you don't want to use threads, and want your program to wait until the function ends, or the time limit expires.
```
from multiprocessing import Process
from time import sleep
def f(time):
sleep(time)
def run_with_limited_time(func, args, kwargs, time):
"""Runs a function with time limit
:param func: The function to run
:param args: The functions args, given as tuple
:param kwargs: The functions keywords, given as dict
:param time: The time limit in seconds
:return: True if the function ended successfully. False if it was terminated.
"""
p = Process(target=func, args=args, kwargs=kwargs)
p.start()
p.join(time)
if p.is_alive():
p.terminate()
return False
return True
if __name__ == '__main__':
print run_with_limited_time(f, (1.5, ), {}, 2.5) # True
print run_with_limited_time(f, (3.5, ), {}, 2.5) # False
``` |
Bitwise subtraction in Python | 366,706 | 9 | 2008-12-14T16:41:28Z | 366,735 | 9 | 2008-12-14T17:06:34Z | [
"python",
"low-level"
] | This is a follow-up to [my question yesterday](http://stackoverflow.com/questions/365522/what-is-the-best-way-to-add-two-numbers-without-using-the-operator):
CMS kindly provided this example of using bitwise operators to add two numbers in C:
```
#include<stdio.h>
int add(int x, int y) {
int a, b;
do {
a = x & y;
b = x ^ y;
x = a << 1;
y = b;
} while (a);
return b;
}
int main( void ){
printf( "6 + 3 = %d", add(6,3));
printf( "6 - 3 = %d", add(6,-3));
return 0;
}
```
It works great and I then ported it to Python as follows:
```
def add(x, y):
while True:
a = x & y
b = x ^ y
x = a << 1
y = b
if a == 0:
break
return b
print "6 + 3 = %d" % add(6,3)
print "6 - 3 = %d" % add(6,-3)
```
They both work for addition and the C program works for subtraction as well. However, the Python program enters an infinite loop for subtraction. I am trying to get to the bottom of this and have posted the program here for further experimentation: <http://codepad.org/pb8IuLnY>
Can anyone advise why there would be a difference between the way C handles this and the way CPython handles this? | As I pointed out in my response to CMS' answer yesterday, left-shifting a negative number is undefined behavior in C so this isn't even guaranteed to work in C (the problem is how to handle the signed bit, do you shift it like a value bit or is it not affected by a shift? The standards committee couldn't agree on a behavior so it was left undefined).
When this happens to work in C it relies on fixed bit-width integers so that the leftmost bit gets pushed off the end when you do a shift (it also requires the sign bit to be treated as a value bit for shifting purposes). All integer types in C are fixed-bit but Python numbers can be arbitrarily large. Left-shifting a number in Python just causes it to keep getting larger:
```
>>> 1 << 100
1267650600228229401496703205376L
```
You could try something like this:
```
x = (a << 1) & 0xffffffff
```
That limits the result to 32 bits. The problem is that the left-shift operator in Python doesn't shift the sign bit of a signed number (which is part of what is required to make this particular solution work). There might be a way to change the behavior of the shift operator, but I don't know how.
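For illustration, here is a sketch of how the masking idea can be applied to the whole Python port (this adaptation is mine, not part of the original answer):

```python
def add(x, y):
    MASK = 0xFFFFFFFF            # keep all intermediate values in 32 bits
    x &= MASK
    y &= MASK
    while y:
        carry = x & y
        x = (x ^ y) & MASK
        y = (carry << 1) & MASK  # the bit shifted past bit 31 falls off, as in C
    # reinterpret the 32-bit pattern as a signed value
    return x if x <= 0x7FFFFFFF else ~(x ^ MASK)

print(add(6, 3))   # 9
print(add(6, -3))  # 3
print(add(3, -6))  # -3
```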
How to organize python test in a way that I can run all tests in a single command? | 366,720 | 14 | 2008-12-14T16:52:50Z | 366,828 | 10 | 2008-12-14T18:41:31Z | [
"python",
"unit-testing",
"python-nose"
] | Currently my code is organized in the following tree structure:
```
src/
module1.py
module2.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
test_moduleA.py
test_moduleB.py
```
Where the `module*.py` files contain the source code and the `test_module*.py` files contain the `TestCase`s for the relevant module.
With the following commands I can run the tests contained in a single file, for example:
```
$ cd src
$ nosetests test_filesystem.py
..................
----------------------------------------------------------------------
Ran 18 tests in 0.390s
OK
```
How can I run all tests? I tried with `nosetests -m 'test_.*'` but it doesn't work.
```
$cd src
$ nosetests -m 'test_.*'
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
```
Thanks | Whether you seperate or mix tests and modules is probably a matter of taste, although I would strongly advocate for keeping them apart (setup reasons, code stats etc).
When you're using nosetests, make sure that all directories with tests are real packages:
```
src/
module1.py
module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
tests/
__init__.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
test_moduleA.py
test_moduleB.py
```
This way, you can just run `nosetests` in the toplevel directory and all tests will be found. You need to make sure that `src/` is on the `PYTHONPATH`, however, otherwise all the tests will fail due to missing imports. |
How to organize python test in a way that I can run all tests in a single command? | 366,720 | 14 | 2008-12-14T16:52:50Z | 373,150 | 7 | 2008-12-16T23:28:39Z | [
"python",
"unit-testing",
"python-nose"
] | Currently my code is organized in the following tree structure:
```
src/
module1.py
module2.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
test_moduleA.py
test_moduleB.py
```
Where the `module*.py` files contain the source code and the `test_module*.py` files contain the `TestCase`s for the relevant module.
With the following commands I can run the tests contained in a single file, for example:
```
$ cd src
$ nosetests test_filesystem.py
..................
----------------------------------------------------------------------
Ran 18 tests in 0.390s
OK
```
How can I run all tests? I tried with `nosetests -m 'test_.*'` but it doesn't work.
```
$cd src
$ nosetests -m 'test_.*'
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
```
Thanks | If they all begin with `test` then just `nosetest` should work. Nose automatically searches for any files beginning with 'test'. |
How to localize Content of a Django application | 366,838 | 9 | 2008-12-14T18:48:49Z | 367,698 | 10 | 2008-12-15T07:40:50Z | [
"python",
"django",
"localization",
"internationalization"
] | Hey, I am currently working on a Django app for my studies, and have come to the point of i18n. Localizing the site itself was very easy, but now I have to allow users to translate the dynamic content of the application.
Users can save "products" in the database and give them names and descriptions, but since the whole site should be localized, I must provide a way for users to translate these names and descriptions.
Is there a natural way in Django to do this? Or do I have to implement it as part of the application (by representing the translations in the data model)?
Thanks, Janosch | I would suggest checking out [django-multilingual](http://code.google.com/p/django-multilingual/). It is a third party app that lets you define translation fields on your models.
Of course, you still have to type in the actual translations, but they are stored transparently in the database (as opposed to in static PO files), which is what I believe you are asking about. |
No print output from child multiprocessing.Process unless the program crashes | 367,053 | 12 | 2008-12-14T22:07:56Z | 367,065 | 21 | 2008-12-14T22:18:22Z | [
"python",
"multithreading",
"io",
"multiprocessing"
] | I am having trouble with the Python multiprocessing module. I am using the `Process` class to spawn a new process in order to utilize my second core. This second process loads a bunch of data into RAM and then waits patiently instead of consuming.
I wanted to see what that process printed with the `print` command; however, I do not see anything that it prints. I only see what the parent process prints. Now this makes sense to me since they live in two different processes. The second process doesn't spawn its own shell/standard output window, nor is its output sent to the parent. Yet when this process crashes, it prints everything that my script told it to print, plus the stack trace and error.
I am wondering if there is a simple way to send the child process's print output to the first process, or have it spawn a shell/standard output so that I may debug it. I know I could create a `multiprocessing.Queue` dedicated to transmitting prints to the parent so that it may print these to standard output, but I do not feel like doing this if a simpler solution exists. | Have you tried flushing stdout?
```
import sys
print "foo"
sys.stdout.flush()
``` |
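To illustrate, a minimal modern-Python sketch (the worker function is invented for the example; the explicit flush pushes the child's line out of its buffer immediately instead of leaving it stranded until the process exits or crashes):
```
import multiprocessing
import sys

def worker():
    print("hello from the child")
    sys.stdout.flush()  # force the line out of the child's buffer right away

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
```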
Is there a Python equivalent to `perl -pi -e`? | 367,115 | 15 | 2008-12-14T22:57:33Z | 367,136 | 23 | 2008-12-14T23:12:50Z | [
"python",
"perl",
"command-line",
"language-features"
] | I know of `python -c '<code>'`, but I'm wondering if there's a more elegant python equivalent to `perl -pi -e '<code>'`. I still use it quite a bit for things like find and replace in a whole directory (`perl -pi -e s/foo/bar/g *` or even `find . | xargs perl -pi -e s/foo/bar/g` for sub-directories).
I actually feel that that which makes Perl Perl (free form Tim Toady-ness) is what makes `perl -pi -e` work so well, while with Python you'd have to do something along the lines of importing the re module, creating an re instance and then capture stdin, but maybe there's a Python shortcut that does all that and I missed it (sorely missed it)... | Python is for muggles. If magic you want, Perl you should use! |
Is there a Python equivalent to `perl -pi -e`? | 367,115 | 15 | 2008-12-14T22:57:33Z | 367,181 | 9 | 2008-12-14T23:53:48Z | [
"python",
"perl",
"command-line",
"language-features"
] | I know of `python -c '<code>'`, but I'm wondering if there's a more elegant python equivalent to `perl -pi -e '<code>'`. I still use it quite a bit for things like find and replace in a whole directory (`perl -pi -e s/foo/bar/g *` or even `find . | xargs perl -pi -e s/foo/bar/g` for sub-directories).
I actually feel that that which makes Perl Perl (free form Tim Toady-ness) is what makes `perl -pi -e` work so well, while with Python you'd have to do something along the lines of importing the re module, creating an re instance and then capture stdin, but maybe there's a Python shortcut that does all that and I missed it (sorely missed it)... | The command line usage from '`python -h`' certainly strongly suggests there is no such equivalent. Perl tends to make extensive use of '`$_`' (your examples make implicit use of it), and I don't think Python supports any similar concept, thereby making Python equivalents of the Perl one-liners much harder. |
Is there a Python equivalent to `perl -pi -e`? | 367,115 | 15 | 2008-12-14T22:57:33Z | 367,238 | 9 | 2008-12-15T00:58:30Z | [
"python",
"perl",
"command-line",
"language-features"
] | I know of `python -c '<code>'`, but I'm wondering if there's a more elegant python equivalent to `perl -pi -e '<code>'`. I still use it quite a bit for things like find and replace in a whole directory (`perl -pi -e s/foo/bar/g *` or even `find . | xargs perl -pi -e s/foo/bar/g` for sub-directories).
I actually feel that that which makes Perl Perl (free form Tim Toady-ness) is what makes `perl -pi -e` work so well, while with Python you'd have to do something along the lines of importing the re module, creating an re instance and then capture stdin, but maybe there's a Python shortcut that does all that and I missed it (sorely missed it)... | An equivalent to -pi isn't that hard to write in Python.
1. Write yourself a handy module with the -p and -i features you really like. Let's call it `pypi.py`.
2. Use `python -c 'import pypi; pypi.subs("this","that")'`
You can implement the basic -p loop with the [fileinput](http://docs.python.org/library/fileinput.html) module.
You'd have a function, `subs`, that implements the essential "-i" algorithm of opening a file, saving the backup copy, and doing the substitution on each line.
There are some activestate recipes like this. Here are some:
* <http://code.activestate.com/recipes/437932/>
* <http://code.activestate.com/recipes/435904/>
* <http://code.activestate.com/recipes/576537/>
Not built-in. But not difficult to write. And once written easy to customize. |
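A minimal sketch of what such a `subs` helper might look like, built on `fileinput`'s in-place mode (the `.bak` backup suffix is an illustrative choice):
```
import fileinput
import re
import sys

def subs(pattern, repl, files):
    # emulates perl -pi.bak -e 's/pattern/repl/g': inside the loop,
    # anything written to stdout replaces the corresponding input line
    for line in fileinput.input(files, inplace=True, backup='.bak'):
        sys.stdout.write(re.sub(pattern, repl, line))
```
Called as `subs('foo', 'bar', filenames)`, it rewrites each file and leaves the original behind as `name.bak`.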
Is there a Python equivalent to `perl -pi -e`? | 367,115 | 15 | 2008-12-14T22:57:33Z | 14,429,701 | 9 | 2013-01-20T21:45:29Z | [
"python",
"perl",
"command-line",
"language-features"
] | I know of `python -c '<code>'`, but I'm wondering if there's a more elegant python equivalent to `perl -pi -e '<code>'`. I still use it quite a bit for things like find and replace in a whole directory (`perl -pi -e s/foo/bar/g *` or even `find . | xargs perl -pi -e s/foo/bar/g` for sub-directories).
I actually feel that that which makes Perl Perl (free form Tim Toady-ness) is what makes `perl -pi -e` work so well, while with Python you'd have to do something along the lines of importing the re module, creating an re instance and then capture stdin, but maybe there's a Python shortcut that does all that and I missed it (sorely missed it)... | I know this is a couple of years too late, but I've recently
found a very nice tool called [pyp](http://code.google.com/p/pyp/), which does
exactly what you've asked for.
I think your command should be:
```
pyp "p.replace('foo','bar')"
``` |
Splitting a string into words and punctuation | 367,155 | 22 | 2008-12-14T23:30:33Z | 367,292 | 35 | 2008-12-15T01:53:18Z | [
"python",
"string",
"split"
] | I'm trying to split a string up into words and punctuation, adding the punctuation to the list produced by the split.
For instance:
```
>>> c = "help, me"
>>> print c.split()
['help,', 'me']
```
What I really want the list to look like is:
```
['help', ',', 'me']
```
So, I want the string split at whitespace with the punctuation split from the words.
I've tried to parse the string first and then run the split:
```
>>> for character in c:
... if character in ".,;!?":
... outputCharacter = " %s" % character
... else:
... outputCharacter = character
... separatedPunctuation += outputCharacter
>>> print separatedPunctuation
help , me
>>> print separatedPunctuation.split()
['help', ',', 'me']
```
This produces the result I want, but is painfully slow on large files.
Is there a way to do this more efficiently? | This is more or less the way to do it:
```
>>> import re
>>> re.findall(r"[\w']+|[.,!?;]", "Hello, I'm a string!")
['Hello', ',', "I'm", 'a', 'string', '!']
```
The trick is not to think about where to split the string, but about what to include in the tokens.
Caveats:
* The underscore (\_) is considered an inner-word character. Replace \w if you don't want that.
* This will not work with (single) quotes in the string.
* Put any additional punctuation marks you want to use in the right half of the regular expression.
* Anything not explicitly mentioned in the regex is silently dropped.
Splitting a string into words and punctuation | 367,155 | 22 | 2008-12-14T23:30:33Z | 8,930,959 | 13 | 2012-01-19T17:58:09Z | [
"python",
"string",
"split"
] | I'm trying to split a string up into words and punctuation, adding the punctuation to the list produced by the split.
For instance:
```
>>> c = "help, me"
>>> print c.split()
['help,', 'me']
```
What I really want the list to look like is:
```
['help', ',', 'me']
```
So, I want the string split at whitespace with the punctuation split from the words.
I've tried to parse the string first and then run the split:
```
>>> for character in c:
... if character in ".,;!?":
... outputCharacter = " %s" % character
... else:
... outputCharacter = character
... separatedPunctuation += outputCharacter
>>> print separatedPunctuation
help , me
>>> print separatedPunctuation.split()
['help', ',', 'me']
```
This produces the result I want, but is painfully slow on large files.
Is there a way to do this more efficiently? | Here is a Unicode-aware version:
```
re.findall(r"\w+|[^\w\s]", text, re.UNICODE)
```
The first alternative catches sequences of word characters (as defined by unicode, so "résumé" won't turn into `['r', 'sum']`); the second catches individual non-word characters, ignoring whitespace.
Note that, unlike the top answer, this treats the single quote as separate punctuation (e.g. "I'm" -> `['I', "'", 'm']`). This appears to be standard in NLP, so I consider it a feature. |
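For example, under modern Python (where `str` patterns are Unicode-aware by default, making the flag redundant but harmless):
```
import re

text = "Hello, I'm a résumé!"
print(re.findall(r"\w+|[^\w\s]", text, re.UNICODE))
# ['Hello', ',', 'I', "'", 'm', 'a', 'résumé', '!']
```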
Setting up a foreign key to an abstract base class with Django | 367,461 | 21 | 2008-12-15T04:16:33Z | 367,479 | 7 | 2008-12-15T04:33:46Z | [
"python",
"django",
"inheritance",
"django-models"
] | I've factored out common attributes from two classes into an abstract base class, however I have another model that needs to reference either one of those classes. It's not possible to reference an ABC as it doesn't actually have a database table.
The following example should illustrate my problem:
```
class Answer(models.Model):
ovramt = models.ForeignKey("Ovramt")
question = models.ForeignKey("Question")
    answer = models.CharField(max_length=3, choices=(("yes","yes"),("no","no"),("NA","N/A")))
likelihood = models.IntegerField(choices=LIKELY_CHOICES)
consequence = models.IntegerField(choices=CONSEQUENCE_CHOICES)
class Meta:
abstract = True
class Answer_A(Answer):
resident = models.ForeignKey("Resident")
def __unicode__(self):
return u"%s - %s - %s" %(self.ovramt.ssa.name, self.resident, self.question)
class Answer_B(Answer):
def __unicode__(self):
return u"%s - %s" %(self.ovramt.ssa.name, self.question)
class Answer_Risk(models.Model):
answer = models.ForeignKey("Answer")
risk = models.CharField(max_length=200)
def __unicode__(self):
return self.risk
```
Answer\_A and Answer\_B are slightly different in that Answer\_A also needs a FK relationship to another table. Answer\_B may also require some specific attributes later. The problem would STILL exist if I had Answer\_B be the superclass - and have Answer\_A subclass or compose it.
A 'Risk' is the same whether it's Answer\_A or Answer\_B. I also have other models that need to reference an 'Answer' regardless of its sub-type. How can this be done? How can you reference a type regardless of its sub-type?
Update:
I was trying to avoid a join operation but I don't think I'm going to be able to. Would it be worth having the reference to 'Resident' in all 'Answer's and just nulling it where required? Or is that considered very bad practice? | My gut would be to suggest removing the abstract modifier on the base class. You'll get the same model structure, but Answer will be its own table. The downside of this is that if these are large tables and/or your queries are complex, queries against it could be noticeably slower.
Alternatively, you could keep your models as is, but replace the ForeignKey to Answer with a [GenericForeignKey](https://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#generic-relations). What you lose in the syntactic sugar of model inheritance, you gain a bit in query speed.
I don't believe it's possible to reference an abstract base model by ForeignKey (or anything functionally the same). |
Setting up a foreign key to an abstract base class with Django | 367,461 | 21 | 2008-12-15T04:16:33Z | 367,765 | 15 | 2008-12-15T08:24:54Z | [
"python",
"django",
"inheritance",
"django-models"
] | I've factored out common attributes from two classes into an abstract base class, however I have another model that needs to reference either one of those classes. It's not possible to reference an ABC as it doesn't actually have a database table.
The following example should illustrate my problem:
```
class Answer(models.Model):
ovramt = models.ForeignKey("Ovramt")
question = models.ForeignKey("Question")
    answer = models.CharField(max_length=3, choices=(("yes","yes"),("no","no"),("NA","N/A")))
likelihood = models.IntegerField(choices=LIKELY_CHOICES)
consequence = models.IntegerField(choices=CONSEQUENCE_CHOICES)
class Meta:
abstract = True
class Answer_A(Answer):
resident = models.ForeignKey("Resident")
def __unicode__(self):
return u"%s - %s - %s" %(self.ovramt.ssa.name, self.resident, self.question)
class Answer_B(Answer):
def __unicode__(self):
return u"%s - %s" %(self.ovramt.ssa.name, self.question)
class Answer_Risk(models.Model):
answer = models.ForeignKey("Answer")
risk = models.CharField(max_length=200)
def __unicode__(self):
return self.risk
```
Answer\_A and Answer\_B are slightly different in that Answer\_A also needs a FK relationship to another table. Answer\_B may also require some specific attributes later. The problem would STILL exist if I had Answer\_B be the superclass - and have Answer\_A subclass or compose it.
A 'Risk' is the same whether it's Answer\_A or Answer\_B. I also have other models that need to reference an 'Answer' regardless of its sub-type. How can this be done? How can you reference a type regardless of its sub-type?
Update:
I was trying to avoid a join operation but I don't think I'm going to be able to. Would it be worth having the reference to 'Resident' in all 'Answer's and just nulling it where required? Or is that considered very bad practice? | A [generic relation](https://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#generic-relations) seems to be the solution. But it will complicate things even further.
It seems to me your model structure is already more complex than necessary. I would simply merge all three `Answer` models into one. This way:
* `Answer_Risk` would work without modification.
* You can set `resident` to None (NULL) in case of an `Answer_A`.
* You can return different string representations depending on `resident == None` (in other words, the same functionality).
One more thing: are your answers likely to have more than one risk? If each answer will have at most one risk, you should consider the following alternative implementations:
* Using a [one-to-one relationship](http://docs.djangoproject.com/en/dev/topics/db/models/#one-to-one-relationships)
* Demoting risk as a field (or any number of fields) inside `Answer` class.
My main concern is neither database structure nor performance (although these changes should improve performance) but *code maintainability*. |
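For concreteness, a sketch of what the merged model might look like (field definitions follow the question; making `resident` nullable with `null=True, blank=True` is my assumption about how to represent the "B"-style answers):
```
class Answer(models.Model):
    ovramt = models.ForeignKey("Ovramt")
    question = models.ForeignKey("Question")
    answer = models.CharField(max_length=3,
                              choices=(("yes","yes"),("no","no"),("NA","N/A")))
    likelihood = models.IntegerField(choices=LIKELY_CHOICES)
    consequence = models.IntegerField(choices=CONSEQUENCE_CHOICES)
    # only "A"-style answers set this; "B"-style answers leave it NULL
    resident = models.ForeignKey("Resident", null=True, blank=True)

    def __unicode__(self):
        if self.resident is None:
            return u"%s - %s" % (self.ovramt.ssa.name, self.question)
        return u"%s - %s - %s" % (self.ovramt.ssa.name, self.resident, self.question)
```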
How much input validation should I be doing on my python functions/methods? | 367,560 | 15 | 2008-12-15T05:39:52Z | 368,054 | 10 | 2008-12-15T11:18:12Z | [
"python",
"validation"
] | I'm interested in how much up front validation people do in the Python they write.
Here are a few examples of simple functions:
```
def factorial(num):
"""Computes the factorial of num."""
def isPalindrome(inputStr):
"""Tests to see if inputStr is the same backwards and forwards."""
def sum(nums):
"""Same as the built-in sum()... computes the sum of all the numbers passed in."""
```
How thoroughly do you check the input values before beginning computation, and how do you do your checking? Do you throw some kind of proprietary exception if input is faulty (BadInputException defined in the same module, for example)? Do you just start your calculation and figure it will throw an exception at some point if bad data was passed in ("asd" to factorial, for example)?
When the passed in value is supposed to be a container do you check not only the container but all the values inside it?
What about situations like factorial, where what's passed in might be convertible to an int (e.g. a float) but you might lose precision when doing so? | I `assert` what's absolutely essential.
Important: What's *absolutely* essential. Some people over-test things.
```
def factorial(num):
assert int(num)
assert num > 0
```
Isn't completely correct. long is also a legal possibility.
```
def factorial(num):
assert type(num) in ( int, long )
assert num > 0
```
Is better, but still not perfect. Many Python types (like rational numbers, or number-like objects) can also work in a good factorial function. It's hard to assert that an object has basic integer-like properties without being too specific and eliminating future unthought-of classes from consideration.
I never define unique exceptions for individual functions. I define a unique exception for a significant module or package. Usually, however, just an `Error` class or something similar. That way the application says `except somelibrary.Error,e:` which is about all you need to know. Fine-grained exceptions get fussy and silly.
I've never done this, but I can see places where it might be necessary.
```
assert all( type(i) in (int,long) for i in someList )
```
Generally, however, the ordinary Python built-in type checks work fine. They find almost all of the exceptional situations that matter almost all the time. When something isn't the right type, Python raises a TypeError that always points at the right line of code.
BTW. I only add asserts at design time if I'm absolutely certain the function will be abused. I sometimes add assertions later when I have a unit test that fails in an obscure way. |
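As a side note on the "integer-like without enumerating classes" problem: later Python versions provide `operator.index`, which asks the object itself whether it is a true integer. A sketch using it (the error message wording is my own):
```
import operator

def factorial(num):
    # accepts int and anything implementing __index__; rejects 3.5 and "7"
    n = operator.index(num)
    if n < 0:
        raise ValueError("factorial() not defined for negative values")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```
This rejects floats with a `TypeError` automatically, sidestepping the precision-loss question entirely.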
How do I build a numpy array from a generator? | 367,565 | 81 | 2008-12-15T05:44:31Z | 367,599 | 72 | 2008-12-15T06:31:12Z | [
"python",
"numpy",
"generator"
] | How can I build a numpy array out of a generator object?
Let me illustrate the problem:
```
>>> import numpy
>>> def gimme():
... for x in xrange(10):
... yield x
...
>>> gimme()
<generator object at 0x28a1758>
>>> list(gimme())
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> numpy.array(xrange(10))
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> numpy.array(gimme())
array(<generator object at 0x28a1758>, dtype=object)
>>> numpy.array(list(gimme()))
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```
In this instance, gimme() is the generator whose output I'd like to turn into an array. However, the array constructor does not iterate over the generator, it simply stores the generator itself. The behaviour I desire is that from numpy.array(list(gimme())), but I don't want to pay the memory overhead of having the intermediate list and the final array in memory at the same time. Is there a more space-efficient way? | Numpy arrays require their length to be set explicitly at creation time, unlike Python lists. This is necessary so that space for each item can be consecutively allocated in memory. Consecutive allocation is the key feature of numpy arrays: this, combined with the native-code implementation, lets operations on them execute much more quickly than on regular lists.
Keeping this in mind, it is technically impossible to take a generator object and turn it into an array unless you either:
(a) can predict how many elements it will yield when run:
```
my_array = numpy.zeros(predict_length())
for i, el in enumerate(gimme()): my_array[i] = el
```
(b) are willing to store its elements in an intermediate list :
```
my_array = numpy.array(list(gimme()))
```
(c) can make two identical generators, run through the first one to find the total length, initialize the array, and then run through the generator again to find each element:
```
length = sum(1 for el in gimme())
my_array = numpy.zeros(length)
for i, el in enumerate(gimme()): my_array[i] = el
```
(a) is probably what you're looking for. (b) is space inefficient, and (c) is time inefficient (you have to go through the generator twice). |
How do I build a numpy array from a generator? | 367,565 | 81 | 2008-12-15T05:44:31Z | 580,416 | 115 | 2009-02-24T03:53:18Z | [
"python",
"numpy",
"generator"
] | How can I build a numpy array out of a generator object?
Let me illustrate the problem:
```
>>> import numpy
>>> def gimme():
... for x in xrange(10):
... yield x
...
>>> gimme()
<generator object at 0x28a1758>
>>> list(gimme())
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> numpy.array(xrange(10))
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> numpy.array(gimme())
array(<generator object at 0x28a1758>, dtype=object)
>>> numpy.array(list(gimme()))
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```
In this instance, gimme() is the generator whose output I'd like to turn into an array. However, the array constructor does not iterate over the generator, it simply stores the generator itself. The behaviour I desire is that from numpy.array(list(gimme())), but I don't want to pay the memory overhead of having the intermediate list and the final array in memory at the same time. Is there a more space-efficient way? | One Google search beyond this Stack Overflow result, I found that there is a [`numpy.fromiter(data, dtype, count)`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromiter.html). The default `count=-1` takes all elements from the iterable. It requires a `dtype` to be set explicitly. In my case, this worked:
`numpy.fromiter(something.generate(from_this_input), float)` |
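A concrete sketch using the question's own generator (modern-Python syntax; passing `count` lets numpy pre-allocate the full array up front instead of growing it, though the default `count=-1` also works):
```
import numpy

def gimme():
    for x in range(10):
        yield x

# dtype is mandatory; count=10 pre-allocates the array, no intermediate list
arr = numpy.fromiter(gimme(), dtype=float, count=10)
print(arr)  # [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
```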
Generating random text strings of a given pattern | 367,586 | 11 | 2008-12-15T06:03:40Z | 367,594 | 12 | 2008-12-15T06:18:07Z | [
"python",
"string",
"random"
] | I need to generate random text strings of a particular format. Would like some ideas so that I can code it up in Python. The format is <8 digit number><15 character string>. | See an example - [Recipe 59873: Random Password Generation](http://code.activestate.com/recipes/59873/) .
Building on the recipe, here is a solution to your question:
```
from random import choice
import string
def GenPasswd2(length=8, chars=string.letters + string.digits):
return ''.join([choice(chars) for i in range(length)])
>>> GenPasswd2(8,string.digits) + GenPasswd2(15,string.ascii_letters)
'28605495YHlCJfMKpRPGyAw'
>>>
``` |
Generating random text strings of a given pattern | 367,586 | 11 | 2008-12-15T06:03:40Z | 367,596 | 37 | 2008-12-15T06:19:22Z | [
"python",
"string",
"random"
] | I need to generate random text strings of a particular format. Would like some ideas so that I can code it up in Python. The format is <8 digit number><15 character string>. | ```
#!/usr/bin/python
import random
import string
digits = "".join( [random.choice(string.digits) for i in xrange(8)] )
chars = "".join( [random.choice(string.letters) for i in xrange(15)] )
print digits + chars
```
EDIT: liked the idea of using random.choice better than randint() so I've updated the code to reflect that.
Note: this assumes lowercase and uppercase characters are desired. If lowercase only then change the second list comprehension to read:
```
chars = "".join( [random.choice(string.letters[:26]) for i in xrange(15)] )
```
Obviously for uppercase only you can just flip that around so the slice is [26:] instead of the other way around. |
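As a side note for readers on modern Python: `string.letters` no longer exists there (it became `string.ascii_letters`), so a version of the same idea might look like this (the function name is just illustrative):
```
import random
import string

def make_id():
    # <8 digit number><15 character string>, as the question specifies
    digits = ''.join(random.choice(string.digits) for _ in range(8))
    letters = ''.join(random.choice(string.ascii_letters) for _ in range(15))
    return digits + letters

print(make_id())  # e.g. '28605495YHlCJfMKpRPGyAw'
```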
Making a virtual package available via sys.modules | 368,057 | 9 | 2008-12-15T11:20:14Z | 368,178 | 13 | 2008-12-15T12:13:05Z | [
"python",
"import",
"module"
] | Say I have a package "mylibrary".
I want to make "mylibrary.config" available for import, either as a dynamically created module, or a module imported from an entirely different place that would then basically be "mounted" inside the "mylibrary" namespace.
I.e., I do:
```
import sys, types
sys.modules['mylibrary.config'] = types.ModuleType('config')
```
Given that setup:
```
>>> import mylibrary.config # -> works
>>> from mylibrary import config
<type 'exceptions.ImportError'>: cannot import name config
```
Even stranger:
```
>>> import mylibrary.config as X
<type 'exceptions.ImportError'>: cannot import name config
```
So it seems that using the direct import works, the other forms do not. Is it possible to make those work as well? | You need to monkey-patch the module not only into sys.modules, but also into its parent module:
```
>>> import sys,types,xml
>>> xml.config = sys.modules['xml.config'] = types.ModuleType('xml.config')
>>> import xml.config
>>> from xml import config
>>> from xml import config as x
>>> x
<module 'xml.config' (built-in)>
``` |
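The two assignments can be wrapped in a small helper. A sketch (the helper name `mount_module` is made up, and `xml` is reused purely as a demo parent package, as in the snippet above):
```
import sys
import types

def mount_module(parent_name, child_name):
    # attach an empty module as parent_name.child_name so that both
    # "import parent.child" and "from parent import child" succeed
    parent = __import__(parent_name)
    full_name = "%s.%s" % (parent_name, child_name)
    mod = types.ModuleType(full_name)
    sys.modules[full_name] = mod      # satisfies "import parent.child"
    setattr(parent, child_name, mod)  # satisfies "from parent import child"
    return mod

mount_module("xml", "config")
from xml import config  # no ImportError now
```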
How can I stop a While loop? | 368,545 | 3 | 2008-12-15T14:35:02Z | 368,550 | 8 | 2008-12-15T14:37:59Z | [
"python",
"while-loop"
] | I wrote a `while loop` in a function, but don't know how to stop it. When it doesn't meet its final condition, the loop just goes on forever. How can I stop it?
```
def determine_period(universe_array):
period=0
tmp=universe_array
while True:
tmp=apply_rules(tmp)#aplly_rules is a another function
period+=1
if numpy.array_equal(tmp,universe_array) is True:
break #i want the loop to stop and return 0 if the
#period is bigger than 12
if period>12: #i wrote this line to stop it..but seems it
#doesnt work....help..
return 0
else:
return period
``` | just indent your code correctly:
```
def determine_period(universe_array):
period=0
tmp=universe_array
while True:
        tmp=apply_rules(tmp) # apply_rules is another function
        period+=1
        if numpy.array_equal(tmp,universe_array):
            return period
        if period>12: # give up and report 0 once the period exceeds 12
            return 0
```
You need to understand that the `break` statement in your example will exit the infinite loop you've created with `while True`. So when the break condition is True, the program will quit the infinite loop and continue to the next indented block. Since there is no following block in your code, the function ends and doesn't return anything. So I've fixed your code by replacing the `break` statement with a `return` statement.
Following your idea to use an infinite loop, this is the best way to write it:
```
def determine_period(universe_array):
period=0
tmp=universe_array
while True:
        tmp=apply_rules(tmp) # apply_rules is another function
        period+=1
        if numpy.array_equal(tmp,universe_array):
            break
        if period>12: # stop and report 0 once the period exceeds 12
period = 0
break
return period
``` |
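An alternative sketch that avoids the infinite loop altogether by bounding the iteration with `for`. To keep the example self-contained, the rule function is passed in and plain `==` is used; with numpy arrays you would keep `numpy.array_equal`:
```
def determine_period(state, apply_rules, max_period=12):
    # return the smallest period <= max_period at which the state recurs, else 0
    tmp = state
    for period in range(1, max_period + 1):
        tmp = apply_rules(tmp)
        if tmp == state:
            return period
    return 0

def rotate(s):
    return s[1:] + s[:1]  # toy rule: cyclic shift by one position

print(determine_period([1, 2, 3, 4], rotate))     # 4
print(determine_period(list(range(13)), rotate))  # 0 (period 13 > 12)
```
With no `while True` there is nothing to forget to break out of.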
How can I stop a While loop? | 368,545 | 3 | 2008-12-15T14:35:02Z | 368,554 | 7 | 2008-12-15T14:38:42Z | [
"python",
"while-loop"
] | I wrote a `while loop` in a function, but don't know how to stop it. When it doesn't meet its final condition, the loop just goes on forever. How can I stop it?
```
def determine_period(universe_array):
period=0
tmp=universe_array
while True:
tmp=apply_rules(tmp)#aplly_rules is a another function
period+=1
if numpy.array_equal(tmp,universe_array) is True:
break #i want the loop to stop and return 0 if the
#period is bigger than 12
if period>12: #i wrote this line to stop it..but seems it
#doesnt work....help..
return 0
else:
return period
``` | ```
def determine_period(universe_array):
period=0
tmp=universe_array
while period<12:
        tmp=apply_rules(tmp) # apply_rules is another function
        period+=1
        if numpy.array_equal(tmp,universe_array):
            return period
    return 0
``` |
Questions about Setuptools and alternatives | 368,636 | 20 | 2008-12-15T15:06:49Z | 370,062 | 24 | 2008-12-15T23:29:35Z | [
"python",
"packaging",
"setuptools",
"pip"
] | I've seen a good bit of setuptools bashing on the internets lately. Most recently, I read James Bennett's [On packaging](http://www.b-list.org/weblog/2008/dec/14/packaging/) post on why no one should be using setuptools. From my time in #python on Freenode, I know that there are a few souls there who absolutely detest it. I would count myself among them, but I do actually use it.
I've used setuptools for enough projects to be aware of its deficiencies, and I would prefer something better. I don't particularly like the egg format and how it's deployed. With all of setuptools' problems, I haven't found a better alternative.
My understanding of tools like [pip](http://pip.openplans.org/) is that it's meant to be an easy\_install replacement (not setuptools). In fact, pip uses some setuptools components, right?
Most of my packages make use of a setuptools-aware setup.py, which declares all of the dependencies. When they're ready, I'll build an sdist, bdist, and bdist\_egg, and upload them to pypi.
If I wanted to switch to using pip, what kind of changes would I need to make to rid myself of easy\_install dependencies? Where are the dependencies declared? I'm guessing that I would need to get away from using the egg format, and provide just source distributions. If so, how do I generate the egg-info directories? Or do I even need to?
How would this change my usage of virtualenv? Doesn't virtualenv use easy\_install to manage the environments?
How would this change my usage of the setuptools provided "develop" command? Should I not use that? What's the alternative?
I'm basically trying to get a picture of what my development workflow will look like.
Before anyone suggests it, I'm not looking for an OS-dependent solution. I'm mainly concerned with debian linux, but deb packages are not an option, for the reasons Ian Bicking outlines [here](http://blog.ianbicking.org/2008/12/14/a-few-corrections-to-on-packaging/). | pip uses Setuptools, and doesn't require any changes to packages. It actually installs packages with Setuptools, using:
```
python -c 'import setuptools; __file__="setup.py"; execfile(__file__)' \
install \
--single-version-externally-managed
```
Because it uses that option (`--single-version-externally-managed`) it doesn't ever install eggs as zip files, doesn't support multiple simultaneously installed versions of software, and the packages are installed flat (like `python setup.py install` works if you use only distutils). Egg metadata is still installed. pip also, like easy\_install, downloads and installs all the requirements of a package.
*In addition* you can also use a requirements file to add other packages that should be installed in a batch, and to make version requirements more exact (without putting those exact requirements in your `setup.py` files). But if you don't make requirements files then you'd use it just like easy\_install.
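For illustration, a requirements file might look something like this (the package names and version pins below are invented examples, not taken from the post):

```
# requirements.txt -- exact, conservative pins live here instead of setup.py
SomeLibrary==1.2.3
OtherLibrary>=2.0,<3.0
# an editable checkout, of the kind pip's -e option supports:
-e svn+http://mysite/svn/Project/trunk#egg=Project
```

Everything in the file is then installed in one batch with `pip install -r requirements.txt`.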
For your `install_requires` I don't recommend any changes, unless you have been trying to create very exact requirements there that are known to be good. I think there's a limit to how exact you can usefully be in `setup.py` files about versions, because you can't really know what the future compatibility of new libraries will be like, and I don't recommend you try to predict this. Requirement files are an alternate place to lay out conservative version requirements.
You can still use `python setup.py develop`, and in fact if you do `pip install -e svn+http://mysite/svn/Project/trunk#egg=Project` it will check that out (into `src/project`) and run `setup.py develop` on it. So that workflow isn't any different really.
If you run pip verbosely (like `pip install -vv`) you'll see a lot of the commands that are run, and you'll probably recognize most of them. |
Python UnicodeDecodeError - Am I misunderstanding encode? | 368,805 | 49 | 2008-12-15T15:57:24Z | 370,199 | 190 | 2008-12-16T00:45:11Z | [
"python",
"unicode",
"ascii",
"encode",
"ignore"
] | Any thoughts on why this isn't working? I really thought 'ignore' would do the right thing.
```
>>> 'add \x93Monitoring\x93 to list '.encode('latin-1','ignore')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 4: ordinal not in range(128)
``` | …there's a reason they're called "encodings"…
A little preamble: think of unicode as the norm, or the ideal state. Unicode is just a table of characters. №65 is latin capital A. №937 is greek capital omega. Just that.
In order for a computer to store and-or manipulate Unicode, it has to *encode* it into bytes. The most straightforward *encoding* of Unicode is UCS-4; every character occupies 4 bytes, and all ~1000000 characters are available. The 4 bytes contain the number of the character in the Unicode tables as a 4-byte integer. Another very useful encoding is UTF-8, which can encode any Unicode character with one to four bytes. But there also are some limited encodings, like "latin1", which include a very limited range of characters, mostly used by Western countries. Such *encodings* use only one byte per character.
Basically, Unicode can be *encoded* with many encodings, and encoded strings can be *decoded* to Unicode. The thing is, Unicode came quite late, so all of us that grew up using an 8-bit *character set* learned too late that all this time we worked with *encoded* strings. The encoding could be ISO8859-1, or windows CP437, or CP850, or, or, or, depending on our system default.
So when, in your source code, you enter the string "add "Monitoring" to list" (and I think you wanted the string "add "Monitoring" to list", note the second quote), you actually are using a string already *encoded* according to your system's default codepage (by the byte \x93 I assume you use Windows codepage 1252, "Western"). If you want to get Unicode from that, you need to *decode* the string from the "cp1252" encoding.
So, what you meant to do, was:
```
"add \x93Monitoring\x94 to list".decode("cp1252", "ignore")
```
It's unfortunate that Python 2.x includes an `.encode` method for strings too; this is a convenience function for "special" encodings, like the "zip" or "rot13" or "base64" ones, which have nothing to do with Unicode.
Anyway, all you have to remember for your to-and-fro Unicode conversions is:
* a Unicode string gets *encoded* to a Python 2.x string (actually, a sequence of bytes)
* a Python 2.x string gets *decoded* to a Unicode string
In both cases, you need to specify the *encoding* that will be used.
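As a quick sketch of those two rules (shown here in Python 3 syntax, where the byte/text split is explicit; the answer itself predates it):

```python
# cp1252-encoded bytes, as in the question (\x93 and \x94 are curly quotes)
raw = b"add \x93Monitoring\x94 to list"

text = raw.decode("cp1252")    # bytes -> text: name the encoding you decode from
print(text)                    # add "Monitoring" to list

round_trip = text.encode("cp1252")  # text -> bytes: name the encoding again
assert round_trip == raw
```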
I'm not very clear, I'm sleepy, but I sure hope I help.
PS A humorous side note: Mayans didn't have Unicode; neither did ancient Romans, ancient Greeks, or ancient Egyptians. They all had their own "encodings", and had little to no respect for other cultures. All these civilizations crumbled to dust. Think about it, people! Make your apps Unicode-aware, for the good of mankind. :)
PS2 Please don't spoil the previous message by saying "But the Chinese…". If you feel inclined or obligated to do so, though, delay it by thinking that the Unicode BMP is populated mostly by Chinese ideograms, ergo Chinese is the basis of Unicode. I can go on inventing outrageous lies, as long as people develop Unicode-aware applications. Cheers! |
Difference between dict.clear() and assigning {} in Python | 369,898 | 104 | 2008-12-15T22:23:18Z | 369,925 | 187 | 2008-12-15T22:30:29Z | [
"python",
"dictionary"
] | In Python, is there a difference between calling `clear()` and assigning `{}` to a dictionary? If yes, what is it?
Example:
```
d = {"stuff":"things"}
d.clear() #this way
d = {} #vs this way
``` | If you have another variable also referring to the same dictionary, there is a big difference:
```
>>> d = {"stuff": "things"}
>>> d2 = d
>>> d = {}
>>> d2
{'stuff': 'things'}
>>> d = {"stuff": "things"}
>>> d2 = d
>>> d.clear()
>>> d2
{}
```
This is because assigning `d = {}` creates a new, empty dictionary and assigns it to the `d` variable. This leaves `d2` pointing at the old dictionary with items still in it. However, `d.clear()` clears the same dictionary that `d` and `d2` both point at. |
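The identity distinction can also be checked directly with `is`; a small sketch:

```python
d = {"stuff": "things"}
d2 = d                  # second name for the same dict object

d.clear()
assert d2 is d          # still one object, now emptied in place
assert d2 == {}

d = {}                  # rebinds d to a brand-new dict...
assert d is not d2      # ...while d2 still holds the old (cleared) one
```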