| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Iterate a list with indexes in Python | 126,524 | 133 | 2008-09-24T11:08:52Z | 126,533 | 227 | 2008-09-24T11:11:41Z | [
"python",
"list"
] | I could swear I've seen the function (or method) that takes a list, like this `[3, 7, 19]` and makes it into an iterable list of tuples, like so: `[(0,3), (1,7), (2,19)]` to use it instead of:
```
for i in range(len(name_of_list)):
name_of_list[i] = something
```
but I can't remember the name and googling "iterate list" gets nothing. | ```
>>> a = [3,4,5,6]
>>> for i, val in enumerate(a):
... print i, val
...
0 3
1 4
2 5
3 6
>>>
``` |
Iterate a list with indexes in Python | 126,524 | 133 | 2008-09-24T11:08:52Z | 126,535 | 78 | 2008-09-24T11:12:14Z | [
"python",
"list"
] | I could swear I've seen the function (or method) that takes a list, like this `[3, 7, 19]` and makes it into an iterable list of tuples, like so: `[(0,3), (1,7), (2,19)]` to use it instead of:
```
for i in range(len(name_of_list)):
name_of_list[i] = something
```
but I can't remember the name and googling "iterate list" gets nothing. | Yep, that would be the [`enumerate`](http://docs.python.org/library/functions.html#enumerate) function! Or more to the point, you need to do:
```
list(enumerate([3,7,19]))
[(0, 3), (1, 7), (2, 19)]
``` |
Iterate a list with indexes in Python | 126,524 | 133 | 2008-09-24T11:08:52Z | 127,375 | 21 | 2008-09-24T14:10:58Z | [
"python",
"list"
] | I could swear I've seen the function (or method) that takes a list, like this `[3, 7, 19]` and makes it into an iterable list of tuples, like so: `[(0,3), (1,7), (2,19)]` to use it instead of:
```
for i in range(len(name_of_list)):
name_of_list[i] = something
```
but I can't remember the name and googling "iterate list" gets nothing. | Here's another using the `zip` function.
```
>>> a = [3, 7, 19]
>>> zip(range(len(a)), a)
[(0, 3), (1, 7), (2, 19)]
``` |
Find out number of capture groups in Python regular expressions | 127,055 | 16 | 2008-09-24T13:14:50Z | 136,215 | 25 | 2008-09-25T21:18:49Z | [
"python",
"regex"
] | Is there a way to determine how many capture groups there are in a given regular expression?
I would like to be able to do the following:
```
def groups(regexp, s):
""" Returns the first result of re.findall, or an empty default
>>> groups(r'(\d)(\d)(\d)', '123')
('1', '2', '3')
>>> groups(r'(\d)(\d)(\d)', 'abc')
('', '', '')
"""
import re
m = re.search(regexp, s)
if m:
return m.groups()
return ('',) * num_of_groups(regexp)
```
This allows me to do stuff like:
```
first, last, phone = groups(r'(\w+) (\w+) ([\d\-]+)', 'John Doe 555-3456')
```
However, I don't know how to implement `num_of_groups`. (Currently I just work around it.)
**EDIT:** Following the [advice from rslite](http://stackoverflow.com/questions/127055/find-out-number-of-capture-groups-in-python-regular-expressions/127089#127089), I replaced `re.findall` with `re.search`.
`sre_parse` seems like the most robust and comprehensive solution, but requires tree traversal and appears to be a bit heavy.
MizardX's regular expression seems to cover all bases, so I'm going to go with that. | ```
import re

def num_groups(regex):
    return re.compile(regex).groups
``` |
Editing XML as a dictionary in python? | 127,606 | 8 | 2008-09-24T14:44:45Z | 127,678 | 11 | 2008-09-24T14:58:52Z | [
"python",
"xml",
"dictionary"
] | I'm trying to generate customized xml files from a template xml file in python.
Conceptually, I want to read in the template xml, remove some elements, change some text attributes, and write the new xml out to a file. I wanted it to work something like this:
```
conf_base = ConvertXmlToDict('config-template.xml')
conf_base_dict = conf_base.UnWrap()
del conf_base_dict['root-name']['level1-name']['leaf1']
del conf_base_dict['root-name']['level1-name']['leaf2']
conf_new = ConvertDictToXml(conf_base_dict)
```
now I want to write to file, but I don't see how to get to
ElementTree.ElementTree.write()
```
conf_new.write('config-new.xml')
```
Is there some way to do this, or can someone suggest doing this a different way? | I'm not sure if converting the info set to nested dicts first is easier. Using ElementTree, you can do this:
```
import xml.etree.ElementTree as ET
doc = ET.parse("template.xml")
lvl1 = doc.findall("level1-name")[0]
lvl1.remove(lvl1.find("leaf1"))
lvl1.remove(lvl1.find("leaf2"))
# or use del lvl1[idx]
doc.write("config-new.xml")
```
ElementTree was designed so that you don't have to convert your XML trees to lists and attributes first, since it uses exactly that internally.
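For instance, here is a self-contained sketch of the same edit-in-place style, using an in-memory tree (the element names here are illustrative, not from the original template):

```python
import xml.etree.ElementTree as ET

# Build a small tree in memory (element names are made up for illustration)
root = ET.fromstring("<config><level1><leaf1/><leaf2/></level1></config>")
lvl1 = root.find("level1")
lvl1.remove(lvl1.find("leaf1"))      # edit the tree in place
print(ET.tostring(root).decode())    # serialize without any dict round-trip
```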
It also supports a small subset of [XPath](http://effbot.org/zone/element-xpath.htm). |
Editing XML as a dictionary in python? | 127,606 | 8 | 2008-09-24T14:44:45Z | 128,023 | 8 | 2008-09-24T15:56:06Z | [
"python",
"xml",
"dictionary"
] | I'm trying to generate customized xml files from a template xml file in python.
Conceptually, I want to read in the template xml, remove some elements, change some text attributes, and write the new xml out to a file. I wanted it to work something like this:
```
conf_base = ConvertXmlToDict('config-template.xml')
conf_base_dict = conf_base.UnWrap()
del conf_base_dict['root-name']['level1-name']['leaf1']
del conf_base_dict['root-name']['level1-name']['leaf2']
conf_new = ConvertDictToXml(conf_base_dict)
```
now I want to write to file, but I don't see how to get to
ElementTree.ElementTree.write()
```
conf_new.write('config-new.xml')
```
Is there some way to do this, or can someone suggest doing this a different way? | For easy manipulation of XML in python, I like the [Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/) library. It works something like this:
Sample XML File:
```
<root>
<level1>leaf1</level1>
<level2>leaf2</level2>
</root>
```
Python code:
```
from BeautifulSoup import BeautifulStoneSoup, Tag, NavigableString
soup = BeautifulStoneSoup('config-template.xml') # get the parser for the xml file
soup.contents[0].name
# u'root'
```
You can use the node names as methods:
```
soup.root.contents[0].name
# u'level1'
```
It is also possible to use regexes:
```
import re
tags_starting_with_level = soup.findAll(re.compile('^level'))
for tag in tags_starting_with_level: print tag.name
# level1
# level2
```
Adding and inserting new nodes is pretty straightforward:
```
# build and insert a new level with a new leaf
level3 = Tag(soup, 'level3')
level3.insert(0, NavigableString('leaf3'))
soup.root.insert(2, level3)
print soup.prettify()
# <root>
# <level1>
# leaf1
# </level1>
# <level2>
# leaf2
# </level2>
# <level3>
# leaf3
# </level3>
# </root>
``` |
Editing XML as a dictionary in python? | 127,606 | 8 | 2008-09-24T14:44:45Z | 2,303,733 | 18 | 2010-02-20T21:07:56Z | [
"python",
"xml",
"dictionary"
] | I'm trying to generate customized xml files from a template xml file in python.
Conceptually, I want to read in the template xml, remove some elements, change some text attributes, and write the new xml out to a file. I wanted it to work something like this:
```
conf_base = ConvertXmlToDict('config-template.xml')
conf_base_dict = conf_base.UnWrap()
del conf_base_dict['root-name']['level1-name']['leaf1']
del conf_base_dict['root-name']['level1-name']['leaf2']
conf_new = ConvertDictToXml(conf_base_dict)
```
now I want to write to file, but I don't see how to get to
ElementTree.ElementTree.write()
```
conf_new.write('config-new.xml')
```
Is there some way to do this, or can someone suggest doing this a different way? | This'll get you a dict minus attributes... dunno if this is useful to anyone. I was looking for an xml to dict solution myself when I came up with this.
```
import xml.etree.ElementTree as etree
tree = etree.parse('test.xml')
root = tree.getroot()
def xml_to_dict(el):
d={}
if el.text:
d[el.tag] = el.text
else:
d[el.tag] = {}
children = el.getchildren()
if children:
d[el.tag] = map(xml_to_dict, children)
return d
```
This: <http://www.w3schools.com/XML/note.xml>
```
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>
```
Would equal this:
```
{'note': [{'to': 'Tove'},
{'from': 'Jani'},
{'heading': 'Reminder'},
{'body': "Don't forget me this weekend!"}]}
``` |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 127,825 | 22 | 2008-09-24T15:19:27Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | What is the exact error you get? Is it like the following:
```
>>> datetime.datetime.strptime("2008-08-12T12:20:30.656234Z", "%Y-%m-%dT%H:%M:%S.Z")
ValueError: time data did not match format: data=2008-08-12T12:20:30.656234Z fmt=%Y-%m-%dT%H:%M:%S.Z
```
If yes, you can split your input string on ".", and then add the microseconds to the datetime you got.
Try this:
```
>>> def gt(dt_str):
dt, _, us= dt_str.partition(".")
dt= datetime.datetime.strptime(dt, "%Y-%m-%dT%H:%M:%S")
us= int(us.rstrip("Z"), 10)
return dt + datetime.timedelta(microseconds=us)
>>> gt("2008-08-12T12:20:30.656234Z")
datetime.datetime(2008, 8, 12, 12, 20, 30, 656234)
>>>
``` |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 127,872 | 35 | 2008-09-24T15:27:24Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | ```
import re,datetime
s="2008-09-03T20:56:35.450686Z"
d=datetime.datetime(*map(int, re.split('[^\d]', s)[:-1]))
``` |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 127,934 | 46 | 2008-09-24T15:38:17Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | Try the [iso8601](https://bitbucket.org/micktwomey/pyiso8601) module; it does exactly this.
There are several other options mentioned on the [WorkingWithTime](http://wiki.python.org/moin/WorkingWithTime) page on the python.org wiki. |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 127,972 | 80 | 2008-09-24T15:45:02Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | Note in Python 2.6+ and Py3K, the %f character catches microseconds.
```
>>> datetime.datetime.strptime("2008-09-03T20:56:35.450686Z", "%Y-%m-%dT%H:%M:%S.%fZ")
```
See issue [here](http://bugs.python.org/issue1158) |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 15,228,038 | 186 | 2013-03-05T15:44:16Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | The *python-dateutil* package can parse not only RFC 3339 datetime strings like the one in the question, but also other ISO 8601 date and time strings that don't comply with RFC 3339 (such as ones with no UTC offset, or ones that represent only a date).
```
>>> import dateutil.parser
>>> dateutil.parser.parse('2008-09-03T20:56:35.450686Z') # RFC 3339 format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=tzutc())
>>> dateutil.parser.parse('2008-09-03T20:56:35.450686') # ISO 8601 extended format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.parse('20080903T205635.450686') # ISO 8601 basic format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.parse('20080903') # ISO 8601 basic format, date only
datetime.datetime(2008, 9, 3, 0, 0)
``` |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 22,700,869 | 9 | 2014-03-27T22:50:16Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | If you don't want to use dateutil, you can try this function:
```
def from_utc(utcTime,fmt="%Y-%m-%dT%H:%M:%S.%fZ"):
"""
Convert UTC time string to time.struct_time
"""
# change datetime.datetime to time, return time.struct_time type
return datetime.datetime.strptime(utcTime, fmt)
```
Test:
```
from_utc("2007-03-04T21:08:12.123Z")
```
Result:
```
datetime.datetime(2007, 3, 4, 21, 8, 12, 123000)
``` |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 28,528,461 | 14 | 2015-02-15T16:47:44Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | Nobody has mentioned it yet, but these days [Arrow](http://arrow.readthedocs.org/) can also be used as a third-party solution.
```
>>> import arrow
>>> date = arrow.get("2008-09-03T20:56:35.450686Z")
>>> date.datetime
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=tzutc())
``` |
How to parse an ISO 8601-formatted date in Python? | 127,803 | 311 | 2008-09-24T15:17:00Z | 30,696,682 | 45 | 2015-06-07T17:53:25Z | [
"python",
"parsing",
"datetime",
"iso8601",
"rfc3339"
] | I need to parse [RFC 3339](https://tools.ietf.org/html/rfc3339) strings like `"2008-09-03T20:56:35.450686Z"` into Python's `datetime` type.
I have found [`strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) in the Python standard library, but it is not very convenient.
What is the best way to do this? | [Several](http://stackoverflow.com/a/127972/1709587) [answers](http://stackoverflow.com/a/127825/1709587) [here](http://stackoverflow.com/a/22700869/1709587) [suggest](http://stackoverflow.com/a/28979667/1709587) using [`datetime.datetime.strptime`](https://docs.python.org/library/datetime.html#datetime.datetime.strptime) to parse RFC 3339 or ISO 8601 datetimes with timezones, like the one exhibited in the question:
```
2008-09-03T20:56:35.450686Z
```
This is a bad idea.
Assuming that you want to support the full RFC 3339 format, including support for UTC offsets other than zero, then the code these answers suggest does not work. Indeed, it *cannot* work, because parsing RFC 3339 syntax using `strptime` is impossible. The format strings used by Python's datetime module are incapable of describing RFC 3339 syntax.
The problem is UTC offsets. The [RFC 3339 Internet Date/Time Format](https://tools.ietf.org/html/rfc3339#section-5.6) requires that every date-time includes a UTC offset, and that those offsets can either be `Z` (short for "Zulu time") or in `+HH:MM` or `-HH:MM` format, like `+05:00` or `-10:30`.
Consequently, these are all valid RFC 3339 datetimes:
* `2008-09-03T20:56:35.450686Z`
* `2008-09-03T20:56:35.450686+05:00`
* `2008-09-03T20:56:35.450686-10:30`
Alas, the format strings used by `strptime` and `strftime` have no directive that corresponds to UTC offsets in RFC 3339 format. A complete list of the directives they support can be found at <https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior>, and the only UTC offset directive included in the list is `%z`:
> ### %z
>
> UTC offset in the form +HHMM or -HHMM (empty string if the object is naive).
>
> Example: (empty), +0000, -0400, +1030
This doesn't match the format of an RFC 3339 offset, and indeed if we try to use `%z` in the format string and parse an RFC 3339 date, we'll fail:
```
>>> from datetime import datetime
>>> datetime.strptime("2008-09-03T20:56:35.450686Z", "%Y-%m-%dT%H:%M:%S.%f%z")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/usr/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data '2008-09-03T20:56:35.450686Z' does not match format '%Y-%m-%dT%H:%M:%S.%f%z'
>>> datetime.strptime("2008-09-03T20:56:35.450686+05:00", "%Y-%m-%dT%H:%M:%S.%f%z")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/usr/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data '2008-09-03T20:56:35.450686+05:00' does not match format '%Y-%m-%dT%H:%M:%S.%f%z'
```
(Actually, the above is just what you'll see in Python 3. In Python 2 we'll fail for an even simpler reason, which is that [`strptime` does not implement the `%z` directive at all in Python 2](https://bugs.python.org/issue17342).)
The multiple answers here that recommend `strptime` all work around this by including a literal `Z` in their format string, which matches the `Z` from the question asker's example datetime string (and discards it, producing a `datetime` object without a timezone):
```
>>> datetime.strptime("2008-09-03T20:56:35.450686Z", "%Y-%m-%dT%H:%M:%S.%fZ")
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
```
Since this discards timezone information that was included in the original datetime string, it's questionable whether we should regard even this result as correct. But more importantly, because this approach involves *hard-coding a particular UTC offset into the format string*, it will choke the moment it tries to parse any RFC 3339 datetime with a different UTC offset:
```
>>> datetime.strptime("2008-09-03T20:56:35.450686+05:00", "%Y-%m-%dT%H:%M:%S.%fZ")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/usr/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data '2008-09-03T20:56:35.450686+05:00' does not match format '%Y-%m-%dT%H:%M:%S.%fZ'
```
Unless you're *certain* that you only need to support RFC 3339 datetimes in Zulu time, and not ones with other timezone offsets, don't use `strptime`. Use one of the many other approaches described in answers here instead. |
Should Python import statements always be at the top of a module? | 128,478 | 202 | 2008-09-24T17:21:47Z | 128,522 | 11 | 2008-09-24T17:30:04Z | [
"python",
"optimization",
"coding-style"
] | [PEP 08](http://www.python.org/dev/peps/pep-0008/) states:
> Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed?
Isn't this:
```
class SomeClass(object):
    def not_often_called(self):
from datetime import datetime
self.datetime = datetime.now()
```
more efficient than this?
```
from datetime import datetime
class SomeClass(object):
    def not_often_called(self):
self.datetime = datetime.now()
``` | The first variant is indeed more efficient than the second when the function is called either zero or one times. With the second and subsequent invocations, however, the "import every call" approach is actually less efficient. See [this link](http://wiki.python.org/moin/PythonSpeed/PerformanceTips#head-c849d5d5d94bc3eacbff9d5746af4083443cf2ca) for a lazy-loading technique that combines the best of both approaches by doing a "lazy import".
But there are reasons other than efficiency why you might prefer one over the other. Importing at the top makes it much clearer to someone reading the code what dependencies the module has. The two approaches also have very different failure characteristics -- the top-of-module import fails at load time if there's no "datetime" module, while the in-function import won't fail until the method is called.
**Added Note:** In IronPython, imports can be quite a bit more expensive than in CPython because the code is basically being compiled as it's being imported. |
Should Python import statements always be at the top of a module? | 128,478 | 202 | 2008-09-24T17:21:47Z | 128,532 | 21 | 2008-09-24T17:31:08Z | [
"python",
"optimization",
"coding-style"
] | [PEP 08](http://www.python.org/dev/peps/pep-0008/) states:
> Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed?
Isn't this:
```
class SomeClass(object):
    def not_often_called(self):
from datetime import datetime
self.datetime = datetime.now()
```
more efficient than this?
```
from datetime import datetime
class SomeClass(object):
    def not_often_called(self):
self.datetime = datetime.now()
Most of the time putting imports at the top is useful for clarity and sensible to do, but it's not always the case. Below are a couple of examples of circumstances where module imports might live elsewhere.
Firstly, you could have a module with a unit test of the form:
```
if __name__ == '__main__':
import foo
aa = foo.xyz() # initiate something for the test
```
Secondly, you might have a requirement to conditionally import some different module at runtime.
```
if [condition]:
import foo as plugin_api
else:
import bar as plugin_api
xx = plugin_api.Plugin()
[...]
```
There are probably other situations where you might place imports in other parts in the code. |
Should Python import statements always be at the top of a module? | 128,478 | 202 | 2008-09-24T17:21:47Z | 128,577 | 141 | 2008-09-24T17:38:00Z | [
"python",
"optimization",
"coding-style"
] | [PEP 08](http://www.python.org/dev/peps/pep-0008/) states:
> Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed?
Isn't this:
```
class SomeClass(object):
    def not_often_called(self):
from datetime import datetime
self.datetime = datetime.now()
```
more efficient than this?
```
from datetime import datetime
class SomeClass(object):
    def not_often_called(self):
self.datetime = datetime.now()
``` | Module importing is quite fast, but not instant. This means that:
* Putting the imports at the top of the module is fine, because it's a trivial cost that's only paid once.
* Putting the imports within a function will cause calls to that function to take longer.
So if you care about efficiency, put the imports at the top. Only move them into a function if your profiling shows that would help (you **did** profile to see where best to improve performance, right??)
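If you do want numbers for your own code, here is a rough `timeit` sketch (absolute timings vary by machine; only the comparison matters):

```python
import timeit
from datetime import datetime  # paid once, at module load

def top_import():
    return datetime

def inner_import():
    from datetime import datetime  # name lookup repeated on every call
    return datetime

t_top = timeit.timeit(top_import, number=100000)
t_inner = timeit.timeit(inner_import, number=100000)
print(t_top, t_inner)  # inner_import is typically the slower of the two
```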
---
The best reasons I've seen to perform lazy imports are:
* Optional library support. If your code has multiple paths that use different libraries, don't break if an optional library is not installed.
* In the `__init__.py` of a plugin, which might be imported but not actually used. Examples are Bazaar plugins, which use `bzrlib`'s lazy-loading framework. |
Should Python import statements always be at the top of a module? | 128,478 | 202 | 2008-09-24T17:21:47Z | 128,859 | 31 | 2008-09-24T18:16:13Z | [
"python",
"optimization",
"coding-style"
] | [PEP 08](http://www.python.org/dev/peps/pep-0008/) states:
> Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed?
Isn't this:
```
class SomeClass(object):
    def not_often_called(self):
from datetime import datetime
self.datetime = datetime.now()
```
more efficient than this?
```
from datetime import datetime
class SomeClass(object):
    def not_often_called(self):
self.datetime = datetime.now()
``` | I have adopted the practice of putting all imports in the functions that use them, rather than at the top of the module.
The benefit I get is the ability to refactor more reliably. When I move a function from one module to another, I know that the function will continue to work with all of its legacy of testing intact. If I have my imports at the top of the module, when I move a function, I find that I end up spending a lot of time getting the new module's imports complete and minimal. A refactoring IDE might make this irrelevant.
There is a speed penalty as mentioned elsewhere. I have measured this in my application and found it to be insignificant for my purposes.
It is also nice to be able to see all module dependencies up front without resorting to search (e.g. grep). However, the reason I care about module dependencies is generally because I'm installing, refactoring, or moving an entire system comprising multiple files, not just a single module. In that case, I'm going to perform a global search anyway to make sure I have the system-level dependencies. So I have not found global imports to aid my understanding of a system in practice.
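As a sketch of that refactoring style — the function below carries its own import, so it can be moved to another module without touching that module's header (the function name is made up for illustration):

```python
def unix_timestamp():
    # the dependency travels with the function when it is moved between modules
    import time
    return time.time()

print(unix_timestamp())
```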
I usually put the import of `sys` inside the `if __name__=='__main__'` check and then pass arguments (like `sys.argv[1:]`) to a `main()` function. This allows me to use `main` in a context where `sys` has not been imported. |
Should Python import statements always be at the top of a module? | 128,478 | 202 | 2008-09-24T17:21:47Z | 129,810 | 46 | 2008-09-24T20:36:46Z | [
"python",
"optimization",
"coding-style"
] | [PEP 08](http://www.python.org/dev/peps/pep-0008/) states:
> Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed?
Isn't this:
```
class SomeClass(object):
    def not_often_called(self):
from datetime import datetime
self.datetime = datetime.now()
```
more efficient than this?
```
from datetime import datetime
class SomeClass(object):
    def not_often_called(self):
self.datetime = datetime.now()
``` | Putting the import statement inside of a function can prevent circular dependencies. |
Using property() on classmethods | 128,573 | 104 | 2008-09-24T17:37:11Z | 129,819 | 12 | 2008-09-24T20:38:47Z | [
"python",
"oop"
] | I have a class with two class methods (using the classmethod() function) for getting and setting what is essentially a static variable. I tried to use the property() function with these, but it results in an error. I was able to reproduce the error with the following in the interpreter:
```
>>> class foo(object):
... _var=5
... def getvar(cls):
... return cls._var
... getvar=classmethod(getvar)
... def setvar(cls,value):
... cls._var=value
... setvar=classmethod(setvar)
... var=property(getvar,setvar)
...
>>> f = foo()
>>> f.getvar()
5
>>> f.setvar(4)
>>> f.getvar()
4
>>> f.var
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
>>> f.var=5
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
```
Is it possible to use the property() function with classmethod decorated functions? | There is no reasonable way to make this "class property" system work in Python.
Here is one unreasonable way to make it work. You can certainly make it more seamless with increasing amounts of metaclass magic.
```
class ClassProperty(object):
def __init__(self, getter, setter):
self.getter = getter
self.setter = setter
def __get__(self, cls, owner):
return getattr(cls, self.getter)()
def __set__(self, cls, value):
getattr(cls, self.setter)(value)
class MetaFoo(type):
var = ClassProperty('getvar', 'setvar')
class Foo(object):
__metaclass__ = MetaFoo
_var = 5
@classmethod
def getvar(cls):
print "Getting var =", cls._var
return cls._var
@classmethod
def setvar(cls, value):
print "Setting var =", value
cls._var = value
x = Foo.var
print "Foo.var = ", x
Foo.var = 42
x = Foo.var
print "Foo.var = ", x
```
The knot of the issue is that properties are what Python calls "descriptors". There is no short and easy way to explain how this sort of metaprogramming works, so I must point you to the [descriptor howto](http://users.rcn.com/python/download/Descriptor.htm).
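For orientation, the smallest useful descriptor looks like this (a toy sketch, separate from the ClassProperty code above):

```python
class Ten(object):
    # any object defining __get__ is a descriptor; attribute lookup calls it
    def __get__(self, obj, objtype=None):
        return 10

class A(object):
    x = Ten()

print(A().x)  # -> 10
print(A.x)    # -> 10 (class access also routes through __get__)
```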
You only ever need to understand this sort of thing if you are implementing a fairly advanced framework, like a transparent object-persistence or RPC system, or a kind of domain-specific language.
However, in a comment to a previous answer, you say that you
> need to modify an attribute in such a way that it is seen by all instances of a class, and the scope from which these class methods are called does not have references to all instances of the class.
It seems to me, what you really want is an [Observer](http://en.wikipedia.org/wiki/Observer_pattern) design pattern. |
Using property() on classmethods | 128,573 | 104 | 2008-09-24T17:37:11Z | 1,383,402 | 64 | 2009-09-05T14:12:14Z | [
"python",
"oop"
] | I have a class with two class methods (using the classmethod() function) for getting and setting what is essentially a static variable. I tried to use the property() function with these, but it results in an error. I was able to reproduce the error with the following in the interpreter:
```
>>> class foo(object):
... _var=5
... def getvar(cls):
... return cls._var
... getvar=classmethod(getvar)
... def setvar(cls,value):
... cls._var=value
... setvar=classmethod(setvar)
... var=property(getvar,setvar)
...
>>> f = foo()
>>> f.getvar()
5
>>> f.setvar(4)
>>> f.getvar()
4
>>> f.var
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
>>> f.var=5
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
```
Is it possible to use the property() function with classmethod decorated functions? | Reading the [Python 2.2 release](http://www.python.org/download/releases/2.2/descrintro/#property) notes, I find the following.
> The get method [of a property] won't be called when
> the property is accessed as a class
> attribute (C.x) instead of as an
> instance attribute (C().x). If you
> want to override the \_\_get\_\_ operation
> for properties when used as a class
> attribute, you can subclass property -
> it is a new-style type itself - to
> extend its \_\_get\_\_ method, or you can
> define a descriptor type from scratch
> by creating a new-style class that
> defines \_\_get\_\_, \_\_set\_\_ and
> \_\_delete\_\_ methods.
**NOTE: The below method doesn't actually work for setters, only getters.**
Therefore, I believe the prescribed solution is to create a ClassProperty as a subclass of property.
```
class ClassProperty(property):
def __get__(self, cls, owner):
return self.fget.__get__(None, owner)()
class foo(object):
_var=5
def getvar(cls):
return cls._var
getvar=classmethod(getvar)
def setvar(cls,value):
cls._var=value
setvar=classmethod(setvar)
var=ClassProperty(getvar,setvar)
assert foo.getvar() == 5
foo.setvar(4)
assert foo.getvar() == 4
assert foo.var == 4
foo.var = 3
assert foo.var == 3
```
However, the setters don't actually work:
```
foo.var = 4
assert foo.var == foo._var # raises AssertionError
```
`foo._var` is unchanged; you've simply overwritten the property with a new value.
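You can confirm that by inspecting the class dictionary before and after the assignment (a quick sketch reusing the getter-only `ClassProperty` from above):

```python
class ClassProperty(property):
    def __get__(self, cls, owner):
        return self.fget.__get__(None, owner)()

class foo(object):
    _var = 5
    def getvar(cls):
        return cls._var
    getvar = classmethod(getvar)
    var = ClassProperty(getvar)

print(type(foo.__dict__['var']))  # the ClassProperty descriptor
foo.var = 4                       # rebinds the class attribute...
print(type(foo.__dict__['var']))  # ...to a plain int
print(foo._var)                   # still 5
```

The assignment goes through the metaclass (`type`), which has no `var` descriptor, so it simply clobbers the class attribute instead of invoking the setter.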
You can also use `ClassProperty` as a decorator:
```
class Foo(object):
_var = 5
@ClassProperty
@classmethod
def var(cls):
return cls._var
@var.setter
@classmethod
def var(cls, value):
cls._var = value
assert Foo.var == 5
``` |
Using property() on classmethods | 128,573 | 104 | 2008-09-24T17:37:11Z | 1,800,999 | 48 | 2009-11-26T00:58:17Z | [
"python",
"oop"
] | I have a class with two class methods (using the classmethod() function) for getting and setting what is essentially a static variable. I tried to use the property() function with these, but it results in an error. I was able to reproduce the error with the following in the interpreter:
```
>>> class foo(object):
... _var=5
... def getvar(cls):
... return cls._var
... getvar=classmethod(getvar)
... def setvar(cls,value):
... cls._var=value
... setvar=classmethod(setvar)
... var=property(getvar,setvar)
...
>>> f=foo()
>>> f.getvar()
5
>>> f.setvar(4)
>>> f.getvar()
4
>>> f.var
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
>>> f.var=5
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
```
Is it possible to use the property() function with classmethod decorated functions? | A property is created on a class but affects an instance. So if you want a classmethod property, create the property on the metaclass.
```
>>> class foo(object):
_var = 5
class __metaclass__(type):
pass
@classmethod
def getvar(cls):
return cls._var
@classmethod
def setvar(cls, value):
cls._var = value
>>> foo.__metaclass__.var = property(foo.getvar.im_func, foo.setvar.im_func)
>>> foo.var
5
>>> foo.var = 3
>>> foo.var
3
```
But since you're using a metaclass anyway, it will read better if you just move the classmethods in there.
```
>>> class foo(object):
_var = 5
class __metaclass__(type):
@property
def var(cls):
return cls._var
@var.setter
def var(cls, value):
cls._var = value
>>> foo.var
5
>>> foo.var = 3
>>> foo.var
3
``` |
Using property() on classmethods | 128,573 | 104 | 2008-09-24T17:37:11Z | 13,624,858 | 20 | 2012-11-29T11:32:12Z | [
"python",
"oop"
] | I have a class with two class methods (using the classmethod() function) for getting and setting what is essentially a static variable. I tried to use the property() function with these, but it results in an error. I was able to reproduce the error with the following in the interpreter:
```
>>> class foo(object):
... _var=5
... def getvar(cls):
... return cls._var
... getvar=classmethod(getvar)
... def setvar(cls,value):
... cls._var=value
... setvar=classmethod(setvar)
... var=property(getvar,setvar)
...
>>> f=foo()
>>> f.getvar()
5
>>> f.setvar(4)
>>> f.getvar()
4
>>> f.var
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
>>> f.var=5
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
```
Is it possible to use the property() function with classmethod decorated functions? | I hope this dead-simple read-only `@classproperty` decorator would help somebody looking for classproperties.
```
class classproperty(object):
def __init__(self, fget):
self.fget = fget
def __get__(self, owner_self, owner_cls):
return self.fget(owner_cls)
class C(object):
@classproperty
def x(cls):
return 1
assert C.x == 1
assert C().x == 1
``` |
Generic Exception Handling in Python the "Right Way" | 129,144 | 21 | 2008-09-24T19:00:47Z | 129,177 | 57 | 2008-09-24T19:05:58Z | [
"python",
"exception",
"readability"
] | Sometimes I find myself in the situation where I want to execute several sequential commands like such:
```
try:
foo(a, b)
except Exception, e:
baz(e)
try:
bar(c, d)
except Exception, e:
baz(e)
...
```
This same pattern occurs when exceptions simply need to be ignored.
This feels redundant and the excessive syntax causes it to be surprisingly difficult to follow when reading code.
In C, I would have solved this type of problem easily with a macro, but unfortunately, this cannot be done in straight python.
Question: How can I best reduce the code footprint and increase code readability when coming across this pattern? | You could use the [`with` statement](http://docs.python.org/whatsnew/pep-343.html) if you have python 2.5
```
from __future__ import with_statement
import contextlib
@contextlib.contextmanager
def handler():
try:
yield
except Exception, e:
baz(e)
```
Your example now becomes:
```
with handler():
foo(a, b)
with handler():
bar(c, d)
``` |
Generic Exception Handling in Python the "Right Way" | 129,144 | 21 | 2008-09-24T19:00:47Z | 133,336 | 9 | 2008-09-25T13:31:32Z | [
"python",
"exception",
"readability"
] | Sometimes I find myself in the situation where I want to execute several sequential commands like such:
```
try:
foo(a, b)
except Exception, e:
baz(e)
try:
bar(c, d)
except Exception, e:
baz(e)
...
```
This same pattern occurs when exceptions simply need to be ignored.
This feels redundant and the excessive syntax causes it to be surprisingly difficult to follow when reading code.
In C, I would have solved this type of problem easily with a macro, but unfortunately, this cannot be done in straight python.
Question: How can I best reduce the code footprint and increase code readability when coming across this pattern? | If this is always, *always* the behaviour you want when a particular function raises an exception, you could use a decorator:
```
def handle_exception(handler):
def decorate(func):
def call_function(*args, **kwargs):
try:
                return func(*args, **kwargs)
except Exception, e:
handler(e)
return call_function
return decorate
def baz(e):
print(e)
@handle_exception(baz)
def foo(a, b):
return a + b
@handle_exception(baz)
def bar(c, d):
return c.index(d)
```
Usage:
```
>>> foo(1, '2')
unsupported operand type(s) for +: 'int' and 'str'
>>> bar('steve', 'cheese')
substring not found
``` |
How do you test that a Python function throws an exception? | 129,507 | 285 | 2008-09-24T20:00:35Z | 129,522 | 253 | 2008-09-24T20:02:29Z | [
"python",
"unit-testing",
"exception",
"exception-handling"
] | How does one write a unittest that fails only if a function doesn't throw an expected exception? | Use [`TestCase.assertRaises`](http://docs.python.org/library/unittest.html#unittest.TestCase.assertRaises) (or `TestCase.failUnlessRaises`) from the unittest module, for example:
```
import mymod
class MyTestCase(unittest.TestCase):
def test1(self):
self.assertRaises(SomeCoolException, mymod.myfunc)
``` |
How do you test that a Python function throws an exception? | 129,507 | 285 | 2008-09-24T20:00:35Z | 129,528 | 19 | 2008-09-24T20:03:24Z | [
"python",
"unit-testing",
"exception",
"exception-handling"
] | How does one write a unittest that fails only if a function doesn't throw an expected exception? | Your code should follow this pattern (this is a unittest module style test):
```
def test_afunction_throws_exception(self):
try:
afunction()
except ExpectedException:
pass
except Exception as e:
self.fail('Unexpected exception raised:', e)
else:
self.fail('ExpectedException not raised')
```
On Python < 2.7 this construct is useful for checking for specific values in the expected exception. The unittest function `assertRaises` only checks if an exception was raised. |
How do you test that a Python function throws an exception? | 129,507 | 285 | 2008-09-24T20:00:35Z | 129,610 | 120 | 2008-09-24T20:13:08Z | [
"python",
"unit-testing",
"exception",
"exception-handling"
] | How does one write a unittest that fails only if a function doesn't throw an expected exception? | The code in my previous answer can be simplified to:
```
def test_afunction_throws_exception(self):
self.assertRaises(ExpectedException, afunction)
```
And if afunction takes arguments, just pass them into assertRaises like this:
```
def test_afunction_throws_exception(self):
self.assertRaises(ExpectedException, afunction, arg1, arg2)
``` |
How do you test that a Python function throws an exception? | 129,507 | 285 | 2008-09-24T20:00:35Z | 132,682 | 7 | 2008-09-25T11:17:18Z | [
"python",
"unit-testing",
"exception",
"exception-handling"
] | How does one write a unittest that fails only if a function doesn't throw an expected exception? | I use **doctest**[1] almost everywhere because I like the fact that I document and test my functions at the same time.
Have a look at this code:
```
def throw_up(something, gowrong=False):
"""
>>> throw_up('Fish n Chips')
Traceback (most recent call last):
...
Exception: Fish n Chips
>>> throw_up('Fish n Chips', gowrong=True)
'I feel fine!'
"""
if gowrong:
return "I feel fine!"
raise Exception(something)
if __name__ == '__main__':
import doctest
doctest.testmod()
```
If you put this example in a module and run it from the command line both test cases are evaluated and checked.
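The embedded examples can also be run programmatically rather than from the command line (a small sketch; the `halve` function here is made up for illustration):

```python
import doctest

def halve(n):
    """
    >>> halve(8)
    4
    >>> halve(None)
    Traceback (most recent call last):
    ...
    TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'
    """
    return n // 2

# testmod() scans this module's docstrings and runs every example,
# including the expected-Traceback one.
results = doctest.testmod()
print(results)  # shows how many examples ran and how many failed
```

Note that doctest only needs the `Traceback (most recent call last):` header, a `...` placeholder for the stack, and the final exception line to match.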
[1] [Python documentation: 23.2 doctest -- Test interactive Python examples](http://docs.python.org/lib/module-doctest.html) |
How do you test that a Python function throws an exception? | 129,507 | 285 | 2008-09-24T20:00:35Z | 3,166,985 | 129 | 2010-07-02T15:16:00Z | [
"python",
"unit-testing",
"exception",
"exception-handling"
] | How does one write a unittest that fails only if a function doesn't throw an expected exception? | Since Python 2.7 you can use context manager to get a hold of the actual Exception object thrown:
```
import unittest
def broken_function():
raise Exception('This is broken')
class MyTestCase(unittest.TestCase):
def test(self):
with self.assertRaises(Exception) as context:
broken_function()
self.assertTrue('This is broken' in context.exception)
if __name__ == '__main__':
unittest.main()
```
<http://docs.python.org/dev/library/unittest.html#unittest.TestCase.assertRaises>
---
In **Python 3.5**, you have to wrap `context.exception` in `str`, otherwise you'll get a `TypeError`
```
self.assertTrue('This is broken' in str(context.exception))
``` |
How do you test that a Python function throws an exception? | 129,507 | 285 | 2008-09-24T20:00:35Z | 28,223,420 | 48 | 2015-01-29T19:54:26Z | [
"python",
"unit-testing",
"exception",
"exception-handling"
] | How does one write a unittest that fails only if a function doesn't throw an expected exception? | > **How do you test that a Python function throws an exception?**
>
> How does one write a test that fails only if a function doesn't throw
> an expected exception?
# Short Answer:
Use the `self.assertRaises` method as a context manager:
```
def test_1_cannot_add_int_and_str(self):
with self.assertRaises(TypeError):
1 + '1'
```
# Demonstration
The best practice approach is fairly easy to demonstrate in a Python shell.
**The `unittest` library**
In Python 2.7 or 3:
```
import unittest
```
In Python 2.6, you can install a backport of 2.7's `unittest` library, called [unittest2](https://pypi.python.org/pypi/unittest2), and just alias that as `unittest`:
```
import unittest2 as unittest
```
# Example tests
Now, paste into your Python shell the following test of Python's type-safety:
```
class MyTestCase(unittest.TestCase):
def test_1_cannot_add_int_and_str(self):
with self.assertRaises(TypeError):
1 + '1'
def test_2_cannot_add_int_and_str(self):
import operator
self.assertRaises(TypeError, operator.add, 1, '1')
```
Test one uses `assertRaises` as a context manager, which ensures that the error is properly caught and cleaned up, while still being recorded.
We could also write it *without* the context manager, see test two. The first argument would be the error type you expect to raise, the second argument, the function you are testing, and the remaining args and keyword args will be passed to that function.
I think it's far simpler, more readable, and more maintainable to just use the context manager.
# Running the tests
To run the tests:
```
unittest.main(exit=False)
```
In Python 2.6, you'll probably [need the following](http://stackoverflow.com/a/21262077/541136):
```
unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(MyTestCase))
```
And your terminal should output the following:
```
..
----------------------------------------------------------------------
Ran 2 tests in 0.007s
OK
<unittest2.runner.TextTestResult run=2 errors=0 failures=0>
```
And we see that as we expect, attempting to add a `1` and a `'1'` result in a `TypeError`.
---
For more verbose output, try this:
```
unittest.TextTestRunner(verbosity=2).run(unittest.TestLoader().loadTestsFromTestCase(MyTestCase))
``` |
Is there an inverse function for time.gmtime() that parses a UTC tuple to seconds since the epoch? | 130,074 | 17 | 2008-09-24T21:24:04Z | 161,385 | 27 | 2008-10-02T08:42:45Z | [
"python"
] | python's time module seems a little haphazard. For example, here is a list of methods in there, from the docstring:
```
time() -- return current time in seconds since the Epoch as a float
clock() -- return CPU time since process start as a float
sleep() -- delay for a number of seconds given as a float
gmtime() -- convert seconds since Epoch to UTC tuple
localtime() -- convert seconds since Epoch to local time tuple
asctime() -- convert time tuple to string
ctime() -- convert time in seconds to string
mktime() -- convert local time tuple to seconds since Epoch
strftime() -- convert time tuple to string according to format specification
strptime() -- parse string to time tuple according to format specification
tzset() -- change the local timezone
```
Looking at localtime() and its inverse mktime(), why is there no inverse for gmtime() ?
Bonus questions: what would you name the method ? How would you implement it ? | There is actually an inverse function, but for some bizarre reason, it's in the [calendar](https://docs.python.org/2/library/calendar.html) module: calendar.timegm(). I listed the functions in this [answer](http://stackoverflow.com/questions/79797/how-do-i-convert-local-time-to-utc-in-python#79913). |
How do I efficiently filter computed values within a Python list comprehension? | 130,262 | 16 | 2008-09-24T22:08:57Z | 130,276 | 17 | 2008-09-24T22:12:24Z | [
"python",
"list-comprehension"
] | The Python list comprehension syntax makes it easy to filter values within a comprehension. For example:
```
result = [x**2 for x in mylist if type(x) is int]
```
Will return a list of the squares of integers in mylist. However, what if the test involves some (costly) computation and you want to filter on the result? One option is:
```
result = [expensive(x) for x in mylist if expensive(x)]
```
This will result in a list of non-"false" expensive(x) values, however expensive() is called twice for each x. Is there a comprehension syntax that allows you to do this test while only calling expensive once per x? | Came up with my own answer after a minute of thought. It can be done with nested comprehensions:
```
result = [y for y in (expensive(x) for x in mylist) if y]
```
I guess that works, though I find nested comprehensions are only marginally readable |
How do I efficiently filter computed values within a Python list comprehension? | 130,262 | 16 | 2008-09-24T22:08:57Z | 130,309 | 17 | 2008-09-24T22:23:50Z | [
"python",
"list-comprehension"
] | The Python list comprehension syntax makes it easy to filter values within a comprehension. For example:
```
result = [x**2 for x in mylist if type(x) is int]
```
Will return a list of the squares of integers in mylist. However, what if the test involves some (costly) computation and you want to filter on the result? One option is:
```
result = [expensive(x) for x in mylist if expensive(x)]
```
This will result in a list of non-"false" expensive(x) values, however expensive() is called twice for each x. Is there a comprehension syntax that allows you to do this test while only calling expensive once per x? | If the calculations are already nicely bundled into functions, how about using `filter` and `map`?
```
result = filter (None, map (expensive, mylist))
```
You can use `itertools.imap` if the list is very large. |
How do I efficiently filter computed values within a Python list comprehension? | 130,262 | 16 | 2008-09-24T22:08:57Z | 130,312 | 7 | 2008-09-24T22:24:42Z | [
"python",
"list-comprehension"
] | The Python list comprehension syntax makes it easy to filter values within a comprehension. For example:
```
result = [x**2 for x in mylist if type(x) is int]
```
Will return a list of the squares of integers in mylist. However, what if the test involves some (costly) computation and you want to filter on the result? One option is:
```
result = [expensive(x) for x in mylist if expensive(x)]
```
This will result in a list of non-"false" expensive(x) values, however expensive() is called twice for each x. Is there a comprehension syntax that allows you to do this test while only calling expensive once per x? | The most obvious (and I would argue most readable) answer is to not use a list comprehension or generator expression, but rather a real generator:
```
def gen_expensive(mylist):
for item in mylist:
result = expensive(item)
if result:
yield result
```
It takes more horizontal space, but it's much easier to see what it does at a glance, and you end up not repeating yourself. |
Python Date Comparisons | 130,618 | 57 | 2008-09-24T23:34:31Z | 130,665 | 102 | 2008-09-24T23:44:27Z | [
"python",
"datetime"
] | I would like to find out if a particular python datetime object is older than X hours or minutes. I am trying to do something similar to:
```
if (datetime.now() - self.timestamp) > 100
# Where 100 is either seconds or minutes
```
This generates a type error.
What is the proper way to do date time comparison in python? I already looked at [WorkingWithTime](http://wiki.python.org/moin/WorkingWithTime) which is close but not exactly what I want. I assume I just want the datetime object represented in seconds so that I can do a normal int comparison.
Please post lists of datetime best practices. | Use the `datetime.timedelta` class:
```
>>> from datetime import datetime, timedelta
>>> then = datetime.now() - timedelta(hours = 2)
>>> now = datetime.now()
>>> (now - then) > timedelta(days = 1)
False
>>> (now - then) > timedelta(hours = 1)
True
```
Your example could be written as:
```
if (datetime.now() - self.timestamp) > timedelta(seconds = 100):
```
or
```
if (datetime.now() - self.timestamp) > timedelta(minutes = 100):
``` |
Request UAC elevation from within a Python script? | 130,763 | 49 | 2008-09-25T00:22:26Z | 131,092 | 28 | 2008-09-25T02:01:34Z | [
"python",
"windows",
"windows-vista",
"uac"
] | I want my Python script to copy files on Vista. When I run it from a normal `cmd.exe` window, no errors are generated, yet the files are NOT copied. If I run `cmd.exe` "as administator" and then run my script, it works fine.
This makes sense since User Account Control (UAC) normally prevents many file system actions.
Is there a way I can, from within a Python script, invoke a UAC elevation request (those dialogs that say something like "such and such app needs admin access, is this OK?")
If that's not possible, is there a way my script can at least detect that it is not elevated so it can fail gracefully? | It seems there's no way to elevate the application privileges for a while for you to perform a particular task. Windows needs to know at the start of the program whether the application requires certain privileges, and will ask the user to confirm when the application performs any tasks that *need* those privileges. There are two ways to do this:
1. Write a manifest file that tells Windows the application might require some privileges
2. Run the application with elevated privileges from inside another program
These [two](http://www.codeproject.com/KB/vista-security/UAC__The_Definitive_Guide.aspx) [articles](http://msdn.microsoft.com/en-gb/magazine/cc163486.aspx) explain in much more detail how this works.
What I'd do, if you don't want to write a nasty ctypes wrapper for the CreateElevatedProcess API, is use the ShellExecuteEx trick explained in the Code Project article (Pywin32 comes with a wrapper for ShellExecute). How? Something like this:
When your program starts, it checks if it has Administrator privileges, if it doesn't it runs itself using the ShellExecute trick and exits immediately, if it does, it performs the task at hand.
As you describe your program as a "script", I suppose that's enough for your needs.
Cheers. |
Request UAC elevation from within a Python script? | 130,763 | 49 | 2008-09-25T00:22:26Z | 11,746,382 | 49 | 2012-07-31T18:09:35Z | [
"python",
"windows",
"windows-vista",
"uac"
] | I want my Python script to copy files on Vista. When I run it from a normal `cmd.exe` window, no errors are generated, yet the files are NOT copied. If I run `cmd.exe` "as administator" and then run my script, it works fine.
This makes sense since User Account Control (UAC) normally prevents many file system actions.
Is there a way I can, from within a Python script, invoke a UAC elevation request (those dialogs that say something like "such and such app needs admin access, is this OK?")
If that's not possible, is there a way my script can at least detect that it is not elevated so it can fail gracefully? | It took me a little while to get dguaraglia's answer working, so in the interest of saving others time, here's what I did to implement this idea:
```
import os
import sys
import win32com.shell.shell as shell
ASADMIN = 'asadmin'
if sys.argv[-1] != ASADMIN:
script = os.path.abspath(sys.argv[0])
params = ' '.join([script] + sys.argv[1:] + [ASADMIN])
shell.ShellExecuteEx(lpVerb='runas', lpFile=sys.executable, lpParameters=params)
sys.exit(0)
``` |
Do I have to cause an ValueError in Python | 131,449 | 3 | 2008-09-25T04:14:34Z | 131,452 | 7 | 2008-09-25T04:16:25Z | [
"python",
"exception",
"list"
] | I have this code:
```
chars = #some list
try:
indx = chars.index(chars)
except ValueError:
#doSomething
else:
#doSomethingElse
```
I want to be able to do this because I don't like knowfully causing Exceptions:
```
chars = #some list
indx = chars.index(chars)
if indx == -1:
#doSomething
else:
#doSomethingElse
```
Is there a way I can do this? | ```
if element in mylist:
index = mylist.index(element)
# ... do something
else:
# ... do something else
``` |
Do I have to cause an ValueError in Python | 131,449 | 3 | 2008-09-25T04:14:34Z | 131,522 | 9 | 2008-09-25T04:47:35Z | [
"python",
"exception",
"list"
] | I have this code:
```
chars = #some list
try:
indx = chars.index(chars)
except ValueError:
#doSomething
else:
#doSomethingElse
```
I want to be able to do this because I don't like knowfully causing Exceptions:
```
chars = #some list
indx = chars.index(chars)
if indx == -1:
#doSomething
else:
#doSomethingElse
```
Is there a way I can do this? | Note that the latter approach is going against the generally accepted "pythonic" philosophy of [EAFP, or "It is Easier to Ask for Forgiveness than Permission."](http://en.wikipedia.org/wiki/Python_syntax_and_semantics#Exceptions), while the former follows it. |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 132,108 | 10 | 2008-09-25T08:24:24Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | *python -dv yourscript.py*
That will make the interpreter run in debug mode and give you a trace of what it is doing.
If you want to interactively debug the code you should run it like this:
*python -m pdb yourscript.py*
That tells the Python interpreter to run your script with the module "pdb", which is the Python debugger. If you run it like that, the interpreter will execute in interactive mode, much like GDB |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 132,114 | 24 | 2008-09-25T08:27:16Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | The [**traceback**](http://www.python.org/doc/2.5.2/lib/module-traceback.html) module has some nice functions, among them: print\_stack:
```
import traceback
traceback.print_stack()
``` |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 132,123 | 31 | 2008-09-25T08:29:32Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | ```
>>> import traceback
>>> def x():
...     print traceback.extract_stack()
...
>>> x()
[('<stdin>', 1, '<module>', None), ('<stdin>', 2, 'x', None)]
```
You can also nicely format the stack trace, see the [docs](http://docs.python.org/lib/module-traceback.html).
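For instance, `traceback.format_stack()` returns the same information as a list of ready-to-print strings, which is handy for logging instead of printing directly (a small sketch):

```python
import traceback

def snapshot():
    # format_stack() returns the current call stack as a list of
    # ready-to-print strings, innermost frame last.
    return traceback.format_stack()

def outer():
    return snapshot()

stack = outer()
print(''.join(stack[-2:]))  # show just the two innermost frames
```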
**Edit**: To simulate Java's behavior, as suggested by @[Douglas Leeder](#132260), add this:
```
import signal
import traceback
signal.signal(signal.SIGUSR1, lambda sig, stack: traceback.print_stack(stack))
```
to the startup code in your application. Then you can print the stack by sending `SIGUSR1` to the running Python process. |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 133,384 | 251 | 2008-09-25T13:38:45Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | I have module I use for situations like this - where a process will be running for a long time but gets stuck sometimes for unknown and irreproducible reasons. Its a bit hacky, and only works on unix (requires signals):
```
import code, traceback, signal
def debug(sig, frame):
"""Interrupt running process, and provide a python prompt for
interactive debugging."""
d={'_frame':frame} # Allow access to frame object.
d.update(frame.f_globals) # Unless shadowed by global
d.update(frame.f_locals)
i = code.InteractiveConsole(d)
message = "Signal received : entering python shell.\nTraceback:\n"
message += ''.join(traceback.format_stack(frame))
i.interact(message)
def listen():
signal.signal(signal.SIGUSR1, debug) # Register handler
```
To use, just call the listen() function at some point when your program starts up (You could even stick it in site.py to have all python programs use it), and let it run. At any point, send the process a SIGUSR1 signal, using kill, or in python:
```
os.kill(pid, signal.SIGUSR1)
```
This will cause the program to break to a python console at the point it is currently at, showing you the stack trace, and letting you manipulate the variables. Use control-d (EOF) to continue running (though note that you will probably interrupt any I/O etc at the point you signal, so it isn't fully non-intrusive).
I've another script that does the same thing, except it communicates with the running process through a pipe (to allow for debugging backgrounded processes etc). Its a bit large to post here, but I've added it as a [python cookbook recipe](http://code.activestate.com/recipes/576515/). |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 147,114 | 118 | 2008-09-29T00:44:13Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | The suggestion to install a signal handler is a good one, and I use it a lot. For example, [bzr](http://bazaar-vcs.org/) by default installs a SIGQUIT handler that invokes `pdb.set_trace()` to immediately drop you into a [pdb](http://docs.python.org/lib/module-pdb.html) prompt. (See the [bzrlib.breakin](https://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/view/head:/bzrlib/breakin.py) module's source for the exact details.) With pdb you can not only get the current stack trace but also inspect variables, etc.
However, sometimes I need to debug a process that I didn't have the foresight to install the signal handler in. On linux, you can attach gdb to the process and get a python stack trace with some gdb macros. Put <http://svn.python.org/projects/python/trunk/Misc/gdbinit> in `~/.gdbinit`, then:
* Attach gdb: `gdb -p` *`PID`*
* Get the python stack trace: `pystack`
It's not totally reliable unfortunately, but it works most of the time.
Finally, attaching `strace` can often give you a good idea what a process is doing. |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 618,748 | 17 | 2009-03-06T12:49:11Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal the Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | What really helped me here is [spiv's tip](http://stackoverflow.com/questions/132058/getting-stack-trace-from-a-running-python-application/147114#147114) (which I would vote up and comment on if I had the reputation points) for getting a stack trace out of an *unprepared* Python process. Except it didn't work until I [modified the gdbinit script](http://lists.osafoundation.org/pipermail/chandler-dev/2007-January/007519.html). So:
* download <http://svn.python.org/projects/python/trunk/Misc/gdbinit> and put it in `~/.gdbinit`
* ~~edit it, changing `PyEval_EvalFrame` to `PyEval_EvalFrameEx`~~ [edit: no longer needed; the linked file already has this change as of 2010-01-14]
* Attach gdb: `gdb -p PID`
* Get the python stack trace: `pystack` |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 2,569,696 | 56 | 2010-04-02T23:23:47Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal the Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | I am almost always dealing with multiple threads and main thread is generally not doing much, so what is most interesting is to dump all the stacks (which is more like the Java's dump). Here is an implementation based on [this blog](http://bzimmer.ziclix.com/2008/12/17/python-thread-dumps/):
```
import threading, sys, traceback
def dumpstacks(signal, frame):
id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
code = []
for threadId, stack in sys._current_frames().items():
code.append("\n# Thread: %s(%d)" % (id2name.get(threadId,""), threadId))
for filename, lineno, name, line in traceback.extract_stack(stack):
code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
if line:
code.append(" %s" % (line.strip()))
print "\n".join(code)
import signal
signal.signal(signal.SIGQUIT, dumpstacks)
``` |
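For Python 3, where `print` is a function, the same idea can be packaged as a plain function (a sketch; `format_all_stacks` is my name for it, not part of any library):

```python
import sys
import threading
import traceback

def format_all_stacks():
    """Return the stack of every running thread as one string,
    in the style of a Java thread dump."""
    id2name = {th.ident: th.name for th in threading.enumerate()}
    lines = []
    for thread_id, stack in sys._current_frames().items():
        lines.append("\n# Thread: %s(%d)" % (id2name.get(thread_id, ""), thread_id))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            lines.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                lines.append("  %s" % line.strip())
    return "\n".join(lines)
```

The result can be printed from a signal handler exactly as in the Python 2 version.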
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 5,503,185 | 9 | 2011-03-31T16:30:51Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal the Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | I would add this as a comment to [haridsv's response](http://stackoverflow.com/questions/132058/getting-stack-trace-from-a-running-python-application/2569696#2569696), but I lack the reputation to do so:
Some of us are still stuck on a version of Python older than 2.6 (required for Thread.ident), so I got the code working in Python 2.5 (though without the thread name being displayed) as such:
```
import traceback
import sys
def dumpstacks(signal, frame):
code = []
for threadId, stack in sys._current_frames().items():
code.append("\n# Thread: %d" % (threadId))
for filename, lineno, name, line in traceback.extract_stack(stack):
code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
if line:
code.append(" %s" % (line.strip()))
print "\n".join(code)
import signal
signal.signal(signal.SIGQUIT, dumpstacks)
``` |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 9,019,164 | 7 | 2012-01-26T13:57:38Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal the Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | Take a look at the [`faulthandler`](http://docs.python.org/3.3/whatsnew/3.3.html#faulthandler) module, new in Python 3.3. A [`faulthandler` backport](https://pypi.python.org/pypi/faulthandler/) for use in Python 2 is available on PyPI. |
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 16,246,063 | 14 | 2013-04-26T22:18:58Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal the Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | You can try the [faulthandler module](http://docs.python.org/dev/library/faulthandler.html). Install it using `pip install faulthandler` and add:
```
import faulthandler, signal
faulthandler.register(signal.SIGUSR1)
```
at the beginning of your program. Then send SIGUSR1 to your process (ex: `kill -USR1 42`) to display the Python traceback of all threads to the standard output. [Read the documentation](http://docs.python.org/dev/library/faulthandler.html) for more options (ex: log into a file) and other ways to display the traceback.
The module is now part of Python 3.3. For Python 2, see <http://faulthandler.readthedocs.org/> |
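Besides registering a signal handler, `faulthandler.dump_traceback()` can write a dump on demand. Note that faulthandler writes through the file descriptor, so the target needs a real `fileno()` (hence the temporary file in this sketch):

```python
import faulthandler
import tempfile

# Dump all thread stacks once, on demand (no signal needed).
# An in-memory StringIO will not work here because faulthandler
# bypasses Python-level file objects and uses the fd directly.
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)
    f.seek(0)
    dump = f.read()
```

`dump` then holds the same traceback text the signal-based usage would print.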
Showing the stack trace from a running Python application | 132,058 | 250 | 2008-09-25T08:06:06Z | 29,881,630 | 16 | 2015-04-26T18:22:20Z | [
"python",
"debugging",
"stack-trace",
"traceback"
] | I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal the Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
***Related questions:***
* [Print current call stack from a method in Python code](http://stackoverflow.com/questions/1156023/print-current-call-stack-from-a-method-in-python-code)
* [Check what a running process is doing: print stack trace of an uninstrumented Python program](http://stackoverflow.com/questions/6849138/check-what-a-running-process-is-doing-print-stack-trace-of-an-uninstrumented-py) | Getting a stack trace of an *unprepared* Python program, running in a stock Python *without debugging symbols*, can be done with [pyrasite](http://pyrasite.readthedocs.org/). Worked like a charm for me on Ubuntu Trusty:
```
$ sudo pip install pyrasite
$ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
$ sudo pyrasite 16262 dump_stacks.py # dumps stacks to stdout/stderr of the python program
```
(Hat tip to @Albert, whose answer contained a pointer to this, among other tools.) |
Translate algorithmic C to Python | 132,411 | 8 | 2008-09-25T09:56:43Z | 132,459 | 12 | 2008-09-25T10:12:54Z | [
"java",
"python",
"c",
"sandbox",
"code-translation"
] | I would like to translate some C code to Python code or bytecode. The C code in question is what I'd call purely algorithmic: platform-independent, no I/O, just algorithms and in-memory data structures.
An example would be a regular expression library. Translation tool would process library source code and produce a functionally equivalent Python module that can be run in a **sandboxed** environment.
What specific approaches, tools and techniques can you recommend?
---
*Note: Python C extension or ctypes is **not an option** because the environment is sandboxed.*
*Another note*: looks like there is a [C-to-Java-bytecode compiler](http://www.axiomsol.com/), they even compiled libjpeg to Java. Is Java bytecode+VM too different from CPython bytecode+VM? | There is frankly no way to mechanically and meaningfully translate C to Python without suffering an insane performance penalty. As we all know Python isn't anywhere near C speed (with current compilers and interpreters) but worse than that is that what C is good at (bit-fiddling, integer math, tricks with blocks of memory) Python is very slow at, and what Python is good at you can't express in C directly. A direct translation would therefore be extra inefficient, to the point of absurdity.
The much, much better approach in general is indeed to keep the C the C, and wrap it in a Python extension module (using [SWIG](http://www.swig.org), [Pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/), [Cython](http://cython.org) or [writing a wrapper manually](http://docs.python.org/ext)) or call the C library directly using [ctypes](http://docs.python.org/lib/module-ctypes.html). All the benefits (and downsides) of C for what's already C or you add later, and all the convenience (and downsides) of Python for any code in Python.
That won't satisfy your 'sandboxing' needs, but you should realize that you cannot sandbox Python particularly well anyway; it takes a lot of effort and modification of CPython, and if you forget one little hole somewhere your jail is broken. If you want to sandbox Python you should start by sandboxing the entire process, and then C extensions can get sandboxed too. |
Is there a difference between `==` and `is` in Python? | 132,988 | 299 | 2008-09-25T12:27:09Z | 133,017 | 13 | 2008-09-25T12:31:12Z | [
"python",
"reference",
"semantics"
] | My [Google-fu](http://en.wiktionary.org/wiki/Google-fu) has failed me.
In Python, are the following two tests for equality equivalent (ha!)?
```
n = 5
# Test one.
if n == 5:
print 'Yay!'
# Test two.
if n is 5:
print 'Yay!'
```
Does this hold true for objects where you would be comparing instances (a `list` say)?
Okay, so this kind of answers my question:
```
L = []
L.append(1)
if L == [1]:
print 'Yay!'
# Holds true, but...
if L is [1]:
print 'Yay!'
# Doesn't.
```
So `==` tests value where `is` tests to see if they are the same object? | == determines if the values are equivalent, while "is" determines if they are the exact same object. |
Is there a difference between `==` and `is` in Python? | 132,988 | 299 | 2008-09-25T12:27:09Z | 133,024 | 439 | 2008-09-25T12:32:37Z | [
"python",
"reference",
"semantics"
] | My [Google-fu](http://en.wiktionary.org/wiki/Google-fu) has failed me.
In Python, are the following two tests for equality equivalent (ha!)?
```
n = 5
# Test one.
if n == 5:
print 'Yay!'
# Test two.
if n is 5:
print 'Yay!'
```
Does this hold true for objects where you would be comparing instances (a `list` say)?
Okay, so this kind of answers my question:
```
L = []
L.append(1)
if L == [1]:
print 'Yay!'
# Holds true, but...
if L is [1]:
print 'Yay!'
# Doesn't.
```
So `==` tests value where `is` tests to see if they are the same object? | `is` will return `True` if two variables point to the same object, `==` if the objects referred to by the variables are equal.
```
>>> a = [1, 2, 3]
>>> b = a
>>> b is a
True
>>> b == a
True
>>> b = a[:]
>>> b is a
False
>>> b == a
True
```
In your case, the second test only works because Python caches small integer objects, which is an implementation detail. For larger integers, this does not work:
```
>>> 1000 is 10**3
False
>>> 1000 == 10**3
True
```
The same holds true for string literals:
```
>>> "a" is "a"
True
>>> "aa" is "a" * 2
True
>>> x = "a"
>>> "aa" is x * 2
False
>>> "aa" is intern(x*2)
True
```
Please see [this question](http://stackoverflow.com/questions/26595/is-there-any-difference-between-foo-is-none-and-foo-none) as well. |
Is there a difference between `==` and `is` in Python? | 132,988 | 299 | 2008-09-25T12:27:09Z | 134,631 | 8 | 2008-09-25T17:15:38Z | [
"python",
"reference",
"semantics"
] | My [Google-fu](http://en.wiktionary.org/wiki/Google-fu) has failed me.
In Python, are the following two tests for equality equivalent (ha!)?
```
n = 5
# Test one.
if n == 5:
print 'Yay!'
# Test two.
if n is 5:
print 'Yay!'
```
Does this hold true for objects where you would be comparing instances (a `list` say)?
Okay, so this kind of answers my question:
```
L = []
L.append(1)
if L == [1]:
print 'Yay!'
# Holds true, but...
if L is [1]:
print 'Yay!'
# Doesn't.
```
So `==` tests value where `is` tests to see if they are the same object? | They are **completely different**. `is` checks for object identity, while `==` checks for equality (a notion that depends on the two operands' types).
It is only a lucky coincidence that "`is`" seems to work correctly with small integers (e.g. 5 == 4+1). That is because CPython optimizes the storage of integers in the range (-5 to 256) by making them singletons: <https://docs.python.org/2/c-api/int.html#c.PyInt_FromLong> |
Is there a difference between `==` and `is` in Python? | 132,988 | 299 | 2008-09-25T12:27:09Z | 134,659 | 21 | 2008-09-25T17:19:05Z | [
"python",
"reference",
"semantics"
] | My [Google-fu](http://en.wiktionary.org/wiki/Google-fu) has failed me.
In Python, are the following two tests for equality equivalent (ha!)?
```
n = 5
# Test one.
if n == 5:
print 'Yay!'
# Test two.
if n is 5:
print 'Yay!'
```
Does this hold true for objects where you would be comparing instances (a `list` say)?
Okay, so this kind of answers my question:
```
L = []
L.append(1)
if L == [1]:
print 'Yay!'
# Holds true, but...
if L is [1]:
print 'Yay!'
# Doesn't.
```
So `==` tests value where `is` tests to see if they are the same object? | Note that this is why `if foo is None:` is the preferred null comparison for Python. All null objects are really pointers to the same value, which Python sets aside to mean "None".
`if x is True:` and `if x is False:` also work in a similar manner. `False` and `True` are two special objects; all true boolean values are the one `True` object and all false boolean values are the one `False` object. |
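A minimal illustration of the point (`describe` is just an illustrative name): since there is exactly one `None` object, the identity test cleanly separates "no value" from merely falsy values.

```python
def describe(x):
    # `is None` distinguishes "no value" from falsy values such as 0 or "".
    if x is None:
        return "missing"
    return "value: %r" % (x,)
```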
Is there a difference between `==` and `is` in Python? | 132,988 | 299 | 2008-09-25T12:27:09Z | 1,085,656 | 131 | 2009-07-06T06:22:52Z | [
"python",
"reference",
"semantics"
] | My [Google-fu](http://en.wiktionary.org/wiki/Google-fu) has failed me.
In Python, are the following two tests for equality equivalent (ha!)?
```
n = 5
# Test one.
if n == 5:
print 'Yay!'
# Test two.
if n is 5:
print 'Yay!'
```
Does this hold true for objects where you would be comparing instances (a `list` say)?
Okay, so this kind of answers my question:
```
L = []
L.append(1)
if L == [1]:
print 'Yay!'
# Holds true, but...
if L is [1]:
print 'Yay!'
# Doesn't.
```
So `==` tests value where `is` tests to see if they are the same object? | There is a simple rule of thumb to tell you when to use `==` or `is`.
* `==` is for *value equality*. Use it when you would like to know if two objects have the same value.
* `is` is for *reference equality*. Use it when you would like to know if two references refer to the same object.
In general, when you are comparing something to a simple type, you are usually checking for *value equality*, so you should use `==`. For example, the intention of your example is probably to check whether x has a value equal to 2 (`==`), not whether `x` is literally referring to the same object as 2.
---
Something else to note: because of the way the CPython reference implementation works, you'll get unexpected and inconsistent results if you mistakenly use `is` to compare for reference equality on integers:
```
>>> a = 500
>>> b = 500
>>> a == b
True
>>> a is b
False
```
That's pretty much what we expected: `a` and `b` have the same value, but are distinct entities. But what about this?
```
>>> c = 200
>>> d = 200
>>> c == d
True
>>> c is d
True
```
This is inconsistent with the earlier result. What's going on here? It turns out the reference implementation of Python caches integer objects in the range -5..256 as singleton instances for performance reasons. Here's an example demonstrating this:
```
>>> for i in range(250, 260): a = i; print "%i: %s" % (i, a is int(str(i)));
...
250: True
251: True
252: True
253: True
254: True
255: True
256: True
257: False
258: False
259: False
```
This is another obvious reason not to use `is`: the behavior is left up to implementations when you're erroneously using it for value equality. |
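The rule of thumb above can be seen directly with a mutable object (plain illustrative code):

```python
a = [1, 2, 3]
b = a        # alias: a second name for the very same list object
c = list(a)  # shallow copy: equal value, distinct object

same_value = (a == b, a == c)    # value equality holds for both
same_object = (a is b, a is c)   # identity holds only for the alias

b.append(4)  # mutating through the alias is visible through `a`; `c` is unaffected
```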
Efficiently match multiple regexes in Python | 133,886 | 13 | 2008-09-25T15:10:05Z | 134,168 | 7 | 2008-09-25T15:54:53Z | [
"python",
"regex",
"lexical-analysis"
] | Lexical analyzers are quite easy to write when you have regexes. Today I wanted to write a simple general analyzer in Python, and came up with:
```
import re
import sys
class Token(object):
""" A simple Token structure.
Contains the token type, value and position.
"""
def __init__(self, type, val, pos):
self.type = type
self.val = val
self.pos = pos
def __str__(self):
return '%s(%s) at %s' % (self.type, self.val, self.pos)
class LexerError(Exception):
""" Lexer error exception.
pos:
Position in the input line where the error occurred.
"""
def __init__(self, pos):
self.pos = pos
class Lexer(object):
""" A simple regex-based lexer/tokenizer.
See below for an example of usage.
"""
def __init__(self, rules, skip_whitespace=True):
""" Create a lexer.
rules:
A list of rules. Each rule is a `regex, type`
pair, where `regex` is the regular expression used
to recognize the token and `type` is the type
of the token to return when it's recognized.
skip_whitespace:
If True, whitespace (\s+) will be skipped and not
reported by the lexer. Otherwise, you have to
specify your rules for whitespace, or it will be
flagged as an error.
"""
self.rules = []
for regex, type in rules:
self.rules.append((re.compile(regex), type))
self.skip_whitespace = skip_whitespace
self.re_ws_skip = re.compile('\S')
def input(self, buf):
""" Initialize the lexer with a buffer as input.
"""
self.buf = buf
self.pos = 0
def token(self):
""" Return the next token (a Token object) found in the
input buffer. None is returned if the end of the
buffer was reached.
In case of a lexing error (the current chunk of the
buffer matches no rule), a LexerError is raised with
the position of the error.
"""
if self.pos >= len(self.buf):
return None
else:
if self.skip_whitespace:
m = self.re_ws_skip.search(self.buf[self.pos:])
if m:
self.pos += m.start()
else:
return None
for token_regex, token_type in self.rules:
m = token_regex.match(self.buf[self.pos:])
if m:
value = self.buf[self.pos + m.start():self.pos + m.end()]
tok = Token(token_type, value, self.pos)
self.pos += m.end()
return tok
# if we're here, no rule matched
raise LexerError(self.pos)
def tokens(self):
""" Returns an iterator to the tokens found in the buffer.
"""
while 1:
tok = self.token()
if tok is None: break
yield tok
if __name__ == '__main__':
rules = [
('\d+', 'NUMBER'),
('[a-zA-Z_]\w*', 'IDENTIFIER'),
('\+', 'PLUS'),
('\-', 'MINUS'),
('\*', 'MULTIPLY'),
('\/', 'DIVIDE'),
('\(', 'LP'),
('\)', 'RP'),
('=', 'EQUALS'),
]
lx = Lexer(rules, skip_whitespace=True)
lx.input('erw = _abc + 12*(R4-623902) ')
try:
for tok in lx.tokens():
print tok
except LexerError, err:
print 'LexerError at position', err.pos
```
It works just fine, but I'm a bit worried that it's too inefficient. Are there any regex tricks that will allow me to write it in a more efficient/elegant way?
Specifically, is there a way to avoid looping over all the regex rules linearly to find one that fits? | You can merge all your regexes into one using the "|" operator and let the regex library do the work of discerning between tokens. Some care should be taken to ensure the preference of tokens (for example to avoid matching a keyword as an identifier). |
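A sketch of that approach using named groups (`make_tokenizer` is an illustrative name; `m.lastgroup` reports which alternative matched, and rule order in the list encodes precedence):

```python
import re

def make_tokenizer(rules):
    """Build a tokenizer from (pattern, type) rules by joining them
    into one master regex; each rule becomes a named group."""
    parts = ["(?P<%s>%s)" % (name, pattern) for pattern, name in rules]
    master = re.compile("|".join(parts))
    def tokenize(text):
        pos = 0
        while pos < len(text):
            m = master.match(text, pos)
            if m is None:
                raise ValueError("no rule matches at position %d" % pos)
            # lastgroup names the alternative that matched
            yield (m.lastgroup, m.group(), pos)
            pos = m.end()
    return tokenize
```

As the answer notes, order matters: list keyword rules before the identifier rule so keywords win.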
Efficiently match multiple regexes in Python | 133,886 | 13 | 2008-09-25T15:10:05Z | 4,136,323 | 11 | 2010-11-09T16:59:03Z | [
"python",
"regex",
"lexical-analysis"
] | Lexical analyzers are quite easy to write when you have regexes. Today I wanted to write a simple general analyzer in Python, and came up with:
```
import re
import sys
class Token(object):
""" A simple Token structure.
Contains the token type, value and position.
"""
def __init__(self, type, val, pos):
self.type = type
self.val = val
self.pos = pos
def __str__(self):
return '%s(%s) at %s' % (self.type, self.val, self.pos)
class LexerError(Exception):
""" Lexer error exception.
pos:
Position in the input line where the error occurred.
"""
def __init__(self, pos):
self.pos = pos
class Lexer(object):
""" A simple regex-based lexer/tokenizer.
See below for an example of usage.
"""
def __init__(self, rules, skip_whitespace=True):
""" Create a lexer.
rules:
A list of rules. Each rule is a `regex, type`
pair, where `regex` is the regular expression used
to recognize the token and `type` is the type
of the token to return when it's recognized.
skip_whitespace:
If True, whitespace (\s+) will be skipped and not
reported by the lexer. Otherwise, you have to
specify your rules for whitespace, or it will be
flagged as an error.
"""
self.rules = []
for regex, type in rules:
self.rules.append((re.compile(regex), type))
self.skip_whitespace = skip_whitespace
self.re_ws_skip = re.compile('\S')
def input(self, buf):
""" Initialize the lexer with a buffer as input.
"""
self.buf = buf
self.pos = 0
def token(self):
""" Return the next token (a Token object) found in the
input buffer. None is returned if the end of the
buffer was reached.
In case of a lexing error (the current chunk of the
buffer matches no rule), a LexerError is raised with
the position of the error.
"""
if self.pos >= len(self.buf):
return None
else:
if self.skip_whitespace:
m = self.re_ws_skip.search(self.buf[self.pos:])
if m:
self.pos += m.start()
else:
return None
for token_regex, token_type in self.rules:
m = token_regex.match(self.buf[self.pos:])
if m:
value = self.buf[self.pos + m.start():self.pos + m.end()]
tok = Token(token_type, value, self.pos)
self.pos += m.end()
return tok
# if we're here, no rule matched
raise LexerError(self.pos)
def tokens(self):
""" Returns an iterator to the tokens found in the buffer.
"""
while 1:
tok = self.token()
if tok is None: break
yield tok
if __name__ == '__main__':
rules = [
('\d+', 'NUMBER'),
('[a-zA-Z_]\w*', 'IDENTIFIER'),
('\+', 'PLUS'),
('\-', 'MINUS'),
('\*', 'MULTIPLY'),
('\/', 'DIVIDE'),
('\(', 'LP'),
('\)', 'RP'),
('=', 'EQUALS'),
]
lx = Lexer(rules, skip_whitespace=True)
lx.input('erw = _abc + 12*(R4-623902) ')
try:
for tok in lx.tokens():
print tok
except LexerError, err:
print 'LexerError at position', err.pos
```
It works just fine, but I'm a bit worried that it's too inefficient. Are there any regex tricks that will allow me to write it in a more efficient/elegant way?
Specifically, is there a way to avoid looping over all the regex rules linearly to find one that fits? | I suggest using the re.Scanner class, it's not documented in the standard library, but it's well worth using. Here's an example:
```
import re
scanner = re.Scanner([
(r"-?[0-9]+\.[0-9]+([eE]-?[0-9]+)?", lambda scanner, token: float(token)),
(r"-?[0-9]+", lambda scanner, token: int(token)),
(r" +", lambda scanner, token: None),
])
>>> scanner.scan("0 -1 4.5 7.8e3")[0]
[0, -1, 4.5, 7800.0]
``` |
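Two things worth noting about `re.Scanner`, sketched below: rule order matters (the float pattern must precede the int pattern, or "4.5" would be lexed as the int 4 followed by an unmatched ".5"), and an action of `None` silently drops the match.

```python
import re

scanner = re.Scanner([
    (r"-?\d+\.\d+", lambda s, tok: ("FLOAT", float(tok))),
    (r"-?\d+",      lambda s, tok: ("INT", int(tok))),
    (r"\s+",        None),  # None action: whitespace is skipped entirely
])

# scan() returns the token list plus any unmatched trailing text
tokens, remainder = scanner.scan("3 4.5 -2")
```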
Run Pylons controller as separate app? | 134,387 | 10 | 2008-09-25T16:34:14Z | 784,421 | 11 | 2009-04-24T03:42:01Z | [
"python",
"pylons"
] | I have a Pylons app where I would like to move some of the logic to a separate batch process. I've been running it under the main app for testing, but it is going to be doing a lot of work in the database, and I'd like it to be a separate process that will be running in the background constantly. The main pylons app will submit jobs into the database, and the new process will do the work requested in each job.
How can I launch a controller as a stand alone script?
I currently have:
```
from warehouse2.controllers import importServer
importServer.runServer(60)
```
and in the controller file, but not part of the controller class:
```
def runServer(sleep_secs):
try:
imp = ImportserverController()
while(True):
imp.runImport()
sleepFor(sleep_secs)
except Exception, e:
log.info("Unexpected error: %s" % sys.exc_info()[0])
log.info(e)
```
But starting ImportServer.py on the command line results in:
```
2008-09-25 12:31:12.687000 Could not locate a bind configured on mapper Mapper|ImportJob|n_imports, SQL expression or this Session
``` | If you want to load parts of a Pylons app, such as the models from outside Pylons, load the Pylons app in the script first:
```
from paste.deploy import appconfig
from pylons import config
from YOURPROJ.config.environment import load_environment
conf = appconfig('config:development.ini', relative_to='.')
load_environment(conf.global_conf, conf.local_conf)
```
That will load the Pylons app, which sets up most of the state so that you can proceed to use the SQLAlchemy models and Session to work with the database.
Note that if your code is using the pylons globals such as request/response/etc then that won't work since they require a request to be in progress to exist. |
Which is more preferable to use in Python: lambda functions or nested functions ('def')? | 134,626 | 58 | 2008-09-25T17:15:03Z | 134,638 | 65 | 2008-09-25T17:16:31Z | [
"python",
"syntax",
"function",
"lambda"
] | I mostly use lambda functions but sometimes use nested functions that seem to provide the same behavior.
Here are some trivial examples where they functionally do the same thing if either were found within another function:
**Lambda function**
```
>>> a = lambda x : 1 + x
>>> a(5)
6
```
**Nested function**
```
>>> def b(x): return 1 + x
>>> b(5)
6
```
Are there advantages to using one over the other? (Performance? Readability? Limitations? Consistency? etc.) Does it even matter? If it doesn't, then does that violate the Pythonic principle: ["There should be one-- and preferably only one --obvious way to do it"](http://en.wikipedia.org/wiki/Python_(programming_language)#Philosophy)? | If you need to assign the `lambda` to a name, use a `def` instead. `def`s are just syntactic sugar for an assignment, so the result is the same, and they are a lot more flexible and readable.
`lambda`s can be used for *use once, throw away* functions which won't have a name.
However, this use case is very rare. You rarely need to pass around unnamed function objects.
The builtins `map()` and `filter()` need function objects, but **list comprehensions** and **generator expressions** are generally more readable than those functions and can cover all use cases, without the need of lambdas.
For the cases you really need a small function object, you should use the `operator` module functions, like `operator.add` instead of `lambda x, y: x + y`
If you still need some `lambda` not covered, you might consider writing a `def`, just to be more readable. If the function is more complex than the ones at `operator` module, a `def` is probably better.
So, good real-world `lambda` use cases are very rare. |
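The `operator` suggestion in action, with illustrative data: `itemgetter(1)` replaces the throwaway `lambda p: p[1]` as a `sorted` key.

```python
from operator import itemgetter

pairs = [("b", 2), ("a", 3), ("c", 1)]

# Equivalent key functions: a throwaway lambda vs. operator.itemgetter.
by_lambda = sorted(pairs, key=lambda p: p[1])
by_getter = sorted(pairs, key=itemgetter(1))
```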
Which is more preferable to use in Python: lambda functions or nested functions ('def')? | 134,626 | 58 | 2008-09-25T17:15:03Z | 134,709 | 13 | 2008-09-25T17:29:26Z | [
"python",
"syntax",
"function",
"lambda"
] | I mostly use lambda functions but sometimes use nested functions that seem to provide the same behavior.
Here are some trivial examples where they functionally do the same thing if either were found within another function:
**Lambda function**
```
>>> a = lambda x : 1 + x
>>> a(5)
6
```
**Nested function**
```
>>> def b(x): return 1 + x
>>> b(5)
6
```
Are there advantages to using one over the other? (Performance? Readability? Limitations? Consistency? etc.) Does it even matter? If it doesn't, then does that violate the Pythonic principle: ["There should be one-- and preferably only one --obvious way to do it"](http://en.wikipedia.org/wiki/Python_(programming_language)#Philosophy)? | [In this interview,](http://www.amk.ca/python/writing/gvr-interview) Guido van Rossum says he wishes he hadn't let 'lambda' into Python:
> "**Q. What feature of Python are you least pleased with?**
>
> Sometimes I've been too quick in accepting contributions, and later realized that it was a mistake. One example would be some of the functional programming features, such as lambda functions. lambda is a keyword that lets you create a small anonymous function; built-in functions such as map, filter, and reduce run a function over a sequence type, such as a list.
>
> In practice, it didn't turn out that well. Python only has two scopes: local and global. This makes writing lambda functions painful, because you often want to access variables in the scope where the lambda was defined, but you can't because of the two scopes. There's a way around this, but it's something of a kludge. Often it seems much easier in Python to just use a for loop instead of messing around with lambda functions. map and friends work well only when there's already a built-in function that does what you want.
IMHO, lambdas can be convenient sometimes, but that convenience usually comes at the expense of readability. Can you tell me what this does:
```
str(reduce(lambda x,y:x+y,map(lambda x:x**x,range(1,1001))))[-10:]
```
I wrote it, and it took me a minute to figure it out. This is from Project Euler - I won't say which problem because I hate spoilers, but it runs in 0.124 seconds :) |
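For comparison, the same computation can be written without any lambdas using a generator expression; this sketch only asserts that the two forms agree, to avoid spoiling the puzzle:

```python
# Last ten digits of 1**1 + 2**2 + ... + 1000**1000, two ways.
one_liner = str(sum(map(lambda n: n ** n, range(1, 1001))))[-10:]
readable = str(sum(n ** n for n in range(1, 1001)))[-10:]
```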
Which is more preferable to use in Python: lambda functions or nested functions ('def')? | 134,626 | 58 | 2008-09-25T17:15:03Z | 138,625 | 22 | 2008-09-26T10:20:43Z | [
"python",
"syntax",
"function",
"lambda"
] | I mostly use lambda functions but sometimes use nested functions that seem to provide the same behavior.
Here are some trivial examples where they functionally do the same thing if either were found within another function:
**Lambda function**
```
>>> a = lambda x : 1 + x
>>> a(5)
6
```
**Nested function**
```
>>> def b(x): return 1 + x
>>> b(5)
6
```
Are there advantages to using one over the other? (Performance? Readability? Limitations? Consistency? etc.) Does it even matter? If it doesn't, then does that violate the Pythonic principle: ["There should be one-- and preferably only one --obvious way to do it"](http://en.wikipedia.org/wiki/Python_(programming_language)#Philosophy)? | Practically speaking, to me there are two differences:
The first is about what they do and what they return:
* def is a keyword that doesn't return anything and creates a 'name' in the local namespace.
* lambda is a keyword that returns a function object and does not create a 'name' in the local namespace.
Hence, if you need to call a function that takes a function object, the only way to do that in one line of python code is with a lambda. There's no equivalent with def.
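A minimal sketch of that naming difference (the function names here are invented for the example):

```
# def is a statement: it creates the name 'double' in the enclosing scope
def double(x):
    return 2 * x

# lambda is an expression: the function object itself is anonymous,
# and any name it gets comes from ordinary assignment
triple = lambda x: 3 * x

print(double.__name__)  # 'double'
print(triple.__name__)  # '<lambda>'
```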
In some frameworks this is actually quite common; for example, I use [Twisted](http://twistedmatrix.com/) a lot, and so doing something like
```
d.addCallback(lambda result: setattr(self, _someVariable, result))
```
is quite common, and more concise with lambdas.
The second difference is about what the actual function is allowed to do.
* A function defined with 'def' can contain any python code
* A function defined with 'lambda' has to evaluate to an expression, and can thus not contain statements like print, import, raise, ...
For example,
```
def p(x): print x
```
works as expected, while
```
lambda x: print x
```
is a SyntaxError.
Of course, there are workarounds - substitute `print` with `sys.stdout.write`, or `import` with `__import__`. But usually you're better off going with a function in that case. |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 134,942 | 46 | 2008-09-25T18:07:37Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | ```
x = [1, 10, 100]
for i in x:
print '%02d' % i
```
results:
```
01
10
100
```
Read [more information about string formatting using %](https://pyformat.info/) in the documentation. |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 134,951 | 405 | 2008-09-25T18:08:21Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | Here you are:
```
print "%02d" % (1,)
```
Basically **%** is like `printf` or `sprintf`. |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 135,157 | 172 | 2008-09-25T18:43:06Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | In Python 3, you would use the format() string method:
```
for i in (1, 10, 100):
print('{num:02d}'.format(num=i))
```
or using the built-in (for a single number):
```
print(format(i, '02d'))
```
See the [PEP-3101](http://www.python.org/dev/peps/pep-3101) documentation for the new formatting functions. |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 3,371,180 | 392 | 2010-07-30T11:58:30Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | You can use zfill:
```
print str(1).zfill(2)
print str(10).zfill(2)
print str(100).zfill(2)
```
prints:
```
01
10
100
``` |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 4,143,246 | 60 | 2010-11-10T10:03:43Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | Or this:
`print '{0:02d}'.format(1)` |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 10,358,441 | 24 | 2012-04-27T22:02:37Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | The pythonic way to do this:
```
str(number).rjust(string_width, fill_char)
```
This way, the original string is returned unchanged if its length is greater than string\_width. Example:
```
a = [1, 10, 100]
for num in a:
print str(num).rjust(2, '0')
```
Results:
```
01
10
100
``` |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 19,962,689 | 20 | 2013-11-13T19:30:23Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | ```
print '{:02}'.format(a)
print '{:02}'.format(b)
print '{:02}'.format(c)
```
prints:
```
01
10
100
``` |
Display number with leading zeros | 134,934 | 356 | 2008-09-25T18:06:06Z | 33,860,138 | 21 | 2015-11-22T21:01:27Z | [
"python",
"string-formatting"
] | Given:
```
a = 1
b = 10
c = 100
```
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
```
01
10
100
``` | Or another solution.
```
"{:0>2}".format(number)
``` |
Python Library Path | 135,035 | 30 | 2008-09-25T18:23:50Z | 135,050 | 7 | 2008-09-25T18:25:07Z | [
"python"
] | In ruby the library path is provided in $:, in perl it's in @INC - how do you get the list of paths that Python searches for modules when you do an import? | ```
import sys
sys.path
``` |
Python Library Path | 135,035 | 30 | 2008-09-25T18:23:50Z | 135,051 | 44 | 2008-09-25T18:25:18Z | [
"python"
] | In ruby the library path is provided in $:, in perl it's in @INC - how do you get the list of paths that Python searches for modules when you do an import? | I think you're looking for [sys.path](https://docs.python.org/3/library/sys.html#sys.path)
```
import sys
print (sys.path)
``` |
Python Library Path | 135,035 | 30 | 2008-09-25T18:23:50Z | 135,273 | 40 | 2008-09-25T19:02:46Z | [
"python"
] | In ruby the library path is provided in $:, in perl it's in @INC - how do you get the list of paths that Python searches for modules when you do an import? | You can also make additions to this path with the PYTHONPATH environment variable at runtime, in addition to:
```
import sys
sys.path.append('/home/user/python-libs')
``` |
Should you always favor xrange() over range()? | 135,041 | 375 | 2008-09-25T18:24:37Z | 135,070 | 11 | 2008-09-25T18:28:16Z | [
"python",
"range",
"xrange"
] | Why or why not? | `xrange()` is more efficient because instead of generating a list of objects, it just generates one object at a time. Instead of 100 integers, and all of their overhead, and the list to put them in, you just have one integer at a time. Faster generation, better memory use, more efficient code.
Unless I specifically need a list for something, I always favor `xrange()` |
Should you always favor xrange() over range()? | 135,041 | 375 | 2008-09-25T18:24:37Z | 135,074 | 34 | 2008-09-25T18:28:49Z | [
"python",
"range",
"xrange"
] | Why or why not? | You should favour `range()` over `xrange()` only when you need an actual list. For instance, when you want to modify the list returned by `range()`, or when you wish to slice it. For iteration or even just normal indexing, `xrange()` will work fine (and usually much more efficiently). There is a point where `range()` is a bit faster than `xrange()` for very small lists, but depending on your hardware and various other details, the break-even can be at a result of length 1 or 2; not something to worry about. Prefer `xrange()`. |
Should you always favor xrange() over range()? | 135,041 | 375 | 2008-09-25T18:24:37Z | 135,114 | 370 | 2008-09-25T18:34:38Z | [
"python",
"range",
"xrange"
] | Why or why not? | For performance, especially when you're iterating over a large range, `xrange()` is usually better. However, there are still a few cases why you might prefer `range()`:
* In python 3, `range()` does what `xrange()` used to do and `xrange()` does not exist. If you want to write code that will run on both Python 2 and Python 3, you can't use `xrange()`.
* `range()` can actually be faster in some cases - eg. if iterating over the same sequence multiple times. `xrange()` has to reconstruct the integer object every time, but `range()` will have real integer objects. (It will always perform worse in terms of memory however)
* `xrange()` isn't usable in all cases where a real list is needed. For instance, it doesn't support slices, or any list methods.
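To illustrate the first and third bullets, here is a Python 3 sketch (exact byte counts are CPython- and platform-specific):

```
import sys

# In Python 3, range() is the lazy object that xrange() was in Python 2:
lazy = range(1000000)
eager = list(range(1000000))

print(sys.getsizeof(lazy))    # tiny constant-size object
print(sys.getsizeof(eager))   # megabytes: one pointer per element

# Unlike Python 2's xrange, the Python 3 range object even supports slicing:
print(range(10)[2:5])  # range(2, 5)
```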
[Edit] There are a couple of posts mentioning how `range()` will be upgraded by the 2to3 tool. For the record, here's the output of running the tool on some sample usages of `range()` and `xrange()`
```
RefactoringTool: Skipping implicit fixer: buffer
RefactoringTool: Skipping implicit fixer: idioms
RefactoringTool: Skipping implicit fixer: ws_comma
--- range_test.py (original)
+++ range_test.py (refactored)
@@ -1,7 +1,7 @@
for x in range(20):
- a=range(20)
+ a=list(range(20))
b=list(range(20))
c=[x for x in range(20)]
d=(x for x in range(20))
- e=xrange(20)
+ e=range(20)
```
As you can see, when used in a for loop or comprehension, or where already wrapped with list(), range is left unchanged. |
Should you always favor xrange() over range()? | 135,041 | 375 | 2008-09-25T18:24:37Z | 135,228 | 7 | 2008-09-25T18:55:58Z | [
"python",
"range",
"xrange"
] | Why or why not? | range() returns a list, xrange() returns an xrange object.
xrange() is a bit faster, and a bit more memory efficient. But the gain is not very large.
The extra memory used by a list is of course not just wasted, lists have more functionality (slice, repeat, insert, ...). Exact differences can be found in the [documentation](http://docs.python.org/typesseq.html). There is no bonehard rule, use what is needed.
Python 3.0 is still in development, but IIRC range() will very similar to xrange() of 2.X and list(range()) can be used to generate lists. |
Should you always favor xrange() over range()? | 135,041 | 375 | 2008-09-25T18:24:37Z | 135,669 | 112 | 2008-09-25T20:04:30Z | [
"python",
"range",
"xrange"
] | Why or why not? | No, they both have their uses:
Use `xrange()` when iterating, as it saves memory. Say:
```
for x in xrange(1, one_zillion):
```
rather than:
```
for x in range(1, one_zillion):
```
On the other hand, use `range()` if you actually want a list of numbers.
```
multiples_of_seven = range(7,100,7)
print "Multiples of seven < 100: ", multiples_of_seven
``` |
Should you always favor xrange() over range()? | 135,041 | 375 | 2008-09-25T18:24:37Z | 11,795,908 | 23 | 2012-08-03T12:38:58Z | [
"python",
"range",
"xrange"
] | Why or why not? | One other difference is that xrange() can't support numbers bigger than C longs, so if you want a range using Python's built-in large number support, you have to use range().
```
Python 2.7.3 (default, Jul 13 2012, 22:29:01)
[GCC 4.7.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> range(123456787676676767676676,123456787676676767676679)
[123456787676676767676676L, 123456787676676767676677L, 123456787676676767676678L]
>>> xrange(123456787676676767676676,123456787676676767676679)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C long
```
Python 3 does not have this problem:
```
Python 3.2.3 (default, Jul 14 2012, 01:01:48)
[GCC 4.7.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> range(123456787676676767676676,123456787676676767676679)
range(123456787676676767676676, 123456787676676767676679)
``` |
How can I closely achieve ?: from C++/C# in Python? | 135,303 | 8 | 2008-09-25T19:06:03Z | 135,318 | 22 | 2008-09-25T19:08:31Z | [
"python",
"syntax",
"ternary-operator",
"syntax-rules"
] | In C# I could easily write the following:
```
string stringValue = string.IsNullOrEmpty( otherString ) ? defaultString : otherString;
```
Is there a quick way of doing the same thing in Python or am I stuck with an 'if' statement? | In Python 2.5, there is
```
A if C else B
```
which behaves a lot like ?: in C. However, it's frowned upon for two reasons: readability, and the fact that there's usually a simpler way to approach the problem. For instance, in your case:
```
stringValue = otherString or defaultString
``` |
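One caveat with the `or` idiom: it substitutes the default for *any* falsy value (empty string, 0, None, ...), whereas the conditional expression lets you test exactly the condition you mean. A small sketch:

```
other = ""            # empty, but not None
default = "fallback"

# `or` replaces ANY falsy value:
print(other or default)                     # 'fallback'

# The conditional expression only replaces None:
print(default if other is None else other)  # '' (the empty string is kept)
```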
How many bytes per element are there in a Python list (tuple)? | 135,664 | 13 | 2008-09-25T20:04:09Z | 135,718 | 23 | 2008-09-25T20:10:53Z | [
"python",
"memory-management"
] | For example, how much memory is required to store a list of one million (32-bit) integers?
```
alist = range(1000000) # or list(range(1000000)) in Python 3.0
``` | "It depends." Python allocates space for lists in such a way as to achieve [amortized constant time](http://effbot.org/zone/python-list.htm) for appending elements to the list.
In practice, what this means with the current implementation is... the list always has space allocated for a power-of-two number of elements. So range(1000000) will actually allocate a list big enough to hold 2^20 elements (~ 1.045 million).
This is only the space required to store the list structure itself (which is an array of pointers to the Python objects for each element). A 32-bit system will require 4 bytes per element, a 64-bit system will use 8 bytes per element.
Furthermore, you need space to store the actual elements. This varies widely. For small integers (-5 to 256 currently), no additional space is needed, but for larger numbers Python allocates a new object for each integer, which takes 10-100 bytes and tends to fragment memory.
Bottom line: **it's complicated** and Python lists are **not** a good way to store large homogeneous data structures. For that, use the `array` module or, if you need to do vectorized math, use NumPy.
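A rough way to see the difference, assuming 64-bit CPython (exact byte counts vary by platform and version -- this is a sketch, not a spec):

```
import sys
from array import array

n = 1000000
as_list = list(range(n))           # array of pointers, plus one int object each
as_array = array('i', range(n))    # packed C ints, no per-element objects

print(sys.getsizeof(as_list))   # container only; roughly 8 bytes per element
print(sys.getsizeof(as_array))  # roughly 4 bytes per element, total
```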
PS- Tuples, unlike lists, are *not designed* to have elements progressively appended to them. I don't know how the allocator works, but don't even think about using it for large data structures :-) |
How many bytes per element are there in a Python list (tuple)? | 135,664 | 13 | 2008-09-25T20:04:09Z | 136,083 | 13 | 2008-09-25T21:00:40Z | [
"python",
"memory-management"
] | For example, how much memory is required to store a list of one million (32-bit) integers?
```
alist = range(1000000) # or list(range(1000000)) in Python 3.0
``` | Useful links:
[How to get memory size/usage of python object](http://bytes.com/forum/thread757255.html)
[Memory sizes of python objects?](http://mail.python.org/pipermail/python-list/2002-March/135223.html)
[if you put data into dictionary, how do we calculate the data size?](http://groups.google.com/group/comp.lang.python/msg/b9afcfc2e1de5b05)
However they don't give a definitive answer. The way to go:
1. Measure memory consumed by Python interpreter with/without the list (use OS tools).
2. Use a third-party extension module which defines some sort of sizeof(PyObject).
**Update**:
[Recipe 546530: Size of Python objects (revised)](http://code.activestate.com/recipes/546530/)
```
import asizeof
N = 1000000
print asizeof.asizeof(range(N)) / N
# -> 20 (python 2.5, WinXP, 32-bit Linux)
# -> 33 (64-bit Linux)
``` |
Python: SWIG vs ctypes | 135,834 | 45 | 2008-09-25T20:29:27Z | 135,873 | 12 | 2008-09-25T20:35:17Z | [
"python",
"swig",
"ctypes",
"multilanguage",
"ffi"
] | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s).
What are the performance metrics of the two? | CTypes is very cool and much easier than SWIG, but it has the drawback that poorly or malevolently-written python code can actually crash the python process. You should also consider [boost](http://www.boost.org/doc/libs/release/libs/python/doc/) python. IMHO it's actually easier than swig while giving you more control over the final python interface. If you are using C++ anyway, you also don't add any other languages to your mix. |
Python: SWIG vs ctypes | 135,834 | 45 | 2008-09-25T20:29:27Z | 135,966 | 54 | 2008-09-25T20:47:28Z | [
"python",
"swig",
"ctypes",
"multilanguage",
"ffi"
] | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s).
What are the performance metrics of the two? | SWIG generates (rather ugly) C or C++ code. It is straightforward to use for simple functions (things that can be translated directly) and reasonably easy to use for more complex functions (such as functions with output parameters that need an extra translation step to represent in Python.) For more powerful interfacing you often need to write bits of C as part of the interface file. For anything but simple use you will need to know about CPython and how it represents objects -- not hard, but something to keep in mind.
ctypes allows you to directly access C functions, structures and other data, and load arbitrary shared libraries. You do not need to write any C for this, but you do need to understand how C works. It is, you could argue, the flip side of SWIG: it doesn't generate code and it doesn't require a compiler at runtime, but for anything but simple use it does require that you understand how things like C datatypes, casting, memory management and alignment work. You also need to manually or automatically translate C structs, unions and arrays into the equivalent ctypes datastructure, including the right memory layout.
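As a feel for the ctypes side, here is roughly the smallest possible example: calling `abs` from the C runtime. This is a POSIX-flavoured sketch; library lookup is platform-dependent:

```
import ctypes
import ctypes.util

# Locate and load the C runtime; find_library may return None on stripped-down
# systems, in which case CDLL(None) falls back to symbols already loaded
# into the process (POSIX behaviour).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declaring argtypes/restype is the manual "interface file" step:
# get it wrong and you risk memory corruption rather than a friendly TypeError.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-5))  # 5
```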
It is likely that in pure execution, SWIG is faster than ctypes -- because the management around the actual work is done in C at compiletime rather than in Python at runtime. However, unless you interface a lot of different C functions but each only a few times, it's unlikely the overhead will be really noticeable.
In development time, ctypes has a much lower startup cost: you don't have to learn about interface files, you don't have to generate .c files and compile them, you don't have to check out and silence warnings. You can just jump in and start using a single C function with minimal effort, then expand it to more. And you get to test and try things out directly in the Python interpreter. Wrapping lots of code is somewhat tedious, although there are attempts to make that simpler (like ctypes-configure.)
SWIG, on the other hand, can be used to generate wrappers for multiple languages (barring language-specific details that need filling in, like the custom C code I mentioned above.) When wrapping lots and lots of code that SWIG can handle with little help, the code generation can also be a lot simpler to set up than the ctypes equivalents. |
Python: SWIG vs ctypes | 135,834 | 45 | 2008-09-25T20:29:27Z | 136,019 | 8 | 2008-09-25T20:54:22Z | [
"python",
"swig",
"ctypes",
"multilanguage",
"ffi"
] | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s).
What are the performance metrics of the two? | You can also use [Pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/), which can act as glue between high-level Python code and low-level C code. [lxml](http://codespeak.net/lxml/) is written in Pyrex, for instance. |
Python: SWIG vs ctypes | 135,834 | 45 | 2008-09-25T20:29:27Z | 137,827 | 7 | 2008-09-26T04:49:05Z | [
"python",
"swig",
"ctypes",
"multilanguage",
"ffi"
] | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s).
What are the performance metrics of the two? | I'm going to be contrarian and suggest that, if you can, you should write your extension library using the [standard Python API](http://docs.python.org/api/). It's really well-integrated from both a C and Python perspective... if you have any experience with the Perl API, you will find it a *very* pleasant surprise.
Ctypes is nice too, but as others have said, it doesn't do C++.
How big is the library you're trying to wrap? How quickly does the codebase change? Any other maintenance issues? These will all probably affect the choice of the best way to write the Python bindings. |
Python: SWIG vs ctypes | 135,834 | 45 | 2008-09-25T20:29:27Z | 463,848 | 9 | 2009-01-21T01:36:28Z | [
"python",
"swig",
"ctypes",
"multilanguage",
"ffi"
] | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s).
What are the performance metrics of the two? | In my experience, ctypes does have a big disadvantage: when something goes wrong (and it invariably will for any complex interfaces), it's a hell to debug.
The problem is that a big part of your stack is obscured by ctypes/ffi magic and there is no easy way to determine how did you get to a particular point and why parameter values are what they are.. |
Python: SWIG vs ctypes | 135,834 | 45 | 2008-09-25T20:29:27Z | 4,892,651 | 69 | 2011-02-03T22:47:25Z | [
"python",
"swig",
"ctypes",
"multilanguage",
"ffi"
] | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s).
What are the performance metrics of the two? | I have rich experience using SWIG. SWIG claims that it is a rapid solution for wrapping things, but in real life...
---
# Cons:
SWIG is developed to be general, for everyone and for 20+ languages. Generally it leads to drawbacks:
- needs configuration (SWIG .i templates), which is sometimes tricky,
- lack of treatment of some special cases (see Python properties further),
- lack of performance for some languages.
*Python cons:*
1) **Code style inconsistency**. C++ and Python have very different code styles (that is obvious, certainly), and SWIG's ability to make the target code more Pythonic is very limited. For example, it is painful to create properties from getters and setters. See [this q&a](http://stackoverflow.com/questions/1183716/python-properties-swig)
2) **Lack of a broad community**. SWIG has some good documentation, but if one hits something that is not in the documentation, there is no information at all. Neither blogs nor googling helps, so one has to dig heavily through SWIG-generated code in such cases... That is terrible, I must say...
# Pros:
* In simple cases it is really rapid, easy and straightforward
* If you have produced the SWIG interface files once, you can wrap this C++ code for ANY of the other 20+ languages (!!!).
* One big concern about SWIG is performance. Since version 2.04 SWIG includes a '-builtin' flag which makes SWIG even faster than other automated ways of wrapping. At least [some benchmarks](http://stackoverflow.com/questions/456884/extending-python-to-swig-not-to-swig-or-cython) show this.
---
# When to USE SWIG?
So I concluded for myself two cases when SWIG is good to use:
1) If one needs to **rapidly** wrap **just several** functions from some C++ library for end use.
2) If one needs to wrap C++ code **for several languages**. Or if potentially there could be a time when one needs to distribute the code for several languages. Using SWIG is reliable in this case.
---
# Live experience
**Update** :
A year and a half has passed since we converted our library using SWIG.
First we made a Python version. There were several moments when we ran into trouble with SWIG - that is true. But by now we have expanded our library to Java and .NET, so we have 3 languages with 1 SWIG, and I can say that **SWIG rocks** in terms of saving a LOT of time.
**Update 2**:
We have now been using SWIG for this library for two years. SWIG is integrated into our build system. Recently we had a major API change in the C++ library, and SWIG worked perfectly. The only thing we needed to do was add several %rename directives to the .i files so that our `CppCamelStyleFunctions()` now `looks_more_pythonish` in Python. At first I was concerned about problems that could arise, but nothing went wrong. It was amazing: just several edits and everything was distributed in 3 languages. Now I am confident that using SWIG was a good solution in our case.
**Update 3**:
We have now been using SWIG for our library for 3+ years. **Major change**: the Python part was totally rewritten in pure Python. The reason is that Python is now used for the majority of applications of our library. Even though the pure Python version runs slower than the C++ wrapping, it is more convenient for users to work with pure Python rather than struggling with native libraries.
SWIG is still used for .NET and Java versions.
The main question here is: "Would we use SWIG for Python if we started the project from the beginning?" We would! SWIG allowed us to rapidly distribute our product in many languages. It worked for a period of time, which gave us the opportunity to better understand our users' requirements. |
Python web development - with or without a framework | 136,069 | 18 | 2008-09-25T20:59:49Z | 136,188 | 13 | 2008-09-25T21:14:33Z | [
"python",
"frameworks"
] | I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario? | The command-line Python, IMO, definitely comes first. Get that to work, since that's the core of what you're doing.
The issue is that using a web framework's ORM from a command line application isn't obvious. Django provides specific instructions for using their ORM from a command-line app. Those are annoying at first, but I think they're a life-saver in the long run. I use it heavily for giant uploads of customer-supplied files.
Don't use bare CGI. It's not impossible, but too many things can go wrong, and they've all been solved by the frameworks. Why reinvent something? Just use someone else's code.
Frameworks involve learning, but no real "overhead". They're not slow. They're code you don't have to write or debug.
1. Learn some Python.
2. Do the [Django](http://docs.djangoproject.com/en/dev/) tutorial.
3. Start to build a web app.
a. Start a Django project. Build a small application in that project.
b. Build your new model using the Django ORM. Create a Django unit test for the model. Be sure that it works. You'll be able to use the default admin pages and do a lot of playing around. Just don't build the *entire* web site yet.
4. Get your command-line app to work using Django ORM. Essentially, you have to finesse the settings file for this app to work nicely. See the [settings/configuration](http://docs.djangoproject.com/en/dev/topics/settings/#topics-settings) section.
5. Once you've got your command line and the default admin running, you can finish
the web app.
Here's the golden rule of frameworks: **It's code you don't have to write, debug or maintain.** Use them. |
Python web development - with or without a framework | 136,069 | 18 | 2008-09-25T20:59:49Z | 136,804 | 10 | 2008-09-25T23:17:24Z | [
"python",
"frameworks"
] | I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario? | You might consider using something like [web.py](http://webpy.org/) which would be easy to distribute (since it's small) and it would also be easy to adapt your other tools to it since it doesn't require you to submit to the framework so much like Django does.
Be forewarned, however, it's not the most loved framework in the Python community, but it might be just the thing for you. You might also check out [web2py](http://mdp.cti.depaul.edu/), but I know less about that. |
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 136,138 | 525 | 2008-09-25T21:05:53Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | A staticmethod is a method that knows nothing about the class or instance it was called on. It just gets the arguments that were passed, no implicit first argument. It is basically useless in Python -- you can just use a module function instead of a staticmethod.
A classmethod, on the other hand, is a method that gets passed the class it was called on, or the class of the instance it was called on, as first argument. This is useful when you want the method to be a factory for the class: since it gets the actual class it was called on as first argument, you can always instantiate the right class, even when subclasses are involved. Observe for instance how `dict.fromkeys()`, a classmethod, returns an instance of the subclass when called on a subclass:
```
>>> class DictSubclass(dict):
... def __repr__(self):
... return "DictSubclass"
...
>>> dict.fromkeys("abc")
{'a': None, 'c': None, 'b': None}
>>> DictSubclass.fromkeys("abc")
DictSubclass
>>>
``` |
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 136,149 | 54 | 2008-09-25T21:07:06Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | Basically `@classmethod` makes a method whose first argument is the class it's called from (rather than the class instance), `@staticmethod` does not have any implicit arguments. |
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 136,246 | 12 | 2008-09-25T21:24:13Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | `@staticmethod` just disables the default function-as-method descriptor. `classmethod` wraps your function in a container callable that passes a reference to the owning class as the first argument:
```
>>> class C(object):
... pass
...
>>> def f():
... pass
...
>>> staticmethod(f).__get__(None, C)
<function f at 0x5c1cf0>
>>> classmethod(f).__get__(None, C)
<bound method type.f of <class '__main__.C'>>
```
As a matter of fact, `classmethod` has a runtime overhead but makes it possible to access the owning class. Alternatively I recommend using a metaclass and putting the class methods on that metaclass:
```
>>> class CMeta(type):
... def foo(cls):
... print cls
...
>>> class C(object):
... __metaclass__ = CMeta
...
>>> C.foo()
<class '__main__.C'>
``` |
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 1,669,457 | 32 | 2009-11-03T19:02:23Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | [Here](http://rapd.wordpress.com/2008/07/02/python-staticmethod-vs-classmethod/) is a short article on this question
> A @staticmethod function is nothing more than a function defined inside a class. It is callable without instantiating the class first. Its definition is immutable via inheritance.
>
> A @classmethod function is also callable without instantiating the class, but its definition follows the subclass, not the parent class, via inheritance. That's because the first argument for a @classmethod function must always be cls (the class). |
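The inheritance behaviour described in the quoted article can be seen in a small Python 3 sketch (the `Parent`/`Child` names are illustrative, not from the article):

```python
class Parent(object):
    @classmethod
    def create(cls):
        # cls is whichever class the method was looked up on
        return cls()

    @staticmethod
    def label():
        # no implicit argument at all; behaviour is fixed by the definition
        return "defined on Parent"


class Child(Parent):
    pass


# The classmethod "follows" the subclass via inheritance...
print(type(Parent.create()).__name__)  # Parent
print(type(Child.create()).__name__)   # Child
# ...while the staticmethod behaves identically from either class.
print(Child.label())                   # defined on Parent
```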
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 1,669,524 | 1,256 | 2009-11-03T19:13:48Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | Maybe a bit of example code will help: Notice the difference in the call signatures of `foo`, `class_foo` and `static_foo`:
```
class A(object):
    def foo(self,x):
        print "executing foo(%s,%s)"%(self,x)

    @classmethod
    def class_foo(cls,x):
        print "executing class_foo(%s,%s)"%(cls,x)

    @staticmethod
    def static_foo(x):
        print "executing static_foo(%s)"%x

a = A()
```
Below is the usual way an object instance calls a method. The object instance, `a`, is implicitly passed as the first argument.
```
a.foo(1)
# executing foo(<__main__.A object at 0xb7dbef0c>,1)
```
---
**With classmethods**, the class of the object instance is implicitly passed as the first argument instead of `self`.
```
a.class_foo(1)
# executing class_foo(<class '__main__.A'>,1)
```
You can also call `class_foo` using the class. In fact, if you define something to be
a classmethod, it is probably because you intend to call it from the class rather than from a class instance. `A.foo(1)` would have raised a TypeError, but `A.class_foo(1)` works just fine:
```
A.class_foo(1)
# executing class_foo(<class '__main__.A'>,1)
```
One use people have found for class methods is to create [inheritable alternative constructors](http://stackoverflow.com/a/1950927/190597).
---
**With staticmethods**, neither `self` (the object instance) nor `cls` (the class) is implicitly passed as the first argument. They behave like plain functions except that you can call them from an instance or the class:
```
a.static_foo(1)
# executing static_foo(1)
A.static_foo('hi')
# executing static_foo(hi)
```
Staticmethods are used to group functions which have some logical connection with a class to the class.
---
`foo` is just a function, but when you call `a.foo` you don't just get the function,
you get a "partially applied" version of the function with the object instance `a` bound as the first argument to the function. `foo` expects 2 arguments, while `a.foo` only expects 1 argument.
`a` is bound to `foo`. That is what is meant by the term "bound" below:
```
print(a.foo)
# <bound method A.foo of <__main__.A object at 0xb7d52f0c>>
```
With `a.class_foo`, `a` is not bound to `class_foo`, rather the class `A` is bound to `class_foo`.
```
print(a.class_foo)
# <bound method type.class_foo of <class '__main__.A'>>
```
Here, with a staticmethod, even though it is a method, `a.static_foo` just returns
a good 'ole function with no arguments bound. `static_foo` expects 1 argument, and
`a.static_foo` expects 1 argument too.
```
print(a.static_foo)
# <function static_foo at 0xb7d479cc>
```
And of course the same thing happens when you call `static_foo` with the class `A` instead.
```
print(A.static_foo)
# <function static_foo at 0xb7d479cc>
``` |
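The "inheritable alternative constructors" linked above can be sketched in a few lines; `Date`, `EuroDate`, and `from_string` are illustrative names, not part of the original answer (Python 3 syntax):

```python
class Date(object):
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @classmethod
    def from_string(cls, text):
        # uses cls, not Date, so subclasses get instances of themselves
        year, month, day = (int(part) for part in text.split('-'))
        return cls(year, month, day)


class EuroDate(Date):
    def __str__(self):
        return '%02d/%02d/%04d' % (self.day, self.month, self.year)


d = Date.from_string('2008-09-25')
e = EuroDate.from_string('2008-09-25')
print(type(d).__name__, type(e).__name__)  # Date EuroDate
print(e)                                   # 25/09/2008
```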
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 1,669,579 | 40 | 2009-11-03T19:23:19Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | **Official python docs:**
[@classmethod](http://docs.python.org/library/functions.html#classmethod)
> A class method receives the class as
> implicit first argument, just like an
> instance method receives the instance.
> To declare a class method, use this
> idiom:
>
> ```
> class C:
>     @classmethod
>     def f(cls, arg1, arg2, ...): ...
> ```
>
> The `@classmethod` form is a function
> [*decorator*](http://docs.python.org/2/glossary.html#term-decorator) — see the description of
> function definitions in [*Function
> definitions*](http://docs.python.org/2/reference/compound_stmts.html#function) for details.
>
> It can be called either on the class
> (such as `C.f()`) or on an instance
> (such as `C().f()`). The instance is
> ignored except for its class. If a
> class method is called for a derived
> class, the derived class object is
> passed as the implied first argument.
>
> Class methods are different than C++
> or Java static methods. If you want
> those, see [`staticmethod()`](http://docs.python.org/2/library/functions.html#staticmethod) in this
> section.
[@staticmethod](http://docs.python.org/library/functions.html#staticmethod)
> A static method does not receive an
> implicit first argument. To declare a
> static method, use this idiom:
>
> ```
> class C:
>     @staticmethod
>     def f(arg1, arg2, ...): ...
> ```
>
> The `@staticmethod` form is a function
> [*decorator*](http://docs.python.org/2/glossary.html#term-decorator) — see the description of
> function definitions in [*Function
> definitions*](http://docs.python.org/2/reference/compound_stmts.html#function) for details.
>
> It can be called either on the class
> (such as `C.f()`) or on an instance
> (such as `C().f()`). The instance is
> ignored except for its class.
>
> Static methods in Python are similar
> to those found in Java or C++. For a
> more advanced concept, see
> [`classmethod()`](http://docs.python.org/2/library/functions.html#classmethod) in this section. |
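A minimal Python 3 sketch of the documented behaviour quoted above — either kind of method can be called on the class or on an instance (the instance is ignored except for its class), and a classmethod called on a derived class receives that class as the implied first argument:

```python
class C(object):
    @classmethod
    def f(cls):
        # receives the class as implicit first argument
        return cls.__name__

    @staticmethod
    def g():
        # receives no implicit first argument
        return 'no implicit args'


class D(C):
    pass


print(C.f(), C().f())  # C C   (instance ignored except for its class)
print(D.f())           # D     (derived class passed as implied argument)
print(C.g(), C().g())  # no implicit args no implicit args
```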
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 9,428,384 | 17 | 2012-02-24T09:32:32Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | I just wanted to add that the @decorators were added in python 2.4.
If you're using Python < 2.4 you can use the classmethod() and staticmethod() functions.
For example, if you want to create a factory method (a function returning an instance of a different implementation of a class depending on what argument it gets) you can do something like:
```
class Cluster(object):
    def _is_cluster_for(cls, name):
        """
        see if this class is the cluster with this name
        this is a classmethod
        """
        return cls.__name__ == name
    _is_cluster_for = classmethod(_is_cluster_for)

    #static method
    def getCluster(name):
        """
        static factory method, should be in Cluster class
        returns a cluster object of the given name
        """
        for cls in Cluster.__subclasses__():
            if cls._is_cluster_for(name):
                return cls()
    getCluster = staticmethod(getCluster)
```
Also observe that this is a good example of using both a classmethod and a static method:
the static method clearly belongs to the class, since it uses the class Cluster internally.
The classmethod only needs information about the class, and no instance of the object.
Another benefit of making the `_is_cluster_for` method a classmethod is that a subclass can decide to change its implementation, maybe because it is pretty generic and can handle more than one type of cluster, so just checking the name of the class would not be enough. |
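For comparison, the same pattern with the `@` decorator syntax available from Python 2.4 onward (written for Python 3; the `RedisCluster` subclass is a made-up example, not from the original answer):

```python
class Cluster(object):
    @classmethod
    def _is_cluster_for(cls, name):
        # see if this class is the cluster with this name
        return cls.__name__ == name

    @staticmethod
    def getCluster(name):
        # static factory method: returns a cluster object of the given name
        for cls in Cluster.__subclasses__():
            if cls._is_cluster_for(name):
                return cls()


class RedisCluster(Cluster):
    pass


print(type(Cluster.getCluster('RedisCluster')).__name__)  # RedisCluster
```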
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 28,117,800 | 15 | 2015-01-23T20:01:20Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | > **What is the difference between @staticmethod and @classmethod in Python?**
You may have seen Python code like this pseudocode, which demonstrates the signatures of the various method types and provides a docstring to explain each:
```
class Foo(object):
    def a_normal_method(self, arg_1, kwarg_2=None):
        '''
        Return a value that is a function of the instance with its
        attributes, and other arguments such as arg_1 and kwarg_2
        '''
    @staticmethod
    def a_static_method(arg_0):
        '''
        Return a value that is a function of arg_0. It does not know the
        instance or class it is called from.
        '''
    @classmethod
    def a_class_method(cls, arg1):
        '''
        Return a value that is a function of the class and other arguments.
        Respects subclassing; it is called with the class it is called from.
        '''
```
# The Normal Method
First I'll explain the `normal_method`. This may be better called an "**instance method**". When an instance method is used, it is used as a partial function (as opposed to a total function, defined for all values when viewed in source code); that is, when used, the first of the arguments is predefined as the instance of the object, with all of its given attributes. It has the instance of the object bound to it, and it must be called from an instance of the object. Typically, it will access various attributes of the instance.
For example, this is an instance of a string:
```
', '
```
if we use the instance method, `join` on this string, to join another iterable,
it quite obviously is a function of the instance, in addition to being a function of the iterable list, `['a', 'b', 'c']`:
```
>>> ', '.join(['a', 'b', 'c'])
'a, b, c'
```
# Static Method
The static method does *not* take the instance as an argument. Yes it is very similar to a module level function. However, a module level function must live in the module and be specially imported to other places where it is used. If it is attached to the object, however, it will follow the object conveniently through importing and inheritance as well.
An example is the `str.maketrans` static method, moved from the `string` module in Python 3. It makes a translation table suitable for consumption by `str.translate`. It does seem rather silly when used from an instance of a string, as demonstrated below, but importing the function from the `string` module is rather clumsy, and it's nice to be able to call it from the class, as in `str.maketrans`
```
# demonstrate same function whether called from instance or not:
>>> ', '.maketrans('ABC', 'abc')
{65: 97, 66: 98, 67: 99}
>>> str.maketrans('ABC', 'abc')
{65: 97, 66: 98, 67: 99}
```
In python 2, you have to import this function from the increasingly deprecated string module:
```
>>> import string
>>> 'ABCDEFG'.translate(string.maketrans('ABC', 'abc'))
'abcDEFG'
```
# Class Method
A class method is similar to an instance method in that it takes an implicit first argument, but instead of taking the instance, it takes the class. Frequently these are used as alternative constructors for better semantic usage, and they support inheritance.
The most canonical example of a builtin classmethod is `dict.fromkeys`. It is used as an alternative constructor of dict (well suited for when you know what your keys are and want a default value for them).
```
>>> dict.fromkeys(['a', 'b', 'c'])
{'c': None, 'b': None, 'a': None}
```
When we subclass dict, we can use the same constructor, which creates an instance of the subclass.
```
>>> class MyDict(dict): 'A dict subclass, use to demo classmethods'
>>> md = MyDict.fromkeys(['a', 'b', 'c'])
>>> md
{'a': None, 'c': None, 'b': None}
>>> type(md)
<class '__main__.MyDict'>
```
See the [pandas source code](https://github.com/pydata/pandas/blob/master/pandas/core/frame.py) for other similar examples of alternative constructors, and see also the official Python documentation on [`classmethod`](https://docs.python.org/library/functions.html#classmethod) and [`staticmethod`](https://docs.python.org/library/functions.html#staticmethod). |