title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Is there a fast way to generate a dict of the alphabet in Python? | 453,576 | 24 | 2009-01-17T16:51:34Z | 453,596 | 7 | 2009-01-17T17:01:01Z | [
"python",
"dictionary",
"alphabet"
] | I want to generate a dict with the letters of the alphabet as the keys, something like
```
letter_count = {'a': 0, 'b': 0, 'c': 0}
```
what would be a fast way of generating that dict, rather than me having to type it in?
Thanks for your help.
**EDIT**
Thanks everyone for your solutions :)
[nosklo's](http://stackoverflow.com/users/17160/nosklo)
[solution](http://stackoverflow.com/questions/453576/simple-python-question-is-there-a-fast-way-to-generate-a-dict-of-the-alphabet#453785) is probably the shortest
Also, thanks for reminding me about the Python [string](http://docs.python.org/library/string.html) module. | Here's a compact version, using a [generator expression](http://docs.python.org/reference/expressions.html#generator-expressions):
```
>>> import string
>>> letter_count = dict( (key, 0) for key in string.ascii_lowercase )
>>> letter_count
{'a': 0, 'c': 0, 'b': 0, 'e': 0, 'd': 0, 'g': 0, 'f': 0, 'i': 0, 'h': 0, 'k': 0,
'j': 0, 'm': 0, 'l': 0, 'o': 0, 'n': 0, 'q': 0, 'p': 0, 's': 0, 'r': 0, 'u': 0,
't': 0, 'w': 0, 'v': 0, 'y': 0, 'x': 0, 'z': 0}
``` |
Is there a fast way to generate a dict of the alphabet in Python? | 453,576 | 24 | 2009-01-17T16:51:34Z | 453,603 | 10 | 2009-01-17T17:03:27Z | [
"python",
"dictionary",
"alphabet"
] | I want to generate a dict with the letters of the alphabet as the keys, something like
```
letter_count = {'a': 0, 'b': 0, 'c': 0}
```
what would be a fast way of generating that dict, rather than me having to type it in?
Thanks for your help.
**EDIT**
Thanks everyone for your solutions :)
[nosklo's](http://stackoverflow.com/users/17160/nosklo)
[solution](http://stackoverflow.com/questions/453576/simple-python-question-is-there-a-fast-way-to-generate-a-dict-of-the-alphabet#453785) is probably the shortest
Also, thanks for reminding me about the Python [string](http://docs.python.org/library/string.html) module. | ```
import string
letter_count = dict(zip(string.ascii_lowercase, [0]*26))
```
or maybe:
```
import string
import itertools
letter_count = dict(zip(string.ascii_lowercase, itertools.repeat(0)))
```
or even:
```
import string
letter_count = dict.fromkeys(string.ascii_lowercase, 0)
```
The preferred solution might be a different one, depending on the actual values you want in the dict.
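As a further option (not in the original answers): on Python 2.7 and later, a dict comprehension reads naturally too and builds the same mapping as the `dict.fromkeys` version above:

```python
import string

# Dict comprehension (Python 2.7+/3.x); same result as dict.fromkeys(..., 0)
letter_count = {letter: 0 for letter in string.ascii_lowercase}
print(len(letter_count))  # 26
```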
---
I'll take a guess here: do you want to count occurrences of letters in a text (or something similar)? There is a better way to do this than starting with an initialized dictionary.
Use `Counter` from the `collections` module:
```
>>> import collections
>>> the_text = 'the quick brown fox jumps over the lazy dog'
>>> letter_counts = collections.Counter(the_text)
>>> letter_counts
Counter({' ': 8, 'o': 4, 'e': 3, ... 'n': 1, 'x': 1, 'k': 1, 'b': 1})
``` |
Is there a fast way to generate a dict of the alphabet in Python? | 453,576 | 24 | 2009-01-17T16:51:34Z | 453,766 | 8 | 2009-01-17T18:19:27Z | [
"python",
"dictionary",
"alphabet"
] | I want to generate a dict with the letters of the alphabet as the keys, something like
```
letter_count = {'a': 0, 'b': 0, 'c': 0}
```
what would be a fast way of generating that dict, rather than me having to type it in?
Thanks for your help.
**EDIT**
Thanks everyone for your solutions :)
[nosklo's](http://stackoverflow.com/users/17160/nosklo)
[solution](http://stackoverflow.com/questions/453576/simple-python-question-is-there-a-fast-way-to-generate-a-dict-of-the-alphabet#453785) is probably the shortest
Also, thanks for reminding me about the Python [string](http://docs.python.org/library/string.html) module. | If you plan to use it for counting, I suggest the following:
```
import collections
d = collections.defaultdict(int)
``` |
Is there a fast way to generate a dict of the alphabet in Python? | 453,576 | 24 | 2009-01-17T16:51:34Z | 453,785 | 43 | 2009-01-17T18:36:15Z | [
"python",
"dictionary",
"alphabet"
] | I want to generate a dict with the letters of the alphabet as the keys, something like
```
letter_count = {'a': 0, 'b': 0, 'c': 0}
```
what would be a fast way of generating that dict, rather than me having to type it in?
Thanks for your help.
**EDIT**
Thanks everyone for your solutions :)
[nosklo's](http://stackoverflow.com/users/17160/nosklo)
[solution](http://stackoverflow.com/questions/453576/simple-python-question-is-there-a-fast-way-to-generate-a-dict-of-the-alphabet#453785) is probably the shortest
Also, thanks for reminding me about the Python [string](http://docs.python.org/library/string.html) module. | I find this solution more elegant:
```
import string
d = dict.fromkeys(string.ascii_lowercase, 0)
``` |
Does Django development provide a truly flexible 3 layer architecture? | 454,443 | 3 | 2009-01-18T01:20:33Z | 454,539 | 9 | 2009-01-18T02:41:38Z | [
"python",
"django",
"model-view-controller",
"orm"
] | A few weeks ago I asked the question *"Is a PHP, Python, PostgreSQL design suitable for a non-web business application?"* <http://stackoverflow.com/questions/439759/is-a-php-python-postgresql-design-suitable-for-a-business-application>
A lot of the answers recommended skipping the **PHP** piece and using **Django** to build the application. As I've explored Django, I've started to question one specific aspect of my goals and how Django comes into play for **a non-web business application**.
Based on my understanding, Django would manage both the view and controller pieces and **PostgreSQL** or **MySQL** would handle the data. But my goal was to clearly separate the layers so that the database, domain logic, and presentation could each be changed without significantly affecting the others. It seems like I'm only separating the M from the VC layers with the Django solution.
So, is it counterproductive for me to build the domain layer in **Python** with an **SQL Alchemy/Elixir ORM tool**, **PostgreSQL** for the database layer, and then still use **Django** or **PHP** for the presentation layer? Is this possible or pure insanity?
Basically, I'd be looking at an architecture of **Django/PHP > Python/SQLAlchemy > PostgreSQL/MySQL**.
*Edit: Before the fanboys get mad at me for asking a question about Django, just realize: It's a question, not an accusation. If I knew the answer or had my own opinion, I wouldn't have asked!* | You seem to be saying that choosing Django would prevent you from using a more heterogenous solution later. This isn't the case. Django provides a number of interesting connections between the layers, and using Django for all the layers lets you take advantage of those connections. For example, using the Django ORM means that you get the great Django admin app almost for free.
You can choose to use a different ORM within Django, you just won't get the admin app (or generic views, for example) along with it. So a different ORM takes you a step backward from full Django top-to-bottom, but it isn't a step backward from other heterogenous solutions, because those solutions didn't give you intra-layer goodness like the admin app in the first place.
Django shouldn't be criticized for not providing a flexible architecture: it's as flexible as any other solution, you just forgo some of the Django benefits if you choose to swap out a layer.
If you choose to start with Django, you can use the Django ORM now, and then later, if you need to switch, you can change over to SQLalchemy. That will be no more difficult than starting with SQLalchemy now and later moving to some other ORM solution.
You haven't said why you anticipate needing to swap out layers. It will be a painful process no matter what, because there is necessarily much code that relies on the behavior of whichever toolset and library you're currently using. |
Does Django development provide a truly flexible 3 layer architecture? | 454,443 | 3 | 2009-01-18T01:20:33Z | 454,608 | 7 | 2009-01-18T03:48:18Z | [
"python",
"django",
"model-view-controller",
"orm"
] | A few weeks ago I asked the question *"Is a PHP, Python, PostgreSQL design suitable for a non-web business application?"* <http://stackoverflow.com/questions/439759/is-a-php-python-postgresql-design-suitable-for-a-business-application>
A lot of the answers recommended skipping the **PHP** piece and using **Django** to build the application. As I've explored Django, I've started to question one specific aspect of my goals and how Django comes into play for **a non-web business application**.
Based on my understanding, Django would manage both the view and controller pieces and **PostgreSQL** or **MySQL** would handle the data. But my goal was to clearly separate the layers so that the database, domain logic, and presentation could each be changed without significantly affecting the others. It seems like I'm only separating the M from the VC layers with the Django solution.
So, is it counterproductive for me to build the domain layer in **Python** with an **SQL Alchemy/Elixir ORM tool**, **PostgreSQL** for the database layer, and then still use **Django** or **PHP** for the presentation layer? Is this possible or pure insanity?
Basically, I'd be looking at an architecture of **Django/PHP > Python/SQLAlchemy > PostgreSQL/MySQL**.
*Edit: Before the fanboys get mad at me for asking a question about Django, just realize: It's a question, not an accusation. If I knew the answer or had my own opinion, I wouldn't have asked!* | Django will happily let you use whatever libraries you want for whatever you want to use them for -- you want a different ORM, use it, you want a different template engine, use it, and so on -- but is designed to provide a common default stack used by many interoperable applications. In other words, if you swap out an ORM or a template system, you'll lose compatibility with a lot of applications, but the ability to take advantage of a large base of applications typically outweighs this.
In broader terms, however, I'd advise you to spend a bit more time reading up on architectural patterns for web applications, since you seem to have some major conceptual confusion going on. One might just as easily say that, for example, Rails doesn't have a "view" layer since you could use different file systems as the storage location for the view code (in other words: being able to change where and how the data is stored by your model layer doesn't mean you don't *have* a model layer).
(and it goes without saying that it's also important to know why "strict" or "pure" MVC is an absolutely *horrid* fit for web applications; MVC in its pure form is useful for applications with many independent ways to initiate interaction, like a word processor with lots of toolbars and input panes, but its benefits quickly start to disappear when you move to the web and have only one way -- an HTTP request -- to interact with the application. This is why there are no "true" MVC web frameworks; they all borrow certain ideas about separation of concerns, but none of them implement the pattern strictly) |
How do I re.search or re.match on a whole file without reading it all into memory? | 454,456 | 19 | 2009-01-18T01:27:16Z | 454,589 | 39 | 2009-01-18T03:24:46Z | [
"python",
"regex",
"performance",
"file"
] | I want to be able to run a regular expression on an entire file, but I'd like to be able to not have to read the whole file into memory at once as I may be working with rather large files in the future. Is there a way to do this? Thanks!
**Clarification:** I cannot read line-by-line because it can span multiple lines. | You can use mmap to map the file to memory. The file contents can then be accessed like a normal string:
```
import re, mmap
with open('/var/log/error.log', 'r+') as f:
    data = mmap.mmap(f.fileno(), 0)
    mo = re.search('error: (.*)', data)
    if mo:
        print "found error", mo.group(1)
```
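A side note not in the original answer: on Python 3, an `mmap` exposes bytes rather than text, so the pattern must be a bytes pattern (a sketch with an invented log line):

```python
import re, mmap, tempfile

# Python 3 sketch: an mmap behaves like bytes, so use a bytes pattern
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"warning: x\nerror: disk full\n")
    path = f.name

with open(path, "r+b") as fh:
    data = mmap.mmap(fh.fileno(), 0)
    mo = re.search(b"error: (.*)", data)
    if mo:
        print(mo.group(1))  # b'disk full'
```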
This also works for big files, the file content is internally loaded from disk as needed. |
Find an Image within an Image | 454,498 | 4 | 2009-01-18T02:07:16Z | 454,536 | 7 | 2009-01-18T02:40:21Z | [
"java",
"python",
"image"
] | I am looking for the best way to detect an image within another image. I have a small image and would like to find the location that it appears within a larger image - which will actually be screen captures. Conceptually, it is like a 'Where's Waldo?' sort of search in the larger image.
Are there any efficient/quick ways to accomplish this? Speed is more important than memory.
Edit:
The 'inner' image may not always have the same scale but will have the same rotation.
It is not safe to assume that the image will be perfectly contained within the other, pixel for pixel. | Wikipedia has an article on [Template Matching](http://en.wikipedia.org/wiki/Template_matching), with sample code.
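As a rough illustration of the brute-force idea behind template matching (pure Python, toy grayscale data invented for the example; real code would use the techniques from the linked articles):

```python
# Brute-force template matching via sum of squared differences (SSD):
# slide the small image over the big one and keep the best-scoring offset.
def match(big, small):
    H, W = len(big), len(big[0])
    h, w = len(small), len(small[0])
    best, best_pos = None, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = sum((big[y + dy][x + dx] - small[dy][dx]) ** 2
                      for dy in range(h) for dx in range(w))
            if best is None or ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos

big = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
small = [[9, 8],
         [7, 6]]
print(match(big, small))  # (1, 1)
```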
(While that page doesn't handle changed scales, it has links to other styles of matching, for example [Scale invariant feature transform](http://en.wikipedia.org/wiki/Scale-invariant_feature_transform)) |
How can I make a list in Python like (0,6,12, .. 144)? | 454,566 | 3 | 2009-01-18T03:04:43Z | 454,578 | 20 | 2009-01-18T03:11:14Z | [
"python",
"list"
] | I am not sure, whether I should use for -loop. Perhaps, like
```
for i in range(145):
    by 6: //mistake here?
    print i
``` | ```
for i in range(0,150,6):
    print i
```
if you are stepping by a constant |
Pythonic macro syntax | 454,648 | 18 | 2009-01-18T04:22:27Z | 483,447 | 10 | 2009-01-27T13:48:47Z | [
"python",
"syntax",
"macros"
] | I've been working on an alternative compiler front-end for Python where all syntax is parsed via macros. I'm finally to the point with its development that I can start work on a superset of the Python language where macros are an integral component.
My problem is that I can't come up with a pythonic macro definition syntax. I've posted several examples in two different syntaxes in answers below. Can anyone come up with a better syntax? It doesn't have to build off the syntax I've proposed in any way -- I'm completely open here. Any comments, suggestions, etc would be helpful, as would alternative syntaxes showing the examples I've posted.
A note about the macro structure, as seen in the examples I've posted: The use of MultiLine/MLMacro and Partial/PartialMacro tell the parser how the macro is applied. If it's multiline, the macro will match multiple line lists; generally used for constructs. If it's partial, the macro will match code in the middle of a list; generally used for operators. | After thinking about it a while a few days ago, and coming up with nothing worth posting, I came back to it now and came up with some syntax I rather like, because it nearly looks like python:
```
macro PrintMacro:
    syntax:
        "print", OneOrMore(Var(), name='vars')
    return Printnl(vars, None)
```
* Make all the macro "keywords" look like creating python objects (`Var()` instead of simple `Var`)
* Pass the name of elements as a "keyword parameter" to items we want a name for.
It should still be easy to find all the names in the parser, since this syntax definition needs to be interpreted in some way anyway to fill the `syntax` variable of the resulting macro class.
The internal syntax representation could also look the same:
```
class PrintMacro(Macro):
    syntax = 'print', OneOrMore(Var(), name='vars')
    ...
```
The internal syntax classes like `OneOrMore` would follow this pattern to allow subitems and an optional name:
```
class MacroSyntaxElement(object):
    def __init__(self, *p, name=None):
        self.subelements = p
        self.name = name
```
When the macro matches, you just collect all items that have a name and pass them as keyword parameters to the handler function:
```
class Macro():
    ...
    def parse(self, ...):
        syntaxtree = []
        nameditems = {}
        # parse, however this is done
        # store all elements that have a name as
        # nameditems[name] = parsed_element
        self.handle(syntaxtree, **nameditems)
```
The handler function would then be defined like this:
```
class PrintMacro(Macro):
    ...
    def handle(self, syntaxtree, vars):
        return Printnl(vars, None)
```
I added the syntaxtree as a first parameter that is always passed, so you wouldn't need to have any named items if you just want to do very basic stuff on the syntax tree.
Also, if you don't like the decorators, why not add the macro type like a "base class"? `IfMacro` would then look like this:
```
macro IfMacro(MultiLine):
    syntax:
        Group("if", Var(), ":", Var(), name='if_')
        ZeroOrMore("elif", Var(), ":", Var(), name='elifs')
        Optional("else", Var(name='elseBody'))
    return If(
        [(cond, Stmt(body)) for keyword, cond, colon, body in [if_] + elifs],
        None if elseBody is None else Stmt(elseBody)
    )
```
And in the internal representation:
```
class IfMacro(MultiLineMacro):
    syntax = (
        Group("if", Var(), ":", Var(), name='if_'),
        ZeroOrMore("elif", Var(), ":", Var(), name='elifs'),
        Optional("else", Var(name='elseBody'))
    )

    def handle(self, syntaxtree, if_=None, elifs=None, elseBody=None):
        # Default parameters in case there is no such named item.
        # In this case this can only happen for 'elseBody'.
        return If(
            [(cond, Stmt(body)) for keyword, cond, colon, body in [if_] + elifs],
            None if elseBody is None else Stmt(elseBody)
        )
```
I think this would give a quite flexible system. Main advantages:
* Easy to learn (looks like standard python)
* Easy to parse (parses like standard python)
* Optional items can be easily handled, since you can have a default parameter `None` in the handler
* Flexible use of named items:
+ You don't need to name any items if you don't want, because the syntax tree is always passed in.
+ You can name any subexpressions in a big macro definition, so it's easy to pick out specific stuff you're interested in
* Easily extensible if you want to add more features to the macro constructs. For example `Several("abc", min=3, max=5, name="a")`. I think this could also be used to add default values to optional elements like `Optional("step", Var(), name="step", default=1)`.
I'm not sure about the quote/unquote syntax with "quote:" and "$", but some syntax for this is needed, since it makes life much easier if you don't have to manually write syntax trees. Probably it's a good idea to require (or just permit?) parentheses for "$", so that you can insert more complicated syntax parts, if you want. Like `$(Stmt(a, b, c))`.
The ToMacro would look something like this:
```
# macro definition
macro ToMacro(Partial):
    syntax:
        Var(name='start'), "to", Var(name='end'), Optional("inclusive", name='inclusive'), Optional("step", Var(name='step'))
    if step == None:
        step = quote(1)
    if inclusive:
        return quote:
            xrange($(start), $(end)+1, $(step))
    else:
        return quote:
            xrange($(start), $(end), $(step))

# resulting macro class
class ToMacro(PartialMacro):
    syntax = Var(name='start'), "to", Var(name='end'), Optional("inclusive", name='inclusive'), Optional("step", Var(name='step'))

    def handle(self, syntaxtree, start=None, end=None, inclusive=None, step=None):
        if step is None:
            step = Number(1)
        if inclusive:
            return ['xrange', ['(', start, [end, '+', Number(1)], step, ')']]
        return ['xrange', ['(', start, end, step, ')']]
``` |
Python get proper line ending | 454,725 | 38 | 2009-01-18T06:01:14Z | 454,731 | 13 | 2009-01-18T06:07:20Z | [
"python",
"line-endings"
] | Is there an easy way to get the type of line ending that the current operating system uses? | Oh, I figured it out. Apparently, [PEP-278](http://www.python.org/dev/peps/pep-0278/) states the following:
> Any line ending in the input file will be seen as a '\n' in Python, so little other code has to change to handle universal newlines. |
Python get proper line ending | 454,725 | 38 | 2009-01-18T06:01:14Z | 454,809 | 61 | 2009-01-18T08:21:16Z | [
"python",
"line-endings"
] | Is there an easy way to get the type of line ending that the current operating system uses? | If you are operating on a file that you opened in text mode, then you are correct that line breaks all show up as '`\n`'. Otherwise, you are looking for [`os.linesep`](http://docs.python.org/library/os.html) .
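A quick way to see it (a trivial sketch):

```python
import os

# os.linesep reflects the platform convention; repr() makes the escapes visible
print(repr(os.linesep))  # '\n' on POSIX, '\r\n' on Windows
```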
From <http://docs.python.org/library/os.html>:
> os.**linesep**
>
> The string used to separate (or, rather, terminate) lines on the
> current platform. This may be a single
> character, such as '\n' for POSIX, or
> multiple characters, for example,
> '\r\n' for Windows. Do not use
> os.linesep as a line terminator when
> writing files opened in text mode (the
> default); use a single '\n' instead,
> on all platforms. |
No module named MySQLdb | 454,854 | 211 | 2009-01-18T09:13:38Z | 454,871 | 39 | 2009-01-18T09:25:58Z | [
"python",
"django"
] | I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. Can anyone help me to solve this issue? I am using it in Vista. | `mysqldb` is a module for Python that doesn't come pre-installed or with Django. You can download `mysqldb` [here](http://sourceforge.net/projects/mysql-python). |
No module named MySQLdb | 454,854 | 211 | 2009-01-18T09:13:38Z | 5,873,259 | 395 | 2011-05-03T17:23:47Z | [
"python",
"django"
] | I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. Can anyone help me to solve this issue? I am using it in Vista. | You need to use one of the following commands. Which one depends on what OS and software you have and use.
1. **easy\_install mysql-python** (mix os)
2. **pip install mysql-python** (mix os)
3. **apt-get install python-mysqldb** (Linux Ubuntu, ...)
4. **cd /usr/ports/databases/py-MySQLdb && make install clean** (FreeBSD)
5. **yum install MySQL-python** (Linux Fedora, CentOS ...)
For Windows, see this answer: [Install mysql-python (Windows)](http://stackoverflow.com/q/21440230/4646678) |
No module named MySQLdb | 454,854 | 211 | 2009-01-18T09:13:38Z | 5,999,414 | 24 | 2011-05-14T02:17:49Z | [
"python",
"django"
] | I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. Can anyone help me to solve this issue? I am using it in Vista. | Ubuntu:
```
sudo apt-get install python-mysqldb
``` |
No module named MySQLdb | 454,854 | 211 | 2009-01-18T09:13:38Z | 25,475,877 | 40 | 2014-08-24T20:11:47Z | [
"python",
"django"
] | I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. Can anyone help me to solve this issue? I am using it in Vista. | ...and remember there is **no MySQLdb for python3.x**
(I know the question is about python2.x but google rates this post quite high)
---
**EDIT:** As stated in the comments, there's a MySQLdb's fork that adds Python 3 support: [github.com/PyMySQL/mysqlclient-python](http://github.com/PyMySQL/mysqlclient-python) |
Advice on Python/Django and message queues | 454,944 | 40 | 2009-01-18T10:42:53Z | 455,024 | 13 | 2009-01-18T11:54:03Z | [
"python",
"django",
"message-queue"
] | I have an application in Django, that needs to send a large number of emails to users in various use cases. I don't want to handle this synchronously within the application for obvious reasons.
Has anyone any recommendations for a message queuing server which integrates well with Python, or they have used on a Django project? The rest of my stack is Apache, mod\_python, MySQL. | So far I have found no "nice" solution for this. I have some more strict soft realtime requirements (taking a picture from a cardboard box being labeled) so probably one of the approaches is fast enough for you. I assume emails can wait for a few minutes.
* A "todo list" in the database processed by a cron job.
* A "todo list" in the database processed permanently beeing polled by a daemon.
* Using a custom daemon which gets notified by the webserver via a UDP packet (in production today). Basically my own queuing system using the IP stack for handling the queue.
* [Using ActiveMQ as a message broker](http://blogs.23.nu/c0re/2007/08/antville-15655/) - this didn't work out because of stability issues. Also to me Java Daemons are generally somewhat plump
* Using Update Triggers in CouchDB. Nice but Update Triggers are not meant to do heavy image processing, so no good fit for my problem.
So far I haven't tried RabbitMQ and XMPP/ejabberd for handling the problem but they are on my list of next things to try. RabbitMQ got decent Python connectivity during 2008 and there are tons of XMPP libraries.
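The "todo list in the database" bullets above can be sketched roughly like this (illustrative only; the table layout and names are invented):

```python
import sqlite3

# A cron job or polling daemon claims unsent rows, hands them off, marks them done
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, addr TEXT, sent INTEGER DEFAULT 0)")
conn.execute("INSERT INTO outbox (addr) VALUES ('user@example.com')")

def process_outbox(send):
    # Fetch the pending batch first, then deliver and mark each row as sent
    rows = conn.execute("SELECT id, addr FROM outbox WHERE sent = 0").fetchall()
    for row_id, addr in rows:
        send(addr)  # hand off to the real mail sender here
        conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))

delivered = []
process_outbox(delivered.append)
print(delivered)  # ['user@example.com']
```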
But perhaps all you need is a correctly configured mailserver on the local machine. This probably would allow you to dump mails synchronously into the local mailserver and thus make your whole software stack much more simple. |
Advice on Python/Django and message queues | 454,944 | 40 | 2009-01-18T10:42:53Z | 456,593 | 23 | 2009-01-19T05:16:07Z | [
"python",
"django",
"message-queue"
] | I have an application in Django, that needs to send a large number of emails to users in various use cases. I don't want to handle this synchronously within the application for obvious reasons.
Has anyone any recommendations for a message queuing server which integrates well with Python, or they have used on a Django project? The rest of my stack is Apache, mod\_python, MySQL. | In your specific case, where it's just an email queue, I would take the easy way out and use [django-mailer](http://code.google.com/p/django-mailer/). As a nice side bonus there are other pluggable projects that are smart enough to take advantage of django-mailer when they see it in the stack.
As for more general queue solutions, I haven't been able to try any of these yet, but here's a list of ones that look more interesting to me:
1. [pybeanstalk/beanstalkd](http://parand.com/say/index.php/2008/10/12/beanstalkd-python-basic-tutorial/)
2. [python interface to gearman](http://code.sixapart.com/trac/gearman/browser/trunk/api/python/lib/gearman) (which is probably much more interesting now with the release of the [C version of gearman](http://www.gearmanproject.org/doku.php))
3. [memcacheQ](http://memcachedb.org/memcacheq/)
4. [stomp](http://morethanseven.net/2008/09/14/using-python-and-stompserver-get-started-message-q/)
5. [Celery](http://docs.celeryproject.org/en/latest/getting-started/introduction.html) |
Using an ordered dict as object dictionary in python | 455,059 | 8 | 2009-01-18T12:33:03Z | 455,087 | 8 | 2009-01-18T12:59:34Z | [
"python",
"ordereddictionary"
] | I don't know why this doesn't work:
I'm using the [odict](http://dev.pocoo.org/hg/sandbox/raw-file/tip/odict.py) class from [PEP 372](http://www.python.org/dev/peps/pep-0372/), but I want to use it as a `__dict__` member, i.e.:
```
class Bag(object):
    def __init__(self):
        self.__dict__ = odict()
```
But for some reason I'm getting weird results. This works:
```
>>> b = Bag()
>>> b.apple = 1
>>> b.apple
1
>>> b.banana = 2
>>> b.banana
2
```
But trying to access the actual dictionary doesn't work:
```
>>> b.__dict__.items()
[]
>>> b.__dict__
odict.odict([])
```
And it gets weirder:
```
>>> b.__dict__['tomato'] = 3
>>> b.tomato
3
>>> b.__dict__
odict.odict([('tomato', 3)])
```
I'm feeling very stupid. Can you help me out? | The closest answer to your question that I can find is at <http://mail.python.org/pipermail/python-bugs-list/2006-April/033155.html>.
Basically, if `__dict__` is not an actual `dict()`, then it is ignored, and attribute lookup fails.
The alternative for this is to keep the odict as a member, and override `__getattr__` and `__setattr__` accordingly.
```
>>> class A(object) :
...     def __init__(self) :
...         self.__dict__['_odict'] = odict()
...     def __getattr__(self, value) :
...         return self.__dict__['_odict'][value]
...     def __setattr__(self, key, value) :
...         self.__dict__['_odict'][key] = value
...
>>> a = A()
>>> a
<__main__.A object at 0xb7bce34c>
>>> a.x = 1
>>> a.x
1
>>> a.y = 2
>>> a.y
2
>>> a._odict
odict.odict([('x', 1), ('y', 2)])
``` |
Break on exception in pydev | 455,552 | 40 | 2009-01-18T17:39:09Z | 455,556 | 16 | 2009-01-18T17:43:29Z | [
"python",
"eclipse",
"debugging",
"exception",
"pydev"
] | Is it possible to get the pydev debugger to break on exception? | ~~On **any** exception?~~
If my memory serves me right, in PyDev (in Eclipse) this is possible.
---
**EDIT:** went through it again, checked [pdb documentation](http://docs.python.org/library/pdb.html), can't find a way to set an exception breakpoint.
If I may suggest a really crude workaround, but if you must, you can call your program from within a `try-except` block, set a breakpoint there, and once it breaks in the `except` block just go up the stack and debug your error.
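A minimal sketch of that crude workaround (all names here are invented for illustration):

```python
def main():
    # stand-in for the real program entry point
    raise ValueError("boom")

caught = None
try:
    main()
except Exception as exc:
    caught = exc  # <-- put the breakpoint on this line, then walk up the stack
print(type(caught).__name__)  # ValueError
```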
**Another edit** This functionality [has been added to PyDev](http://stackoverflow.com/a/6655894/17523) |
Break on exception in pydev | 455,552 | 40 | 2009-01-18T17:39:09Z | 6,655,894 | 35 | 2011-07-11T20:09:05Z | [
"python",
"eclipse",
"debugging",
"exception",
"pydev"
] | Is it possible to get the pydev debugger to break on exception? | This was added by the PyDev author, under Run > Manage Python Exception Breakpoints |
JSON datetime between Python and JavaScript | 455,580 | 331 | 2009-01-18T17:51:11Z | 455,863 | 21 | 2009-01-18T20:51:42Z | [
"javascript",
"python",
"json"
] | I want to send a datetime.datetime object in serialized form from Python using [JSON](http://en.wikipedia.org/wiki/JSON) and de-serialize in JavaScript using JSON. What is the best way to do this? | If you're certain that only Javascript will be consuming the JSON, I prefer to pass Javascript `Date` objects directly.
The `ctime()` method on `datetime` objects will return a string that the Javascript Date object can understand.
```
import datetime
date = datetime.datetime.today()
json = '{"mydate":new Date("%s")}' % date.ctime()
```
Javascript will happily use that as an object literal, and you've got your Date object built right in. |
JSON datetime between Python and JavaScript | 455,580 | 331 | 2009-01-18T17:51:11Z | 456,032 | 66 | 2009-01-18T22:26:56Z | [
"javascript",
"python",
"json"
] | I want to send a datetime.datetime object in serialized form from Python using [JSON](http://en.wikipedia.org/wiki/JSON) and de-serialize in JavaScript using JSON. What is the best way to do this? | For cross language projects I found out that strings containing [RfC 3339](http://www.ietf.org/rfc/rfc3339.txt) dates are the best way to go. A RfC 3339 date looks like this:
```
1985-04-12T23:20:50.52Z
```
I think most of the format is obvious. The only somewhat unusual thing may be the "Z" at the end. It stands for GMT/UTC. You could also add a timezone offset like +02:00 for CEST (Germany in summer). I personally prefer to keep everything in UTC until it is displayed.
For displaying, comparisons and storage you can leave it in string format across all languages. If you need the date for calculations, it is easy to convert it back to a native date object in most languages.
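In Python, for example, the conversion back can be sketched like this (assumes a plain UTC stamp with a trailing 'Z' and no fractional seconds):

```python
import datetime

# Parse an RFC 3339 / ISO 8601 UTC string back into a naive datetime
stamp = "1985-04-12T23:20:50Z"
dt = datetime.datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ")
print(dt)  # 1985-04-12 23:20:50
```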
So generate the JSON like this:
```
json.dumps(datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'))
```
Unfortunately JavaScript's Date constructor doesn't accept RfC 3339 strings but there are many [parsers](http://blog.dansnetwork.com/2008/11/01/javascript-iso8601rfc3339-date-parser/) available on the Internet.
[huTools.hujson](https://github.com/hudora/huTools/blob/master/huTools/hujson.py) tries to handle the most common encoding issues you might come across in Python code including date/datetime objects while handling timezones correctly. |
JSON datetime between Python and JavaScript | 455,580 | 331 | 2009-01-18T17:51:11Z | 2,680,060 | 347 | 2010-04-21T03:09:43Z | [
"javascript",
"python",
"json"
] | I want to send a datetime.datetime object in serialized form from Python using [JSON](http://en.wikipedia.org/wiki/JSON) and de-serialize in JavaScript using JSON. What is the best way to do this? | You can add the 'default' parameter to json.dumps to handle this:
```
date_handler = lambda obj: (
obj.isoformat()
if isinstance(obj, datetime.datetime)
or isinstance(obj, datetime.date)
else None
)
json.dumps(datetime.datetime.now(), default=date_handler)
'"2010-04-20T20:08:21.634121"'
```
Which is [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) format.
A more comprehensive default handler function:
```
def handler(obj):
if hasattr(obj, 'isoformat'):
return obj.isoformat()
elif isinstance(obj, ...):
return ...
else:
raise TypeError, 'Object of type %s with value of %s is not JSON serializable' % (type(obj), repr(obj))
```
Update: Added output of type as well as value.
Update: Also handle date |
JSON datetime between Python and JavaScript | 455,580 | 331 | 2009-01-18T17:51:11Z | 3,049,307 | 46 | 2010-06-15T21:45:25Z | [
"javascript",
"python",
"json"
] | I want to send a datetime.datetime object in serialized form from Python using [JSON](http://en.wikipedia.org/wiki/JSON) and de-serialize in JavaScript using JSON. What is the best way to do this? | Using `json`, you can subclass JSONEncoder and override the default() method to provide your own custom serializers:
```
import json
import datetime
class DateTimeJSONEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime.datetime):
return obj.isoformat()
else:
return super(DateTimeJSONEncoder, self).default(obj)
```
Then, you can call it like this:
```
>>> DateTimeJSONEncoder().encode([datetime.datetime.now()])
'["2010-06-15T14:42:28"]'
``` |
JSON datetime between Python and JavaScript | 455,580 | 331 | 2009-01-18T17:51:11Z | 3,235,787 | 29 | 2010-07-13T09:26:57Z | [
"javascript",
"python",
"json"
] | I want to send a datetime.datetime object in serialized form from Python using [JSON](http://en.wikipedia.org/wiki/JSON) and de-serialize in JavaScript using JSON. What is the best way to do this? | Here's a fairly complete solution for recursively encoding and decoding datetime.datetime and datetime.date objects using the standard library `json` module. This needs Python >= 2.6 since the `%f` format code in the datetime.datetime.strptime() format string is only supported since then. For Python 2.5 support, drop the `%f` and strip the microseconds from the ISO date string before trying to convert it, but you'll lose microseconds precision, of course. For interoperability with ISO date strings from other sources, which may include a time zone name or UTC offset, you may also need to strip some parts of the date string before the conversion. For a complete parser for ISO date strings (and many other date formats) see the third-party [dateutil](http://labix.org/python-dateutil) module.
Decoding only works when the ISO date strings are values in a JavaScript
literal object notation or in nested structures within an object. ISO date
strings that are items of a top-level array will *not* be decoded.
I.e. this works:
```
>>> date = datetime.datetime.now()
>>> json = dumps(dict(foo='bar', innerdict=dict(date=date)))
>>> json
'{"innerdict": {"date": "2010-07-15T13:16:38.365579"}, "foo": "bar"}'
>>> loads(json)
{u'innerdict': {u'date': datetime.datetime(2010, 7, 15, 13, 16, 38, 365579)},
u'foo': u'bar'}
```
And this too:
```
>>> json = dumps(['foo', 'bar', dict(date=date)])
>>> json
'["foo", "bar", {"date": "2010-07-15T13:16:38.365579"}]'
>>> loads(json)
[u'foo', u'bar', {u'date': datetime.datetime(2010, 7, 15, 13, 16, 38, 365579)}]
```
But this doesn't work as expected:
```
>>> json = dumps(['foo', 'bar', date])
>>> json
'["foo", "bar", "2010-07-15T13:16:38.365579"]'
>>> loads(json)
[u'foo', u'bar', u'2010-07-15T13:16:38.365579']
```
Here's the code:
```
__all__ = ['dumps', 'loads']
import datetime
try:
import json
except ImportError:
import simplejson as json
class JSONDateTimeEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, (datetime.date, datetime.datetime)):
return obj.isoformat()
else:
return json.JSONEncoder.default(self, obj)
def datetime_decoder(d):
if isinstance(d, list):
pairs = enumerate(d)
elif isinstance(d, dict):
pairs = d.items()
result = []
for k,v in pairs:
if isinstance(v, basestring):
try:
# The %f format code is only supported in Python >= 2.6.
# For Python <= 2.5 strip off microseconds
# v = datetime.datetime.strptime(v.rsplit('.', 1)[0],
# '%Y-%m-%dT%H:%M:%S')
v = datetime.datetime.strptime(v, '%Y-%m-%dT%H:%M:%S.%f')
except ValueError:
try:
v = datetime.datetime.strptime(v, '%Y-%m-%d').date()
except ValueError:
pass
elif isinstance(v, (dict, list)):
v = datetime_decoder(v)
result.append((k, v))
if isinstance(d, list):
return [x[1] for x in result]
elif isinstance(d, dict):
return dict(result)
def dumps(obj):
return json.dumps(obj, cls=JSONDateTimeEncoder)
def loads(obj):
return json.loads(obj, object_hook=datetime_decoder)
if __name__ == '__main__':
mytimestamp = datetime.datetime.utcnow()
mydate = datetime.date.today()
data = dict(
foo = 42,
bar = [mytimestamp, mydate],
date = mydate,
timestamp = mytimestamp,
struct = dict(
date2 = mydate,
timestamp2 = mytimestamp
)
)
print repr(data)
jsonstring = dumps(data)
print jsonstring
print repr(loads(jsonstring))
``` |
JSON datetime between Python and JavaScript | 455,580 | 331 | 2009-01-18T17:51:11Z | 6,130,825 | 52 | 2011-05-25T20:55:37Z | [
"javascript",
"python",
"json"
] | I want to send a datetime.datetime object in serialized form from Python using [JSON](http://en.wikipedia.org/wiki/JSON) and de-serialize in JavaScript using JSON. What is the best way to do this? | I've worked it out.
Let's say you have a Python datetime object, *d*, created with datetime.now(). Its value is:
```
datetime.datetime(2011, 5, 25, 13, 34, 5, 787000)
```
You can serialize it to JSON as an ISO 8601 datetime string:
```
import json
json.dumps(d.isoformat())
```
The example datetime object would be serialized as:
```
'"2011-05-25T13:34:05.787000"'
```
This value, once received in the Javascript layer, can construct a Date object:
```
var d = new Date("2011-05-25T13:34:05.787000");
```
As of Javascript 1.8.5, Date objects have a toJSON method, which returns a string in a standard format. To serialize the above Javascript object back to JSON, therefore, the command would be:
```
d.toJSON()
```
Which would give you:
```
'2011-05-25T20:34:05.787Z'
```
This string, once received in Python, could be deserialized back to a datetime object:
```
datetime.strptime('2011-05-25T20:34:05.787Z', '%Y-%m-%dT%H:%M:%S.%fZ')
```
This results in the following datetime object, which is the same one you started with and therefore correct:
```
datetime.datetime(2011, 5, 25, 20, 34, 5, 787000)
``` |
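The Python side of this round trip can be condensed into one runnable sketch:

```python
import json
import datetime

# Serialize a datetime to JSON as an ISO 8601 string, then parse the
# string back with the matching strptime format.
d = datetime.datetime(2011, 5, 25, 13, 34, 5, 787000)
payload = json.dumps(d.isoformat())
restored = datetime.datetime.strptime(json.loads(payload),
                                      '%Y-%m-%dT%H:%M:%S.%f')
assert restored == d
```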
JSON datetime between Python and JavaScript | 455,580 | 331 | 2009-01-18T17:51:11Z | 32,224,522 | 7 | 2015-08-26T10:45:12Z | [
"javascript",
"python",
"json"
] | I want to send a datetime.datetime object in serialized form from Python using [JSON](http://en.wikipedia.org/wiki/JSON) and de-serialize in JavaScript using JSON. What is the best way to do this? | Late in the game... :)
A very simple solution is to patch the json module default.
For example:
```
import json
import datetime
json.JSONEncoder.default = lambda self,obj: (obj.isoformat() if isinstance(obj, datetime.datetime) else None)
```
Now, you can use **json.dumps()** as if it had always supported datetime...
```
json.dumps({'created':datetime.datetime.now()})
```
This makes sense if you require this extension to the json module to always kick in and wish to not change the way you or others use json serialization (either in existing code or not).
Note that some may consider patching libraries in that way as bad practice.
Special care needs to be taken in case you wish to extend your application in more than one way - in such a case, I suggest using the solution by ramen or JT and choosing the proper json extension in each case. |
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 455,634 | 687 | 2009-01-18T18:23:53Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | You are running into the old problem with floating point numbers: not all numbers can be represented exactly. The command line is just showing you the full floating point form from memory. In floating point, your rounded version is the same number. Since computers are binary, they store floating point numbers as an integer divided by a power of two, so 13.95 will be represented in a similar fashion to 125650429603636838/(2\*\*53). Double precision numbers have 53 bits (16 digits) of precision, and regular floats have 24 bits (8 digits) of precision. The [floating point in Python uses double precision](http://docs.python.org/tutorial/floatingpoint.html) to store the values.
for example
```
>>> 125650429603636838/(2**53)
13.949999999999999
>>> 234042163/(2**24)
13.949999988079071
>>> a=13.946
>>> print(a)
13.946
>>> print("%.2f" % a)
13.95
>>> round(a,2)
13.949999999999999
>>> print("%.2f" % round(a,2))
13.95
>>> print("{0:.2f}".format(a))
13.95
>>> print("{0:.2f}".format(round(a,2)))
13.95
>>> print("{0:.15f}".format(round(a,2)))
13.949999999999999
```
If you are after only two decimal places, as with currency, then you have a couple of better choices: use integers and store values in cents, not dollars, and then divide by 100 to convert to dollars, or use a fixed-point number like [decimal](http://docs.python.org/library/decimal.html). |
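The claim that a float is stored as an integer divided by a power of two can be checked directly from Python (a quick sketch using `float.as_integer_ratio()`, available since Python 2.6):

```python
# Every finite float is exactly an integer over a power of two.
n, d = (13.95).as_integer_ratio()
assert float(n) / d == 13.95   # reconstructs the stored value exactly
assert d & (d - 1) == 0        # the denominator is a power of two
```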
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 455,658 | 36 | 2009-01-18T18:31:50Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | What you can do is modify the output format:
```
>>> a = 13.95
>>> a
13.949999999999999
>>> print "%.2f" % a
13.95
``` |
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 455,662 | 8 | 2009-01-18T18:33:45Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | It's doing exactly what you told it to do, and working correctly. Read more about [floating point confusion](http://www.lahey.com/float.htm) and maybe try [Decimal](http://docs.python.org/library/decimal.html) objects instead. |
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 455,678 | 71 | 2009-01-18T18:40:03Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | Most numbers cannot be exactly represented in floats. If you want to round the number because that's what your mathematical formula or algorithm requires, then you want to use round. If you just want to restrict the display to a certain precision, then don't even use round and just format it as that string. (If you want to display it with some alternate rounding method, and there are tons, then you need to mix the two approaches.)
```
>>> "%.2f" % 3.14159
'3.14'
>>> "%.2f" % 13.9499999
'13.95'
```
And lastly, though perhaps most importantly, if you want *exact* math then you don't want floats at all. The usual example is dealing with money and to store 'cents' as an integer. |
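A sketch of the cents-as-integer idea from the last sentence (the function name is made up for illustration):

```python
def add_prices_in_cents(*prices_in_cents):
    # Integer arithmetic is exact; only convert to dollars for display.
    total = sum(prices_in_cents)
    return "%d.%02d" % divmod(total, 100)

# 13.95 + 0.10, stored as 1395 and 10 cents:
assert add_prices_in_cents(1395, 10) == "14.05"
```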
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 456,343 | 13 | 2009-01-19T02:05:53Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | The python tutorial has an appendix called: [Floating Point Arithmetic: Issues and Limitations](http://docs.python.org/tutorial/floatingpoint.html). Read it. It explains what is happening and why python is doing its best. It has even an example that matches yours. Let me quote a bit:
> ```
> >>> 0.1
> 0.10000000000000001
> ```
>
> you may be tempted to use the `round()`
> function to chop it back to the single
> digit you expect. But that makes no
> difference:
>
> ```
> >>> round(0.1, 1)
> 0.10000000000000001
> ```
>
> The problem is that the binary
> floating-point value stored for `0.1`
> was already the best possible binary
> approximation to `1/10`, so trying to
> round it again can't make it better:
> it was already as good as it gets.
>
> Another consequence is that since `0.1`
> is not exactly `1/10`, summing ten
> values of `0.1` may not yield exactly
> `1.0`, either:
>
> ```
> >>> sum = 0.0
> >>> for i in range(10):
> ... sum += 0.1
> ...
> >>> sum
> 0.99999999999999989
> ```
One alternative solution to your problem would be using the [`decimal`](http://docs.python.org/library/decimal.html) module. |
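A short sketch of that `decimal` alternative (`quantize()` rounds to a fixed number of places, which plain `round()` on a float cannot reliably do):

```python
from decimal import Decimal

# Ten decimal 0.1s sum to exactly 1; ten float 0.1s do not.
assert sum([Decimal('0.1')] * 10) == Decimal('1.0')
assert sum([0.1] * 10) != 1.0

# quantize() does the two-place rounding the question asked for.
assert str(Decimal('13.949999999999999').quantize(Decimal('0.01'))) == '13.95'
```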
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 6,539,677 | 249 | 2011-06-30T18:53:13Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | There are new format specifications, here:
<http://docs.python.org/library/string.html#format-specification-mini-language>
You can do the same as:
```
"{0:.2f}".format(13.949999999999999)
```
**Note** that the above returns a string. in order to get as float, simply wrap with `float(...)`
```
float("{0:.2f}".format(13.949999999999999))
```
**Note** that wrapping with `float()` doesn't change anything:
```
>>> x = 13.949999999999999999
>>> x
13.95
>>> g = float("{0:.2f}".format(x))
>>> g
13.95
>>> x == g
True
>>> h = round(x, 2)
>>> h
13.95
>>> x == h
True
``` |
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 18,438,167 | 62 | 2013-08-26T06:46:25Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | Try the code below:
```
>>> a = 0.99334
>>> a = int((a * 100) + 0.5) / 100.0 # Adding 0.5 rounds it up
>>> print a
0.99
``` |
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 20,512,207 | 22 | 2013-12-11T06:37:46Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | With python < 3 (e.g. 2.6 or 2.7), there are two ways to do so.
```
# Option one
older_method_string = "%.9f" % numvar
# Option two (note ':' before the '.9f')
newer_method_string = "{:.9f}".format(numvar)
```
But note that for Python 3 versions (e.g. 3.2 or 3.3), option two is [preferred](http://docs.python.org/2/library/stdtypes.html#str.format)
For more info on option two, I suggest this link on [string formatting from the python docs](http://docs.python.org/2/library/string.html#formatstrings).
And for more info on option one, [this link will suffice and has info on the various flags.](http://docs.python.org/2/library/stdtypes.html#string-formatting)
Reference: [Convert floating point number to certain precision, then copy to String](http://stackoverflow.com/questions/15263597/python-convert-floating-point-number-to-certain-precision-then-copy-to-string) |
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 28,142,318 | 50 | 2015-01-25T22:26:58Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | I feel that the simplest approach is to use the `format()` function.
For example:
```
a = 13.949999999999999
format(a, '.2f')
13.95
```
This produces a float number as a string rounded to two decimal points. |
Limiting floats to two decimal points | 455,612 | 610 | 2009-01-18T18:16:41Z | 35,117,668 | 8 | 2016-01-31T18:33:53Z | [
"python",
"floating-point",
"precision"
] | I want `a` to be rounded to *13.95*.
```
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
```
The [`round`](https://docs.python.org/2/library/functions.html#round) function does not work the way I expected. | ## **tldr;)**
The rounding problem at input / output has been **solved by Python 2.7.0** definitively.
```
import random
for x in iter(random.random, None): # verify FOREVER fixed :-)
assert float(repr(x)) == x # Reversible repr() conversion
assert len(repr(round(x, 10))) <= 12 # Smart decimal places in repr() after round
if x >= 0.1: # Implicit rounding to 12 significant digits
assert str(x) == repr(round(x, 12)) # by str() is good enough for small errors
y = 1000 * x # Decimal type is excessive for shopping
        assert str(x) == repr(round(x, 12 - 3)) # in the supermarket with Python 2.7+ :-)
```
## docs
See the [Release notes Python 2.7 - Other Language Changes](https://docs.python.org/2.7/whatsnew/2.7.html#other-language-changes) the fourth paragraph:
> **Conversions** between floating-point numbers and strings are now **correctly rounded** on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and de-serializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion.
>
> Related to this, the **repr()** of a floating-point number x now returns a result based on the **shortest decimal string that's guaranteed to round back to x** under correct rounding (with round-half-to-even rounding mode). Previously it gave a string based on rounding x to 17 decimal digits.
[The related issue](https://bugs.python.org/issue7117)
---
**EDIT - more info:** The formatting of `float` before Python 2.7 was similar to the current `numpy.float64`. Both types use the same 64 bit [IEEE 754](https://en.wikipedia.org/wiki/Double-precision_floating-point_format) double precision with a 52 bit mantissa. A big difference is that `np.float64.__repr__` is frequently formatted with an excessive number of decimal digits so that no bit can be lost, but no valid IEEE 754 number exists between 13.949999999999999 and 13.950000000000001. The result is not nice, and the conversion `repr(float(number_as_string))` is not reversible. On the other hand: `float.__repr__` is formatted so that every digit is important; the sequence is without gaps and the conversion is reversible. Simply: if you have a numpy.float64 number, convert it to a normal float in order to be formatted for humans, not for numeric processors; otherwise nothing more is necessary with Python 2.7+. |
Python 3 development and distribution challenges | 455,717 | 8 | 2009-01-18T19:01:08Z | 455,840 | 9 | 2009-01-18T20:38:22Z | [
"python",
"version-control",
"python-3.x"
] | Suppose I've developed a general-purpose end user utility written in Python. Previously, I had just one version available which was suitable for Python later than version 2.3 or so. It was sufficient to say, "download Python if you need to, then run this script". There was just one version of the script in source control (I'm using Git) to keep track of.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
1. Maintain two different scripts in the same branch, making improvements to both simultaneously.
2. Maintain two separate branches, and merge common changes back and forth as development proceeds.
3. Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
1. Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
2. Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
3. One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
Any other ideas? | **Edit:** my original answer was based on the state of 2009, with Python 2.6 and 3.0 as the current versions. Now, with Python 2.7 and 3.3, there are other options. In particular, it is now quite feasible to use a single code base for Python 2 and Python 3.
See [Porting Python 2 Code to Python 3](http://docs.python.org/3/howto/pyporting.html)
**Original answer:**
The [official recommendation](http://docs.python.org/3.0/whatsnew/3.0.html#porting-to-python-3-0) says:
> For porting existing Python 2.5 or 2.6
> source code to Python 3.0, the best
> strategy is the following:
>
> 1. (Prerequisite:) Start with excellent test coverage.
> 2. Port to Python 2.6. This should be no more work than the average port
> from Python 2.x to Python 2.(x+1).
> Make sure all your tests pass.
> 3. (Still using 2.6:) Turn on the -3 command line switch. This enables
> warnings about features that will be
> removed (or change) in 3.0. Run your
> test suite again, and fix code that
> you get warnings about until there are
> no warnings left, and all your tests
> still pass.
> 4. Run the 2to3 source-to-source translator over your source code tree.
> (See 2to3 - Automated Python 2 to 3
> code translation for more on this
> tool.) Run the result of the
> translation under Python 3.0. Manually
> fix up any remaining issues, fixing
> problems until all tests pass again.
>
> It is not recommended to try to write
> source code that runs unchanged under
> both Python 2.6 and 3.0; you'd have to
> use a very contorted coding style,
> e.g. avoiding print statements,
> metaclasses, and much more. If you are
> maintaining a library that needs to
> support both Python 2.6 and Python
> 3.0, the best approach is to modify step 3 above by editing the 2.6
> version of the source code and running
> the 2to3 translator again, rather than
> editing the 3.0 version of the source
> code.
Ideally, you would end up with a single version, that is 2.6 compatible and can be translated to 3.0 using 2to3. In practice, you might not be able to achieve this goal completely. So you might need some manual modifications to get it to work under 3.0.
I would maintain these modifications in a branch, like your option 2. However, rather than maintaining the final 3.0-compatible version in this branch, I would consider to apply the manual modifications *before* the 2to3 translations, and put this modified 2.6 code into your branch. The advantage of this method would be that the difference between this branch and the 2.6 trunk would be rather small, and would only consist of manual changes, not the changes made by 2to3. This way, the separate branches should be easier to maintain and merge, and you should be able to benefit from future improvements in 2to3.
Alternatively, take a bit of a "wait and see" approach. Proceed with your porting only so far as you can go with a single 2.6 version plus 2to3 translation, and postpone the remaining manual modification until you really need a 3.0 version. Maybe by this time, you don't need any manual tweaks anymore... |
Is there any advantage in using a Python class? | 456,001 | 15 | 2009-01-18T22:09:07Z | 456,008 | 29 | 2009-01-18T22:12:04Z | [
"python",
"class",
"static-methods"
] | I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions? | There are none. This is what modules are for: grouping related functions. Using a class full of static methods makes me cringe from Javaitis. The only time I would use a static function is if the function is an integral part of the class. (In fact, I'd probably want to use a class method anyway.) |
Is there any advantage in using a Python class? | 456,001 | 15 | 2009-01-18T22:09:07Z | 456,018 | 11 | 2009-01-18T22:17:46Z | [
"python",
"class",
"static-methods"
] | I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions? | No. It would be better to make them functions and if they are related, place them into their own module. For instance, if you have a class like this:
```
class Something(object):
@staticmethod
def foo(x):
return x + 5
@staticmethod
def bar(x, y):
return y + 5 * x
```
Then it would be better to have a module like,
```
# something.py
def foo(x):
return x + 5
def bar(x, y):
return y + 5 * x
```
That way, you use them in the following way:
```
import something
print something.foo(10)
print something.bar(12, 14)
```
Don't be afraid of namespaces. `;-)` |
How do I do what strtok() does in C, in Python? | 456,084 | 7 | 2009-01-18T23:03:45Z | 456,089 | 26 | 2009-01-18T23:09:34Z | [
"python"
] | I am learning Python and trying to figure out an efficient way to tokenize a string of numbers separated by commas into a list. Well formed cases work as I expect, but less well formed cases not so much.
If I have this:
```
A = '1,2,3,4'
B = [int(x) for x in A.split(',')]
B results in [1, 2, 3, 4]
```
which is what I expect, but if the string is something more like
```
A = '1,,2,3,4,'
```
if I'm using the same list comprehension expression for B as above, I get an exception. I think I understand why (because some of the "x" string values are not integers), but I'm thinking that there would be a way to parse this still quite elegantly such that tokenization of the string A works a bit more directly, like ***strtok(A,",\n\t")*** would have done when called iteratively in C.
To be clear what I am asking; I am looking for an elegant/efficient/typical way in Python to have all of the following example cases of strings:
```
A='1,,2,3,\n,4,\n'
A='1,2,3,4'
A=',1,2,3,4,\t\n'
A='\n\t,1,2,3,,4\n'
```
return with the same list of:
```
B=[1,2,3,4]
```
via some sort of compact expression. | How about this:
```
A = '1, 2,,3,4 '
B = [int(x) for x in A.split(',') if x.strip()]
```
x.strip() trims whitespace from the string, which will make it empty if the string is all whitespace. An empty string is "false" in a boolean context, so it's filtered by the if part of the list comprehension. |
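As a sketch, the same comprehension checked against all four messy inputs from the question (`int()` itself ignores surrounding whitespace, which is why a token like `'4\n'` still parses):

```python
def parse_ints(s):
    # Split on commas, drop empty/whitespace-only tokens, convert the rest.
    return [int(x) for x in s.split(',') if x.strip()]

for a in ['1,,2,3,\n,4,\n', '1,2,3,4', ',1,2,3,4,\t\n', '\n\t,1,2,3,,4\n']:
    assert parse_ints(a) == [1, 2, 3, 4]
```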
Can't get Python to import from a different folder | 456,481 | 40 | 2009-01-19T03:53:35Z | 456,491 | 64 | 2009-01-19T04:00:15Z | [
"python"
] | I can't seem to get Python to import a module in a subfolder. I get the error when I try to create an instance of the class from the imported module, but the import itself succeeds. Here is my directory structure:
```
Server
-server.py
-Models
--user.py
```
Here's the contents of server.py:
```
from sys import path
from os import getcwd
path.append(getcwd() + "\\models") #Yes, i'm on windows
print path
import user
u=user.User() #error on this line
```
And user.py:
```
class User(Entity):
using_options(tablename='users')
username = Field(String(15))
password = Field(String(64))
email = Field(String(50))
status = Field(Integer)
created = Field(DateTime)
```
The error is:
AttributeError: 'module' object has no attribute 'User' | I believe you need to create a file called `__init__.py` in the Models directory so that Python treats it as a package.
Then you can do:
```
from Models.user import User
```
You can include code in the `__init__.py` (for instance initialization code that a few different classes need) or leave it blank. But it must be there. |
Can't get Python to import from a different folder | 456,481 | 40 | 2009-01-19T03:53:35Z | 456,494 | 18 | 2009-01-19T04:02:22Z | [
"python"
] | I can't seem to get Python to import a module in a subfolder. I get the error when I try to create an instance of the class from the imported module, but the import itself succeeds. Here is my directory structure:
```
Server
-server.py
-Models
--user.py
```
Here's the contents of server.py:
```
from sys import path
from os import getcwd
path.append(getcwd() + "\\models") #Yes, i'm on windows
print path
import user
u=user.User() #error on this line
```
And user.py:
```
class User(Entity):
using_options(tablename='users')
username = Field(String(15))
password = Field(String(64))
email = Field(String(50))
status = Field(Integer)
created = Field(DateTime)
```
The error is:
AttributeError: 'module' object has no attribute 'User' | You have to create `__init__.py` in the `Models` subfolder. The file may be empty. It defines a package.
Then you can do:
```
from Models.user import User
```
Read all about it in python tutorial, [here](http://docs.python.org/tutorial/modules.html#packages).
There is also a good article about file organization of python projects [here](http://jcalderone.livejournal.com/39794.html). |
Can't get Python to import from a different folder | 456,481 | 40 | 2009-01-19T03:53:35Z | 456,495 | 7 | 2009-01-19T04:02:31Z | [
"python"
] | I can't seem to get Python to import a module in a subfolder. I get the error when I try to create an instance of the class from the imported module, but the import itself succeeds. Here is my directory structure:
```
Server
-server.py
-Models
--user.py
```
Here's the contents of server.py:
```
from sys import path
from os import getcwd
path.append(getcwd() + "\\models") #Yes, i'm on windows
print path
import user
u=user.User() #error on this line
```
And user.py:
```
class User(Entity):
using_options(tablename='users')
username = Field(String(15))
password = Field(String(64))
email = Field(String(50))
status = Field(Integer)
created = Field(DateTime)
```
The error is:
AttributeError: 'module' object has no attribute 'User' | You're missing \_\_init\_\_.py. From the Python tutorial:
> The \_\_init\_\_.py files are required to
> make Python treat the directories as
> containing packages; this is done to
> prevent directories with a common
> name, such as string, from
> unintentionally hiding valid modules
> that occur later on the module search
> path. In the simplest case,
> \_\_init\_\_.py can just be an empty file, but it can also execute initialization
> code for the package or set the
> \_\_all\_\_ variable, described later.
Put an empty file named \_\_init\_\_.py in your Models directory, and all should be golden. |
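To see the fix work end to end, here is a self-contained sketch that builds the layout in a temporary directory (the real user.py depends on Elixir's Entity/Field; a plain class stands in for it here so the example runs standalone):

```python
import os
import sys
import tempfile

# Build Server/Models/{__init__.py, user.py} in a scratch directory.
root = tempfile.mkdtemp()
models = os.path.join(root, "Models")
os.makedirs(models)
open(os.path.join(models, "__init__.py"), "w").close()
with open(os.path.join(models, "user.py"), "w") as f:
    f.write("class User(object):\n    username = None\n")

sys.path.insert(0, root)        # what server.py's path.append was trying to do
from Models.user import User    # works because __init__.py marks the package

assert User().username is None
```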
Can't get Python to import from a different folder | 456,481 | 40 | 2009-01-19T03:53:35Z | 457,630 | 9 | 2009-01-19T13:51:51Z | [
"python"
] | I can't seem to get Python to import a module in a subfolder. I get the error when I try to create an instance of the class from the imported module, but the import itself succeeds. Here is my directory structure:
```
Server
-server.py
-Models
--user.py
```
Here's the contents of server.py:
```
from sys import path
from os import getcwd
path.append(getcwd() + "\\models") #Yes, i'm on windows
print path
import user
u=user.User() #error on this line
```
And user.py:
```
class User(Entity):
    using_options(tablename='users')
    username = Field(String(15))
    password = Field(String(64))
    email = Field(String(50))
    status = Field(Integer)
    created = Field(DateTime)
```
The error is:
AttributeError: 'module' object has no attribute 'User' | > import user
>
> u=user.User() #error on this line
Because of the lack of \_\_init\_\_ mentioned above, you would expect an ImportError which would make the problem clearer.
You don't get one because 'user' is also an existing module in the standard library. Your import statement grabs that one and tries to find the User class inside it; that doesn't exist and only then do you get the error.
It is generally a good idea to make your import absolute:
```
import Server.Models.user
```
to avoid this kind of ambiguity. Indeed from Python 2.7 'import user' won't look relative to the current module at all.
If you really want relative imports, you can have them explicitly in Python 2.5 and up using the somewhat ugly syntax:
```
from .user import User
``` |
Throttling with urllib2 | 456,649 | 8 | 2009-01-19T05:53:58Z | 456,668 | 18 | 2009-01-19T06:06:17Z | [
"python",
"urllib2",
"bandwidth-throttling"
] | Is it possible to easily cap the kbps when using `urllib2`?
If it is, any code examples or resources you could direct me to would be greatly appreciated. | There is the `urlretrieve(url, filename=None, reporthook=None, data=None)` function in the `urllib` module.
If you implement the `reporthook`-function/object as either a [token bucket](http://en.wikipedia.org/wiki/Token_bucket), or a leaky bucket, you have your global rate-limit.
**EDIT:** Upon closer examination I see that it isn't as easy to do a global rate-limit with `reporthook` as I thought. `reporthook` is only given the downloaded amount and the total size, which on their own isn't enough information to use with the token bucket. One way to get around it is by storing the last downloaded amount in each rate-limiter, but using a global token bucket.
---
**EDIT 2:** Combined both codes into one example.
```
"""Rate limiters with shared token bucket."""
import os
import sys
import threading
import time
import urllib
import urlparse
class TokenBucket(object):
"""An implementation of the token bucket algorithm.
source: http://code.activestate.com/recipes/511490/
>>> bucket = TokenBucket(80, 0.5)
>>> print bucket.consume(10)
True
>>> print bucket.consume(90)
False
"""
def __init__(self, tokens, fill_rate):
"""tokens is the total tokens in the bucket. fill_rate is the
rate in tokens/second that the bucket will be refilled."""
self.capacity = float(tokens)
self._tokens = float(tokens)
self.fill_rate = float(fill_rate)
self.timestamp = time.time()
self.lock = threading.RLock()
def consume(self, tokens):
"""Consume tokens from the bucket. Returns 0 if there were
sufficient tokens, otherwise the expected time until enough
tokens become available."""
self.lock.acquire()
tokens = max(tokens,self.tokens)
expected_time = (tokens - self.tokens) / self.fill_rate
if expected_time <= 0:
self._tokens -= tokens
self.lock.release()
return max(0,expected_time)
@property
def tokens(self):
self.lock.acquire()
if self._tokens < self.capacity:
now = time.time()
delta = self.fill_rate * (now - self.timestamp)
self._tokens = min(self.capacity, self._tokens + delta)
self.timestamp = now
value = self._tokens
self.lock.release()
return value
class RateLimit(object):
"""Rate limit a url fetch.
source: http://mail.python.org/pipermail/python-list/2008-January/472859.html
(but mostly rewritten)
"""
def __init__(self, bucket, filename):
self.bucket = bucket
self.last_update = 0
self.last_downloaded_kb = 0
self.filename = filename
self.avg_rate = None
def __call__(self, block_count, block_size, total_size):
total_kb = total_size / 1024.
downloaded_kb = (block_count * block_size) / 1024.
just_downloaded = downloaded_kb - self.last_downloaded_kb
self.last_downloaded_kb = downloaded_kb
predicted_size = block_size/1024.
wait_time = self.bucket.consume(predicted_size)
while wait_time > 0:
time.sleep(wait_time)
wait_time = self.bucket.consume(predicted_size)
now = time.time()
delta = now - self.last_update
if self.last_update != 0:
if delta > 0:
rate = just_downloaded / delta
if self.avg_rate is not None:
rate = 0.9 * self.avg_rate + 0.1 * rate
self.avg_rate = rate
else:
rate = self.avg_rate or 0.
print "%20s: %4.1f%%, %5.1f KiB/s, %.1f/%.1f KiB" % (
self.filename, 100. * downloaded_kb / total_kb,
rate, downloaded_kb, total_kb,
)
self.last_update = now
def main():
"""Fetch the contents of urls"""
if len(sys.argv) < 4:
print 'Syntax: %s rate url1 url2 ...' % sys.argv[0]
raise SystemExit(1)
rate_limit = float(sys.argv[1])
urls = sys.argv[2:]
bucket = TokenBucket(10*rate_limit, rate_limit)
print "rate limit = %.1f" % (rate_limit,)
threads = []
for url in urls:
path = urlparse.urlparse(url,'http')[2]
filename = os.path.basename(path)
print 'Downloading "%s" to "%s"...' % (url,filename)
rate_limiter = RateLimit(bucket, filename)
t = threading.Thread(
target=urllib.urlretrieve,
args=(url, filename, rate_limiter))
t.start()
threads.append(t)
for t in threads:
t.join()
print 'All downloads finished'
if __name__ == "__main__":
main()
``` |
Class factory in Python | 456,672 | 46 | 2009-01-19T06:08:16Z | 456,731 | 15 | 2009-01-19T06:39:00Z | [
"python",
"factory"
] | I'm new to Python and need some advice implementing the scenario below.
I have two classes for managing domains at two different registrars. Both have the same interface, e.g.
```
class RegistrarA(object):
    def __init__(self, domain):
        self.domain = domain

    def lookup(self):
        ...

    def register(self, info):
        ...
```
and
```
class RegistrarB(object):
    def __init__(self, domain):
        self.domain = domain

    def lookup(self):
        ...

    def register(self, info):
        ...
```
I would like to create a Domain class that, given a domain name, loads the correct registrar class based on the extension, e.g.
```
com = Domain('test.com') #load RegistrarA
com.lookup()
biz = Domain('test.biz') #load RegistrarB
biz.lookup()
```
I know this can be accomplished using a factory function (see below), but is this the best way of doing it or is there a better way using OOP features?
```
def factory(domain):
    if ...:
        return RegistrarA(domain)
    else:
        return RegistrarB(domain)
``` | Assuming you need separate classes for different registrars (though it's not obvious in your example) your solution looks okay, though **RegistrarA** and **RegistrarB** probably share functionality and could be derived from an [Abstract Base Class](http://docs.python.org/2/library/abc.html).
As an alternative to your `factory` function, you could specify a dict, mapping to your registrar classes:
```
Registrar = {'test.com': RegistrarA, 'test.biz': RegistrarB}
```
Then:
```
registrar = Registrar['test.com'](domain)
```
One quibble: You're not really doing a Class Factory here as you're returning instances rather than classes. |
Class factory in Python | 456,672 | 46 | 2009-01-19T06:08:16Z | 456,747 | 62 | 2009-01-19T06:49:03Z | [
"python",
"factory"
] | I'm new to Python and need some advice implementing the scenario below.
I have two classes for managing domains at two different registrars. Both have the same interface, e.g.
```
class RegistrarA(object):
    def __init__(self, domain):
        self.domain = domain

    def lookup(self):
        ...

    def register(self, info):
        ...
```
and
```
class RegistrarB(object):
    def __init__(self, domain):
        self.domain = domain

    def lookup(self):
        ...

    def register(self, info):
        ...
```
I would like to create a Domain class that, given a domain name, loads the correct registrar class based on the extension, e.g.
```
com = Domain('test.com') #load RegistrarA
com.lookup()
biz = Domain('test.biz') #load RegistrarB
biz.lookup()
```
I know this can be accomplished using a factory function (see below), but is this the best way of doing it or is there a better way using OOP features?
```
def factory(domain):
    if ...:
        return RegistrarA(domain)
    else:
        return RegistrarB(domain)
``` | I think using a function is fine.
The more interesting question is how do you determine which registrar to load? One option is to have an abstract base Registrar class which concrete implementations subclass, then iterate over its `__subclasses__()` calling an `is_registrar_for()` class method:
```
class Registrar(object):
    def __init__(self, domain):
        self.domain = domain

class RegistrarA(Registrar):
    @classmethod
    def is_registrar_for(cls, domain):
        return domain == 'foo.com'

class RegistrarB(Registrar):
    @classmethod
    def is_registrar_for(cls, domain):
        return domain == 'bar.com'

def Domain(domain):
    for cls in Registrar.__subclasses__():
        if cls.is_registrar_for(domain):
            return cls(domain)
    raise ValueError
print Domain('foo.com')
print Domain('bar.com')
```
This will let you transparently add new `Registrar`s and delegate the decision of which domains each supports, to them. |
Class factory in Python | 456,672 | 46 | 2009-01-19T06:08:16Z | 545,383 | 10 | 2009-02-13T09:51:01Z | [
"python",
"factory"
] | I'm new to Python and need some advice implementing the scenario below.
I have two classes for managing domains at two different registrars. Both have the same interface, e.g.
```
class RegistrarA(object):
    def __init__(self, domain):
        self.domain = domain

    def lookup(self):
        ...

    def register(self, info):
        ...
```
and
```
class RegistrarB(object):
    def __init__(self, domain):
        self.domain = domain

    def lookup(self):
        ...

    def register(self, info):
        ...
```
I would like to create a Domain class that, given a domain name, loads the correct registrar class based on the extension, e.g.
```
com = Domain('test.com') #load RegistrarA
com.lookup()
biz = Domain('test.biz') #load RegistrarB
biz.lookup()
```
I know this can be accomplished using a factory function (see below), but is this the best way of doing it or is there a better way using OOP features?
```
def factory(domain):
    if ...:
        return RegistrarA(domain)
    else:
        return RegistrarB(domain)
``` | In Python you can change the actual class directly:
```
class Domain(object):
    def __init__(self, domain):
        self.domain = domain
        if ...:
            self.__class__ = RegistrarA
        else:
            self.__class__ = RegistrarB
```
And then the following will work.
```
com = Domain('test.com') #load RegistrarA
com.lookup()
```
I'm using this approach successfully. |
AttributeError: 'module' object has no attribute 'model' | 456,867 | 14 | 2009-01-19T08:19:25Z | 456,886 | 58 | 2009-01-19T08:32:41Z | [
"python",
"django"
] | Can anyone please help me solve this?
```
from django.db import models
# Create your models here.
class Poll(models.model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice = models.CharField(max_length=200)
    votes = models.IntegerField()
```
Running:
```
c:\projects\mysite>python manage.py sql polls
Traceback (most recent call last):
  File "manage.py", line 11, in <module>
    execute_manager(settings)
  File "C:\Python25\Lib\site-packages\django\core\management\__init__.py", line 340, in execute_manager
    utility.execute()
  File "C:\Python25\Lib\site-packages\django\core\management\__init__.py", line 295, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Python25\Lib\site-packages\django\core\management\base.py", line 195, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "C:\Python25\Lib\site-packages\django\core\management\base.py", line 221, in execute
    self.validate()
  File "C:\Python25\Lib\site-packages\django\core\management\base.py", line 249, in validate
    num_errors = get_validation_errors(s, app)
  File "C:\Python25\lib\site-packages\django\core\management\validation.py", line 28, in get_validation_errors
    for (app_name, error) in get_app_errors().items():
  File "C:\Python25\lib\site-packages\django\db\models\loading.py", line 128, in get_app_errors
    self._populate()
  File "C:\Python25\lib\site-packages\django\db\models\loading.py", line 57, in _populate
    self.load_app(app_name, True)
  File "C:\Python25\lib\site-packages\django\db\models\loading.py", line 72, in load_app
    mod = __import__(app_name, {}, {}, ['models'])
  File "c:\projects\mysite\..\mysite\polls\models.py", line 4, in <module>
    class Poll(models.model):
AttributeError: 'module' object has no attribute 'model'
``` | It's called models.Model and not models.model (case sensitive). Fix your Poll model like this -
```
class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')
```
Hope that helps... |
Extending python - to swig, not to swig or Cython | 456,884 | 55 | 2009-01-19T08:32:06Z | 456,949 | 53 | 2009-01-19T09:00:41Z | [
"python",
"c++",
"c",
"swig",
"cython"
] | I found the bottleneck in my python code, played around with psyco etc. Then decided to write a c/c++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Do any of you have experience with whether there is more performance to gain if you hand-write this file or let SWIG do it? | You should consider Boost.Python if you are not planning to generate bindings for other languages as well with SWIG.
If you have a lot of functions and classes to bind, [Py++](http://sourceforge.net/projects/pygccxml/) is a great tool that automatically generates the needed code to make the bindings.
[Pybindgen](http://code.google.com/p/pybindgen/) may also be an option, but it's a new project and less complete than Boost.Python.
---
Edit:
Maybe I need to be more explicit about pro and cons.
* Swig:
pro: you can generate bindings for many scripting languages.
cons: I don't like the way the parser works. I don't know if they have made progress, but two years ago the C++ parser was quite limited. Most of the time I had to copy/paste my .h headers, add some `%` characters, and give extra hints to the swig parser.
I also needed to deal with the Python C-API from time to time for (not so) complicated type conversions.
I'm not using it anymore.
* Boost.Python:
pro:
It's a very complete library. It allows you to do almost everything that is possible with the C-API, but in C++. I never had to write C-API code with this library. I also never encountered a bug caused by the library. Binding code either works like a charm or refuses to compile.
It's probably one of the best solutions currently available if you already have some C++ library to bind. But if you only have a small C function to rewrite, I would probably try with Cython.
cons: if you don't have a pre-compiled Boost.Python library you're going to use Bjam (a sort of make replacement). I really hate Bjam and its syntax.
Python libraries created with B.P tend to become obese. It also takes a **lot** of time to compile them.
* Py++ (discontinued): it's Boost.Python made easy. Py++ uses a C++ parser to read your code and then generates Boost.Python code automatically. You also have a great support from its author (no it's not me ;-) ).
cons: only the problems due to Boost.Python itself. Update: As of 2014 this project now looks discontinued.
* Pybindgen:
It generates the code dealing with the C-API. You can either describe functions and classes in a Python file, or let Pybindgen read your headers and generate bindings automatically (for this it uses pygccxml, a python library written by the author of Py++).
cons: it's a young project, with a smaller team than Boost.Python. There are still some limitations: no multiple inheritance for your C++ classes, no automatic callbacks (custom callback-handling code can be written, though), and no translation of Python exceptions to C.
It's definitely worth a good look.
* A new one:
On 2009/01/20 the author of Py++ announced a [new package](http://mail.python.org/pipermail/cplusplus-sig/2009-January/014198.html) for interfacing C/C++ code with python. It is based on ctypes. I haven't tried it yet, but I will! Note: this project now looks discontinued, like Py++.
* [CFFI](http://cffi.readthedocs.org/): I did not know about this one until very recently, so for now I cannot give an opinion. It looks like you can define C functions in Python strings and call them directly from the same Python module.
* [Cython](http://cython.org/): This is the method I'm currently using in my projects. Basically you write code in special .pyx files. Those files are compiled (translated) into C code which in turn are compiled to CPython modules.
Cython code can look like regular Python (in fact pure Python files are valid .pyx Cython files), but you can also add more information, like variable types. This optional typing allows Cython to generate faster C code. Code in Cython files can call pure Python functions as well as C and C++ functions (and C++ methods).
It took me some time to get used to thinking in Cython, where the same code can call C and C++ functions and mix Python and C variables. But it's a very powerful language, with an active (in 2014) and friendly community.
Extending python - to swig, not to swig or Cython | 456,884 | 55 | 2009-01-19T08:32:06Z | 456,995 | 23 | 2009-01-19T09:20:19Z | [
"python",
"c++",
"c",
"swig",
"cython"
] | I found the bottleneck in my python code, played around with psyco etc. Then decided to write a c/c++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Do any of you have experience with whether there is more performance to gain if you hand-write this file or let SWIG do it? | You will certainly gain some performance doing this by hand, but the gain will be very small compared to the effort required. I don't have any figures to give you, but I don't recommend it, because you would need to maintain the interface by hand, and that is not an option if your module is large!
You did the right thing choosing a scripting language, because you wanted rapid development. This way you've avoided the early-optimization syndrome, and now you want to optimize the bottleneck parts, great! But if you do the C/Python interface by hand, you will certainly fall into the early-optimization syndrome.
If you want something with less interface code, you can think about creating a dll from your C code, and use that library directly from python with [ctypes](http://python.net/crew/theller/ctypes/).
Consider also [Cython](http://www.cython.org/) if you want to use only python code in your program. |
Extending python - to swig, not to swig or Cython | 456,884 | 55 | 2009-01-19T08:32:06Z | 457,099 | 16 | 2009-01-19T10:05:52Z | [
"python",
"c++",
"c",
"swig",
"cython"
] | I found the bottleneck in my python code, played around with psyco etc. Then decided to write a c/c++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Do any of you have experience with whether there is more performance to gain if you hand-write this file or let SWIG do it? | Using [Cython](http://cython.org/) is pretty good. You can write your C extension with a Python-like syntax and have it generate C code, boilerplate included. Since you already have the code in Python, you only need to make a few changes to your bottleneck code, and C code will be generated from it.
Example. `hello.pyx`:
```
cdef int hello(int a, int b):
    return a + b
```
That generates **601 lines** of boilerplate code:
```
/* Generated by Cython 0.10.3 on Mon Jan 19 08:24:44 2009 */
#define PY_SSIZE_T_CLEAN
#include "Python.h"
#include "structmember.h"
#ifndef PY_LONG_LONG
#define PY_LONG_LONG LONG_LONG
#endif
#ifndef DL_EXPORT
#define DL_EXPORT(t) t
#endif
#if PY_VERSION_HEX < 0x02040000
#define METH_COEXIST 0
#endif
#if PY_VERSION_HEX < 0x02050000
typedef int Py_ssize_t;
#define PY_SSIZE_T_MAX INT_MAX
#define PY_SSIZE_T_MIN INT_MIN
#define PyInt_FromSsize_t(z) PyInt_FromLong(z)
#define PyInt_AsSsize_t(o) PyInt_AsLong(o)
#define PyNumber_Index(o) PyNumber_Int(o)
#define PyIndex_Check(o) PyNumber_Check(o)
#endif
#if PY_VERSION_HEX < 0x02060000
#define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt)
#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)
#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size)
#define PyVarObject_HEAD_INIT(type, size) \
PyObject_HEAD_INIT(type) size,
#define PyType_Modified(t)
typedef struct {
void *buf;
PyObject *obj;
Py_ssize_t len;
Py_ssize_t itemsize;
int readonly;
int ndim;
char *format;
Py_ssize_t *shape;
Py_ssize_t *strides;
Py_ssize_t *suboffsets;
void *internal;
} Py_buffer;
#define PyBUF_SIMPLE 0
#define PyBUF_WRITABLE 0x0001
#define PyBUF_LOCK 0x0002
#define PyBUF_FORMAT 0x0004
#define PyBUF_ND 0x0008
#define PyBUF_STRIDES (0x0010 | PyBUF_ND)
#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES)
#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES)
#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES)
#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES)
#endif
#if PY_MAJOR_VERSION < 3
#define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
#else
#define __Pyx_BUILTIN_MODULE_NAME "builtins"
#endif
#if PY_MAJOR_VERSION >= 3
#define Py_TPFLAGS_CHECKTYPES 0
#define Py_TPFLAGS_HAVE_INDEX 0
#endif
#if (PY_VERSION_HEX < 0x02060000) || (PY_MAJOR_VERSION >= 3)
#define Py_TPFLAGS_HAVE_NEWBUFFER 0
#endif
#if PY_MAJOR_VERSION >= 3
#define PyBaseString_Type PyUnicode_Type
#define PyString_Type PyBytes_Type
#define PyInt_Type PyLong_Type
#define PyInt_Check(op) PyLong_Check(op)
#define PyInt_CheckExact(op) PyLong_CheckExact(op)
#define PyInt_FromString PyLong_FromString
#define PyInt_FromUnicode PyLong_FromUnicode
#define PyInt_FromLong PyLong_FromLong
#define PyInt_FromSize_t PyLong_FromSize_t
#define PyInt_FromSsize_t PyLong_FromSsize_t
#define PyInt_AsLong PyLong_AsLong
#define PyInt_AS_LONG PyLong_AS_LONG
#define PyInt_AsSsize_t PyLong_AsSsize_t
#define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
#define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
#define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
#else
#define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
#define PyBytes_Type PyString_Type
#endif
#if PY_MAJOR_VERSION >= 3
#define PyMethod_New(func, self, klass) PyInstanceMethod_New(func)
#endif
#if !defined(WIN32) && !defined(MS_WINDOWS)
#ifndef __stdcall
#define __stdcall
#endif
#ifndef __cdecl
#define __cdecl
#endif
#else
#define _USE_MATH_DEFINES
#endif
#ifdef __cplusplus
#define __PYX_EXTERN_C extern "C"
#else
#define __PYX_EXTERN_C extern
#endif
#include <math.h>
#define __PYX_HAVE_API__helloworld
#ifdef __GNUC__
#define INLINE __inline__
#elif _WIN32
#define INLINE __inline
#else
#define INLINE
#endif
typedef struct
{PyObject **p; char *s; long n;
char is_unicode; char intern; char is_identifier;}
__Pyx_StringTabEntry; /*proto*/
static int __pyx_skip_dispatch = 0;
/* Type Conversion Predeclarations */
#if PY_MAJOR_VERSION < 3
#define __Pyx_PyBytes_FromString PyString_FromString
#define __Pyx_PyBytes_AsString PyString_AsString
#else
#define __Pyx_PyBytes_FromString PyBytes_FromString
#define __Pyx_PyBytes_AsString PyBytes_AsString
#endif
#define __Pyx_PyBool_FromLong(b) ((b) ? (Py_INCREF(Py_True), Py_True) : (Py_INCREF(Py_False), Py_False))
static INLINE int __Pyx_PyObject_IsTrue(PyObject* x);
static INLINE PY_LONG_LONG __pyx_PyInt_AsLongLong(PyObject* x);
static INLINE unsigned PY_LONG_LONG __pyx_PyInt_AsUnsignedLongLong(PyObject* x);
static INLINE Py_ssize_t __pyx_PyIndex_AsSsize_t(PyObject* b);
#define __pyx_PyInt_AsLong(x) (PyInt_CheckExact(x) ? PyInt_AS_LONG(x) : PyInt_AsLong(x))
#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
static INLINE unsigned char __pyx_PyInt_unsigned_char(PyObject* x);
static INLINE unsigned short __pyx_PyInt_unsigned_short(PyObject* x);
static INLINE char __pyx_PyInt_char(PyObject* x);
static INLINE short __pyx_PyInt_short(PyObject* x);
static INLINE int __pyx_PyInt_int(PyObject* x);
static INLINE long __pyx_PyInt_long(PyObject* x);
static INLINE signed char __pyx_PyInt_signed_char(PyObject* x);
static INLINE signed short __pyx_PyInt_signed_short(PyObject* x);
static INLINE signed int __pyx_PyInt_signed_int(PyObject* x);
static INLINE signed long __pyx_PyInt_signed_long(PyObject* x);
static INLINE long double __pyx_PyInt_long_double(PyObject* x);
#ifdef __GNUC__
/* Test for GCC > 2.95 */
#if __GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#else /* __GNUC__ > 2 ... */
#define likely(x) (x)
#define unlikely(x) (x)
#endif /* __GNUC__ > 2 ... */
#else /* __GNUC__ */
#define likely(x) (x)
#define unlikely(x) (x)
#endif /* __GNUC__ */
static PyObject *__pyx_m;
static PyObject *__pyx_b;
static PyObject *__pyx_empty_tuple;
static int __pyx_lineno;
static int __pyx_clineno = 0;
static const char * __pyx_cfilenm= __FILE__;
static const char *__pyx_filename;
static const char **__pyx_f;
static void __Pyx_AddTraceback(const char *funcname); /*proto*/
/* Type declarations */
/* Module declarations from helloworld */
static int __pyx_f_10helloworld_hello(int, int); /*proto*/
/* Implementation of helloworld */
/* "/home/nosklo/devel/ctest/hello.pyx":1
* cdef int hello(int a, int b): # <<<<<<<<<<<<<<
* return a + b
*
*/
static int __pyx_f_10helloworld_hello(int __pyx_v_a, int __pyx_v_b) {
int __pyx_r;
/* "/home/nosklo/devel/ctest/hello.pyx":2
* cdef int hello(int a, int b):
* return a + b # <<<<<<<<<<<<<<
*
*/
__pyx_r = (__pyx_v_a + __pyx_v_b);
goto __pyx_L0;
__pyx_r = 0;
__pyx_L0:;
return __pyx_r;
}
static struct PyMethodDef __pyx_methods[] = {
{0, 0, 0, 0}
};
static void __pyx_init_filenames(void); /*proto*/
#if PY_MAJOR_VERSION >= 3
static struct PyModuleDef __pyx_moduledef = {
PyModuleDef_HEAD_INIT,
"helloworld",
0, /* m_doc */
-1, /* m_size */
__pyx_methods /* m_methods */,
NULL, /* m_reload */
NULL, /* m_traverse */
NULL, /* m_clear */
NULL /* m_free */
};
#endif
static int __Pyx_InitCachedBuiltins(void) {
return 0;
return -1;
}
static int __Pyx_InitGlobals(void) {
return 0;
return -1;
}
#if PY_MAJOR_VERSION < 3
PyMODINIT_FUNC inithelloworld(void); /*proto*/
PyMODINIT_FUNC inithelloworld(void)
#else
PyMODINIT_FUNC PyInit_helloworld(void); /*proto*/
PyMODINIT_FUNC PyInit_helloworld(void)
#endif
{
__pyx_empty_tuple = PyTuple_New(0);
if (unlikely(!__pyx_empty_tuple))
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 1;
__pyx_clineno = __LINE__; goto __pyx_L1_error;}
/*--- Library function declarations ---*/
__pyx_init_filenames();
/*--- Initialize various global constants etc. ---*/
if (unlikely(__Pyx_InitGlobals() < 0))
{__pyx_filename = __pyx_f[0];
__pyx_lineno = 1;
__pyx_clineno = __LINE__;
goto __pyx_L1_error;}
/*--- Module creation code ---*/
#if PY_MAJOR_VERSION < 3
__pyx_m = Py_InitModule4("helloworld", __pyx_methods, 0, 0, PYTHON_API_VERSION);
#else
__pyx_m = PyModule_Create(&__pyx_moduledef);
#endif
if (!__pyx_m)
{__pyx_filename = __pyx_f[0];
__pyx_lineno = 1; __pyx_clineno = __LINE__;
goto __pyx_L1_error;};
#if PY_MAJOR_VERSION < 3
Py_INCREF(__pyx_m);
#endif
__pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME);
if (!__pyx_b)
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 1;
__pyx_clineno = __LINE__; goto __pyx_L1_error;};
if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0)
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 1;
__pyx_clineno = __LINE__; goto __pyx_L1_error;};
/*--- Builtin init code ---*/
if (unlikely(__Pyx_InitCachedBuiltins() < 0))
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 1;
__pyx_clineno = __LINE__; goto __pyx_L1_error;}
__pyx_skip_dispatch = 0;
/*--- Global init code ---*/
/*--- Function export code ---*/
/*--- Type init code ---*/
/*--- Type import code ---*/
/*--- Function import code ---*/
/*--- Execution code ---*/
/* "/home/nosklo/devel/ctest/hello.pyx":1
* cdef int hello(int a, int b): # <<<<<<<<<<<<<<
* return a + b
*
*/
#if PY_MAJOR_VERSION < 3
return;
#else
return __pyx_m;
#endif
__pyx_L1_error:;
__Pyx_AddTraceback("helloworld");
#if PY_MAJOR_VERSION >= 3
return NULL;
#endif
}
static const char *__pyx_filenames[] = {
"hello.pyx",
};
/* Runtime support code */
static void __pyx_init_filenames(void) {
__pyx_f = __pyx_filenames;
}
#include "compile.h"
#include "frameobject.h"
#include "traceback.h"
static void __Pyx_AddTraceback(const char *funcname) {
PyObject *py_srcfile = 0;
PyObject *py_funcname = 0;
PyObject *py_globals = 0;
PyObject *empty_string = 0;
PyCodeObject *py_code = 0;
PyFrameObject *py_frame = 0;
#if PY_MAJOR_VERSION < 3
py_srcfile = PyString_FromString(__pyx_filename);
#else
py_srcfile = PyUnicode_FromString(__pyx_filename);
#endif
if (!py_srcfile) goto bad;
if (__pyx_clineno) {
#if PY_MAJOR_VERSION < 3
py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname,
__pyx_cfilenm, __pyx_clineno);
#else
py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname,
__pyx_cfilenm, __pyx_clineno);
#endif
}
else {
#if PY_MAJOR_VERSION < 3
py_funcname = PyString_FromString(funcname);
#else
py_funcname = PyUnicode_FromString(funcname);
#endif
}
if (!py_funcname) goto bad;
py_globals = PyModule_GetDict(__pyx_m);
if (!py_globals) goto bad;
#if PY_MAJOR_VERSION < 3
empty_string = PyString_FromStringAndSize("", 0);
#else
empty_string = PyBytes_FromStringAndSize("", 0);
#endif
if (!empty_string) goto bad;
py_code = PyCode_New(
0, /*int argcount,*/
#if PY_MAJOR_VERSION >= 3
0, /*int kwonlyargcount,*/
#endif
0, /*int nlocals,*/
0, /*int stacksize,*/
0, /*int flags,*/
empty_string, /*PyObject *code,*/
__pyx_empty_tuple, /*PyObject *consts,*/
__pyx_empty_tuple, /*PyObject *names,*/
__pyx_empty_tuple, /*PyObject *varnames,*/
__pyx_empty_tuple, /*PyObject *freevars,*/
__pyx_empty_tuple, /*PyObject *cellvars,*/
py_srcfile, /*PyObject *filename,*/
py_funcname, /*PyObject *name,*/
__pyx_lineno, /*int firstlineno,*/
empty_string /*PyObject *lnotab*/
);
if (!py_code) goto bad;
py_frame = PyFrame_New(
PyThreadState_GET(), /*PyThreadState *tstate,*/
py_code, /*PyCodeObject *code,*/
py_globals, /*PyObject *globals,*/
0 /*PyObject *locals*/
);
if (!py_frame) goto bad;
py_frame->f_lineno = __pyx_lineno;
PyTraceBack_Here(py_frame);
bad:
Py_XDECREF(py_srcfile);
Py_XDECREF(py_funcname);
Py_XDECREF(empty_string);
Py_XDECREF(py_code);
Py_XDECREF(py_frame);
}
/* Type Conversion Functions */
static INLINE Py_ssize_t __pyx_PyIndex_AsSsize_t(PyObject* b) {
Py_ssize_t ival;
PyObject* x = PyNumber_Index(b);
if (!x) return -1;
ival = PyInt_AsSsize_t(x);
Py_DECREF(x);
return ival;
}
static INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {
if (x == Py_True) return 1;
else if (x == Py_False) return 0;
else return PyObject_IsTrue(x);
}
static INLINE PY_LONG_LONG __pyx_PyInt_AsLongLong(PyObject* x) {
if (PyInt_CheckExact(x)) {
return PyInt_AS_LONG(x);
}
else if (PyLong_CheckExact(x)) {
return PyLong_AsLongLong(x);
}
else {
PY_LONG_LONG val;
PyObject* tmp = PyNumber_Int(x); if (!tmp) return (PY_LONG_LONG)-1;
val = __pyx_PyInt_AsLongLong(tmp);
Py_DECREF(tmp);
return val;
}
}
static INLINE unsigned PY_LONG_LONG __pyx_PyInt_AsUnsignedLongLong(PyObject* x) {
if (PyInt_CheckExact(x)) {
long val = PyInt_AS_LONG(x);
if (unlikely(val < 0)) {
PyErr_SetString(PyExc_TypeError, "Negative assignment to unsigned type.");
return (unsigned PY_LONG_LONG)-1;
}
return val;
}
else if (PyLong_CheckExact(x)) {
return PyLong_AsUnsignedLongLong(x);
}
else {
PY_LONG_LONG val;
PyObject* tmp = PyNumber_Int(x); if (!tmp) return (PY_LONG_LONG)-1;
val = __pyx_PyInt_AsUnsignedLongLong(tmp);
Py_DECREF(tmp);
return val;
}
}
static INLINE unsigned char __pyx_PyInt_unsigned_char(PyObject* x) {
if (sizeof(unsigned char) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
unsigned char val = (unsigned char)long_val;
if (unlikely((val != long_val) || (long_val < 0))) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to unsigned char");
return (unsigned char)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE unsigned short __pyx_PyInt_unsigned_short(PyObject* x) {
if (sizeof(unsigned short) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
unsigned short val = (unsigned short)long_val;
if (unlikely((val != long_val) || (long_val < 0))) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to unsigned short");
return (unsigned short)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE char __pyx_PyInt_char(PyObject* x) {
if (sizeof(char) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
char val = (char)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to char");
return (char)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE short __pyx_PyInt_short(PyObject* x) {
if (sizeof(short) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
short val = (short)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to short");
return (short)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE int __pyx_PyInt_int(PyObject* x) {
if (sizeof(int) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
int val = (int)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to int");
return (int)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE long __pyx_PyInt_long(PyObject* x) {
if (sizeof(long) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
long val = (long)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to long");
return (long)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE signed char __pyx_PyInt_signed_char(PyObject* x) {
if (sizeof(signed char) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
signed char val = (signed char)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to signed char");
return (signed char)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE signed short __pyx_PyInt_signed_short(PyObject* x) {
if (sizeof(signed short) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
signed short val = (signed short)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to signed short");
return (signed short)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE signed int __pyx_PyInt_signed_int(PyObject* x) {
if (sizeof(signed int) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
signed int val = (signed int)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to signed int");
return (signed int)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE signed long __pyx_PyInt_signed_long(PyObject* x) {
if (sizeof(signed long) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
signed long val = (signed long)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to signed long");
return (signed long)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
static INLINE long double __pyx_PyInt_long_double(PyObject* x) {
if (sizeof(long double) < sizeof(long)) {
long long_val = __pyx_PyInt_AsLong(x);
long double val = (long double)long_val;
if (unlikely((val != long_val) )) {
PyErr_SetString(PyExc_OverflowError, "value too large to convert to long double");
return (long double)-1;
}
return val;
}
else {
return __pyx_PyInt_AsLong(x);
}
}
``` |
Extending python - to swig, not to swig or Cython | 456,884 | 55 | 2009-01-19T08:32:06Z | 3,167,276 | 7 | 2010-07-02T15:58:44Z | [
"python",
"c++",
"c",
"swig",
"cython"
] | I found the bottleneck in my python code, played around with psycho etc. Then decided to write a c/c++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Does anyone have experience with whether there is more performance to be gained by hand-writing this file rather than letting swig generate it? | An observation: Based on the benchmarking conducted by the pybindgen developers, there is no significant difference between boost.python and swig. I haven't done my own benchmarking to verify how much of this depends on the proper use of the boost.python functionality.
Note also that there may be a reason that pybindgen seems to be in general quite a bit faster than swig and boost.python: it *may* not produce as versatile a binding as the other two. For instance, exception propagation, call argument type checking, etc. I haven't had a chance to use pybindgen yet but I intend to.
Boost is in general quite big package to install, and last I saw you can't just install boost python you pretty much need the whole Boost library. As others have mentioned compilation will be slow due to heavy use of template programming, which also means typically rather cryptic error messages at compile time.
Summary: given how easy SWIG is to install and use, that it generates decent binding that is robust and versatile, and that one interface file allows your C++ DLL to be available from several other languages like LUA, C#, and Java, I would favor it over boost.python. But unless you really need multi-language support I would take a close look at PyBindGen because of its purported speed, and pay close attention to robustness and versatility of binding it generates. |
Extending python - to swig, not to swig or Cython | 456,884 | 55 | 2009-01-19T08:32:06Z | 6,580,735 | 26 | 2011-07-05T09:47:58Z | [
"python",
"c++",
"c",
"swig",
"cython"
] | I found the bottleneck in my python code, played around with psycho etc. Then decided to write a c/c++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Does anyone have experience with whether there is more performance to be gained by hand-writing this file rather than letting swig generate it? | SWIG 2.0.4 has introduced a new -builtin option that improves performance.
I did some benchmarking using an example program that does a lot of fast calls to a C++ extension.
I built the extension using boost.python, PyBindGen, SIP and SWIG with and without the -builtin option. Here are the results (average of 100 runs):
```
SWIG with -builtin 2.67s
SIP 2.70s
PyBindGen 2.74s
boost.python 3.07s
SWIG without -builtin 4.65s
```
SWIG used to be slowest. With the new -builtin option, SWIG seems to be fastest. |
Cropping pages of a .pdf file | 457,207 | 9 | 2009-01-19T10:43:23Z | 465,901 | 14 | 2009-01-21T16:12:44Z | [
"python",
"pdf"
] | I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for Python, but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object, the results were not what I expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python. | pypdf does what I expect in this area. Using the following script:
```
#!/usr/bin/python
#
from pyPdf import PdfFileWriter, PdfFileReader
input1 = PdfFileReader(file("in.pdf", "rb"))
output = PdfFileWriter()
numPages = input1.getNumPages()
print "document has %s pages." % numPages
for i in range(numPages):
page = input1.getPage(i)
print page.mediaBox.getUpperRight_x(), page.mediaBox.getUpperRight_y()
page.trimBox.lowerLeft = (25, 25)
page.trimBox.upperRight = (225, 225)
page.cropBox.lowerLeft = (50, 50)
page.cropBox.upperRight = (200, 200)
output.addPage(page)
outputStream = file("out.pdf", "wb")
output.write(outputStream)
outputStream.close()
```
The resulting document has a trim box that is 200x200 points and starts at 25,25 points inside the media box.
The crop box is 25 points inside the trim box.
Here is how my sample document looks in acrobat professional after processing with the above code:

This document will appear blank when loaded in acrobat reader. |
What is the best real time plotting widget for wxPython? | 457,246 | 12 | 2009-01-19T11:00:38Z | 457,524 | 10 | 2009-01-19T13:04:57Z | [
"python",
"wxpython",
"wxwidgets"
] | I would like to show a real time graph with one or two curves and up to 50 samples per second using Python and wxPython.
The widget should support both Win32 and Linux platforms.
Any hints are welcome.
Edited to add:
I don't need to update the display at 50 fps, but I need to show up to 50 samples of data on both curves, with a reasonable update rate for the display (5..10 fps should be okay).
Edited to add:
I have used matplotlib in a project with good success.
I have since settled on wx.lib.plot for other projects, which I found to be simpler and somewhat easier to use, and it consumes fewer CPU cycles. As wx.lib comes as part of the standard wxPython distribution, it is particularly easy to use. | Here's a sample of a dynamic plotter with wxPython and matplotlib. While not 50 FPS, it draws smoothly and quickly enough for most real-time data views:
<http://eli.thegreenplace.net/2008/08/01/matplotlib-with-wxpython-guis/>
Here's just the code paste:
<http://paste.pocoo.org/show/100358/> |
What is the best real time plotting widget for wxPython? | 457,246 | 12 | 2009-01-19T11:00:38Z | 7,605,072 | 8 | 2011-09-30T01:24:00Z | [
"python",
"wxpython",
"wxwidgets"
] | I would like to show a real time graph with one or two curves and up to 50 samples per second using Python and wxPython.
The widget should support both Win32 and Linux platforms.
Any hints are welcome.
Edited to add:
I don't need to update the display at 50 fps, but I need to show up to 50 samples of data on both curves, with a reasonable update rate for the display (5..10 fps should be okay).
Edited to add:
I have used matplotlib in a project with good success.
I have since settled on wx.lib.plot for other projects, which I found to be simpler and somewhat easier to use, and it consumes fewer CPU cycles. As wx.lib comes as part of the standard wxPython distribution, it is particularly easy to use. | If you want high performance with a minimal code footprint, look no further than Python's built-in GUI library tkinter. No need to write special C / C++ code or use a large plotting package to get performance much better than 50 fps.

The following code scrolls a 1000x200 strip chart at 400 fps on a 2.2 GHz Core 2 duo, 1000 fps on a 3.4 GHz Core i3. The central routine "scrollstrip" plots a set of data points and corresponding colors at the right edge along with an optional vertical grid bar, then scrolls the stripchart to the left by 1. To plot horizontal grid bars just include them in the data and color arrays as constants along with your variable data points.
```
from tkinter import *
import math, random, threading, time
class StripChart:
def __init__(self, root):
self.gf = self.makeGraph(root)
self.cf = self.makeControls(root)
self.gf.pack()
self.cf.pack()
self.Reset()
def makeGraph(self, frame):
self.sw = 1000
self.h = 200
self.top = 2
gf = Canvas(frame, width=self.sw, height=self.h+10,
bg="#002", bd=0, highlightthickness=0)
gf.p = PhotoImage(width=2*self.sw, height=self.h)
self.item = gf.create_image(0, self.top, image=gf.p, anchor=NW)
return(gf)
def makeControls(self, frame):
cf = Frame(frame, borderwidth=1, relief="raised")
Button(cf, text="Run", command=self.Run).grid(column=2, row=2)
Button(cf, text="Stop", command=self.Stop).grid(column=4, row=2)
Button(cf, text="Reset", command=self.Reset).grid(column=6, row=2)
self.fps = Label(cf, text="0 fps")
self.fps.grid(column=2, row=4, columnspan=5)
return(cf)
def Run(self):
self.go = 1
for t in threading.enumerate():
if t.name == "_gen_":
print("already running")
return
threading.Thread(target=self.do_start, name="_gen_").start()
def Stop(self):
self.go = 0
for t in threading.enumerate():
if t.name == "_gen_":
t.join()
def Reset(self):
self.Stop()
self.clearstrip(self.gf.p, '#345')
def do_start(self):
t = 0
y2 = 0
tx = time.time()
while self.go:
y1 = 0.2*math.sin(0.02*math.pi*t)
y2 = 0.9*y2 + 0.1*(random.random()-0.5)
self.scrollstrip(self.gf.p,
(0.25+y1, 0.25, 0.7+y2, 0.6, 0.7, 0.8),
( '#ff4', '#f40', '#4af', '#080', '#0f0', '#080'),
"" if t % 65 else "#088")
t += 1
if not t % 100:
tx2 = time.time()
self.fps.config(text='%d fps' % int(100/(tx2 - tx)))
tx = tx2
# time.sleep(0.001)
def clearstrip(self, p, color): # Fill strip with background color
self.bg = color # save background color for scroll
self.data = None # clear previous data
self.x = 0
p.tk.call(p, 'put', color, '-to', 0, 0, p['width'], p['height'])
def scrollstrip(self, p, data, colors, bar=""): # Scroll the strip, add new data
self.x = (self.x + 1) % self.sw # x = double buffer position
bg = bar if bar else self.bg
p.tk.call(p, 'put', bg, '-to', self.x, 0,
self.x+1, self.h)
p.tk.call(p, 'put', bg, '-to', self.x+self.sw, 0,
self.x+self.sw+1, self.h)
self.gf.coords(self.item, -1-self.x, self.top) # scroll to just-written column
if not self.data:
self.data = data
for d in range(len(data)):
y0 = int((self.h-1) * (1.0-self.data[d])) # plot all the data points
y1 = int((self.h-1) * (1.0-data[d]))
ya, yb = sorted((y0, y1))
for y in range(ya, yb+1): # connect the dots
p.put(colors[d], (self.x,y))
p.put(colors[d], (self.x+self.sw,y))
self.data = data # save for next call
def main():
root = Tk()
root.title("StripChart")
app = StripChart(root)
root.mainloop()
main()
``` |
Is there an easily available implementation of erf() for Python? | 457,408 | 36 | 2009-01-19T12:10:58Z | 457,475 | 20 | 2009-01-19T12:47:09Z | [
"python",
"math"
] | I can implement the error function, erf, myself, but I'd prefer not to. Is there a python package with no external dependencies that contains an implementation of this function? I have found [this](http://pylab.sourceforge.net/packages/included_functions.html), but it seems to be part of some much larger package (and it's not even clear which one!).
I'm sorry if this is a naive question - I'm totally new to Python. | I would recommend you download [numpy](http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103) (to have efficient matrices in Python) and [scipy](http://www.scipy.org/) (a Matlab toolbox substitute, which uses numpy). The erf function lives in scipy.
```
>>>from scipy.special import erf
>>>help(erf)
```
You can also use the erf function defined in pylab, but this is more intended for plotting the results of things you compute with numpy and scipy. If you want an all-in-one
installation of these packages, you can directly use the [Enthought Python Distribution](http://www.enthought.com/products/epd.php).
Is there an easily available implementation of erf() for Python? | 457,408 | 36 | 2009-01-19T12:10:58Z | 457,805 | 39 | 2009-01-19T14:46:13Z | [
"python",
"math"
] | I can implement the error function, erf, myself, but I'd prefer not to. Is there a python package with no external dependencies that contains an implementation of this function? I have found [this](http://pylab.sourceforge.net/packages/included_functions.html), but it seems to be part of some much larger package (and it's not even clear which one!).
I'm sorry if this is a naive question - I'm totally new to Python. | I recommend SciPy for numerical functions in Python, but if you want something with no dependencies, here is a function with an error less than 1.5 \* 10^-7 for all inputs.
```
def erf(x):
# save the sign of x
sign = 1 if x >= 0 else -1
x = abs(x)
# constants
a1 = 0.254829592
a2 = -0.284496736
a3 = 1.421413741
a4 = -1.453152027
a5 = 1.061405429
p = 0.3275911
# A&S formula 7.1.26
t = 1.0/(1.0 + p*x)
y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*math.exp(-x*x)
return sign*y # erf(-x) = -erf(x)
```
The algorithm comes from [Handbook of Mathematical Functions](http://rads.stackoverflow.com/amzn/click/0486612724), formula 7.1.26. |
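If a newer interpreter is available, this approximation can be sanity-checked against the standard library's `math.erf` (added in Python 2.7). A quick sketch, restating the function above under the local name `erf_approx`:

```python
import math

def erf_approx(x):
    # Abramowitz & Stegun formula 7.1.26, same coefficients as above
    sign = 1 if x >= 0 else -1
    x = abs(x)
    a1, a2, a3, a4, a5 = (0.254829592, -0.284496736, 1.421413741,
                          -1.453152027, 1.061405429)
    p = 0.3275911
    t = 1.0 / (1.0 + p * x)
    y = 1.0 - (((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t) * math.exp(-x * x)
    return sign * y

# Compare against math.erf on a grid of points in [-5, 5]
worst = max(abs(erf_approx(k / 10.0) - math.erf(k / 10.0)) for k in range(-50, 51))
print(worst)  # should be on the order of 1e-7, consistent with the quoted bound
```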
Is there an easily available implementation of erf() for Python? | 457,408 | 36 | 2009-01-19T12:10:58Z | 463,261 | 7 | 2009-01-20T21:44:09Z | [
"python",
"math"
] | I can implement the error function, erf, myself, but I'd prefer not to. Is there a python package with no external dependencies that contains an implementation of this function? I have found [this](http://pylab.sourceforge.net/packages/included_functions.html), but it seems to be part of some much larger package (and it's not even clear which one!).
I'm sorry if this is a naive question - I'm totally new to Python. | A pure Python implementation can be found in the mpmath module (<http://code.google.com/p/mpmath/>).
From the doc string:
```
>>> from mpmath import *
>>> mp.dps = 15
>>> print erf(0)
0.0
>>> print erf(1)
0.842700792949715
>>> print erf(-1)
-0.842700792949715
>>> print erf(inf)
1.0
>>> print erf(-inf)
-1.0
```
For large real `x`, `\mathrm{erf}(x)` approaches 1 very
rapidly::
```
>>> print erf(3)
0.999977909503001
>>> print erf(5)
0.999999999998463
```
The error function is an odd function::
```
>>> nprint(chop(taylor(erf, 0, 5)))
[0.0, 1.12838, 0.0, -0.376126, 0.0, 0.112838]
```
:func:`erf` implements arbitrary-precision evaluation and
supports complex numbers::
```
>>> mp.dps = 50
>>> print erf(0.5)
0.52049987781304653768274665389196452873645157575796
>>> mp.dps = 25
>>> print erf(1+j)
(1.316151281697947644880271 + 0.1904534692378346862841089j)
```
**Related functions**
See also :func:`erfc`, which is more accurate for large `x`,
and :func:`erfi` which gives the antiderivative of
`\exp(t^2)`.
The Fresnel integrals :func:`fresnels` and :func:`fresnelc`
are also related to the error function. |
Is there an easily available implementation of erf() for Python? | 457,408 | 36 | 2009-01-19T12:10:58Z | 6,662,057 | 44 | 2011-07-12T09:31:08Z | [
"python",
"math"
] | I can implement the error function, erf, myself, but I'd prefer not to. Is there a python package with no external dependencies that contains an implementation of this function? I have found [this](http://pylab.sourceforge.net/packages/included_functions.html), but it seems to be part of some much larger package (and it's not even clear which one!).
I'm sorry if this is a naive question - I'm totally new to Python. | Since v2.7, the standard *math* module contains an *erf* function. This should be the easiest way.
<http://docs.python.org/2/library/math.html#math.erf> |
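For completeness, a tiny usage sketch of the standard-library version:

```python
import math

# Available in the standard library since Python 2.7 (and 3.2 on the 3.x line):
print(math.erf(1.0))   # roughly 0.8427, matching the approximations above
print(math.erfc(1.0))  # complementary error function, roughly 1 - erf(1)
```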
Is there a way to detach matplotlib plots so that the computation can continue? | 458,209 | 122 | 2009-01-19T16:30:57Z | 458,246 | 19 | 2009-01-19T16:40:24Z | [
"python",
"matplotlib",
"plot"
] | After these instructions in the Python interpreter one gets a window with a plot:
```
from matplotlib.pyplot import *
plot([1,2,3])
show()
# other code
```
Unfortunately, I don't know how to continue to interactively explore the figure created by `show()` while the program does further calculations.
Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results. | It is better to always check with the library you are using if it supports usage in a **non-blocking** way.
But if you want a more generic solution, or if there is no other way, you can run anything that blocks in a separate process by using the [`multiprocessing`](http://docs.python.org/library/multiprocessing.html) module included in Python. Computation will continue:
```
from multiprocessing import Process
from matplotlib.pyplot import plot, show
def plot_graph(*args):
for data in args:
plot(data)
show()
p = Process(target=plot_graph, args=([1, 2, 3],))
p.start()
print 'yay'
print 'computation continues...'
print 'that rocks.'
print 'Now lets wait for the graph be closed to continue...:'
p.join()
```
That has the overhead of launching a new process, and is sometimes harder to debug on complex scenarios, so I'd prefer the other solution (using `matplotlib`'s [nonblocking API calls](http://stackoverflow.com/questions/458209/is-there-a-way-to-detach-matplotlib-plots-so-that-the-computation-can-continue#458295)) |
Is there a way to detach matplotlib plots so that the computation can continue? | 458,209 | 122 | 2009-01-19T16:30:57Z | 458,295 | 112 | 2009-01-19T16:52:17Z | [
"python",
"matplotlib",
"plot"
] | After these instructions in the Python interpreter one gets a window with a plot:
```
from matplotlib.pyplot import *
plot([1,2,3])
show()
# other code
```
Unfortunately, I don't know how to continue to interactively explore the figure created by `show()` while the program does further calculations.
Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results. | Use `matplotlib`'s calls that won't block:
Using `draw()`:
```
from matplotlib.pyplot import plot, draw, show
plot([1,2,3])
draw()
print 'continue computation'
# at the end call show to ensure window won't close.
show()
```
Using interactive mode:
```
from matplotlib.pyplot import plot, ion, show
ion() # enables interactive mode
plot([1,2,3]) # result shows immediately (implicit draw())
print 'continue computation'
# at the end call show to ensure window won't close.
show()
``` |
Is there a way to detach matplotlib plots so that the computation can continue? | 458,209 | 122 | 2009-01-19T16:30:57Z | 458,321 | 8 | 2009-01-19T17:00:04Z | [
"python",
"matplotlib",
"plot"
] | After these instructions in the Python interpreter one gets a window with a plot:
```
from matplotlib.pyplot import *
plot([1,2,3])
show()
# other code
```
Unfortunately, I don't know how to continue to interactively explore the figure created by `show()` while the program does further calculations.
Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results. | You may want to read this document in `matplotlib`'s documentation, titled:
[Using matplotlib in a python shell](http://matplotlib.sourceforge.net/users/shell.html#using-matplotlib-in-a-python-shell) |
Is there a way to detach matplotlib plots so that the computation can continue? | 458,209 | 122 | 2009-01-19T16:30:57Z | 13,361,748 | 67 | 2012-11-13T13:40:54Z | [
"python",
"matplotlib",
"plot"
] | After these instructions in the Python interpreter one gets a window with a plot:
```
from matplotlib.pyplot import *
plot([1,2,3])
show()
# other code
```
Unfortunately, I don't know how to continue to interactively explore the figure created by `show()` while the program does further calculations.
Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results. | Use the keyword 'block' to override the blocking behavior, e.g.
```
from matplotlib.pyplot import show, plot
plot(1)
show(block=False)
# your code
```
to continue your code. |
Is there a way to detach matplotlib plots so that the computation can continue? | 458,209 | 122 | 2009-01-19T16:30:57Z | 14,398,396 | 13 | 2013-01-18T11:53:33Z | [
"python",
"matplotlib",
"plot"
] | After these instructions in the Python interpreter one gets a window with a plot:
```
from matplotlib.pyplot import *
plot([1,2,3])
show()
# other code
```
Unfortunately, I don't know how to continue to interactively explore the figure created by `show()` while the program does further calculations.
Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results. | Try
```
from matplotlib.pyplot import *
plot([1,2,3])
show(block=False)
# other code
# [...]
# Put
show()
# at the very end of your script
# to make sure Python doesn't bail out
# before you finished examining.
```
The [`show()` documentation](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.show) says:
> In non-interactive mode, display all figures and block until the figures have been closed; in interactive mode it has no effect unless figures were created prior to a change from non-interactive to interactive mode (not recommended). In that case it displays the figures but does not block.
>
> A single experimental keyword argument, `block`, may be set to `True` or `False` to override the blocking behavior described above. |
How do I install an .egg file without easy_install in Windows? | 458,311 | 14 | 2009-01-19T16:58:10Z | 458,339 | 12 | 2009-01-19T17:05:36Z | [
"python",
"easy-install",
"egg"
] | I have Python 2.6 and I want to install the easy_install module. The problem is that the only available installation package of easy_install for Python 2.6 is an .egg file! What should I do? | You could try [this script](http://peak.telecommunity.com/dist/ez_setup.py).
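As background for why this works at all: an .egg is just a zip archive, and Python's zipimport machinery can import straight from it once the archive is on `sys.path` (which is essentially what easy_install arranges). A minimal sketch, with the egg and module names invented for illustration:

```python
# Build a tiny fake egg in a temp dir and import a module straight out of it.
import os
import sys
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
egg_path = os.path.join(tmp, "demo-0.1-py2.6.egg")
with zipfile.ZipFile(egg_path, "w") as z:
    z.writestr("demo_module.py", "ANSWER = 42\n")

sys.path.insert(0, egg_path)  # roughly what easy_install's .pth entries do
import demo_module
print(demo_module.ANSWER)  # 42
```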
```
#!python
"""Bootstrap setuptools installation
If you want to use setuptools in your package's setup.py, just include this
file in the same directory with it, and add this to the top of your setup.py::
  from ez_setup import use_setuptools
  use_setuptools()
If you want to require a specific version of setuptools, set a download
mirror, or use an alternate download directory, you can do so by supplying
the appropriate options to ``use_setuptools()``.
This file can also be run as a script to install or upgrade setuptools.
"""
import sys
DEFAULT_VERSION = "0.6c11"
DEFAULT_URL Â Â = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
  'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
  'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
  'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
  'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
  'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
  'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
  'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
  'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
  'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
  'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
  'setuptools-0.6c10-py2.3.egg': 'ce1e2ab5d3a0256456d9fc13800a7090',
  'setuptools-0.6c10-py2.4.egg': '57d6d9d6e9b80772c59a53a8433a5dd4',
  'setuptools-0.6c10-py2.5.egg': 'de46ac8b1c97c895572e5e8596aeb8c7',
  'setuptools-0.6c10-py2.6.egg': '58ea40aef06da02ce641495523a0b7f5',
  'setuptools-0.6c11-py2.3.egg': '2baeac6e13d414a9d28e7ba5b5a596de',
  'setuptools-0.6c11-py2.4.egg': 'bd639f9b0eac4c42497034dec2ec0c2b',
  'setuptools-0.6c11-py2.5.egg': '64c94f3bf7a72a13ec83e0b24f2749b2',
  'setuptools-0.6c11-py2.6.egg': 'bfa92100bd772d5a213eedd356d64086',
  'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
  'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
  'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
  'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
  'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
  'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
  'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
  'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
  'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
  'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
  'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
  'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
  'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
  'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
  'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
  'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
  'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
  'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
  'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
  'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
  'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',
  'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',
  'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6',
  'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}
import sys, os
try: from hashlib import md5
except ImportError: from md5 import md5
def _validate_md5(egg_name, data):
  if egg_name in md5_data:
    digest = md5(data).hexdigest()
    if digest != md5_data[egg_name]:
      print >>sys.stderr, (
        "md5 validation of %s failed!  (Possible download problem?)"
        % egg_name
      )
      sys.exit(2)
  return data
def use_setuptools(
  version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
  download_delay=15
):
  """Automatically find/download setuptools and make it available on sys.path
  `version` should be a valid setuptools version number that is available
  as an egg for download under the `download_base` URL (which should end with
  a '/').  `to_dir` is the directory where setuptools will be downloaded, if
  it is not already available.  If `download_delay` is specified, it should
  be the number of seconds that will be paused before initiating a download,
  should one be required.  If an older version of setuptools is installed,
  this routine will print a message to ``sys.stderr`` and raise SystemExit in
  an attempt to abort the calling script.
  """
  was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
  def do_download():
    egg = download_setuptools(version, download_base, to_dir, download_delay)
    sys.path.insert(0, egg)
    import setuptools; setuptools.bootstrap_install_from = egg
  try:
    import pkg_resources
  except ImportError:
    return do_download()    Â
  try:
    pkg_resources.require("setuptools>="+version); return
  except pkg_resources.VersionConflict, e:
    if was_imported:
      print >>sys.stderr, (
      "The required version of setuptools (>=%s) is not available, and\n"
      "can't be installed while this script is running. Please install\n"
      " a more recent version first, using 'easy_install -U setuptools'."
      "\n\n(Currently using %r)"
      ) % (version, e.args[0])
      sys.exit(2)
  except pkg_resources.DistributionNotFound:
    pass
  del pkg_resources, sys.modules['pkg_resources']   # reload ok
  return do_download()
def download_setuptools(
  version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
  delay = 15
):
  """Download setuptools from a specified location and return its filename
  `version` should be a valid setuptools version number that is available
  as an egg for download under the `download_base` URL (which should end
  with a '/'). `to_dir` is the directory where the egg will be downloaded.
  `delay` is the number of seconds to pause before an actual download attempt.
  """
  import urllib2, shutil
  egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
  url = download_base + egg_name
  saveto = os.path.join(to_dir, egg_name)
  src = dst = None
  if not os.path.exists(saveto):  # Avoid repeated downloads
    try:
      from distutils import log
      if delay:
        log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
  %s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
          version, download_base, delay, url
        ); from time import sleep; sleep(delay)
      log.warn("Downloading %s", url)
      src = urllib2.urlopen(url)
      # Read/write all in one block, so we don't create a corrupt file
      # if the download is interrupted.
      data = _validate_md5(egg_name, src.read())
      dst = open(saveto,"wb"); dst.write(data)
    finally:
      if src: src.close()
      if dst: dst.close()
  return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
  """Install or upgrade setuptools and EasyInstall"""
  try:
    import setuptools
  except ImportError:
    egg = None
    try:
      egg = download_setuptools(version, delay=0)
      sys.path.insert(0,egg)
      from setuptools.command.easy_install import main
      return main(list(argv)+[egg])  # we're done here
    finally:
      if egg and os.path.exists(egg):
        os.unlink(egg)
  else:
    if setuptools.__version__ == '0.0.1':
      print >>sys.stderr, (
      "You have an obsolete version of setuptools installed.  Please\n"
      "remove it from your system entirely before rerunning this script."
      )
      sys.exit(2)
  req = "setuptools>="+version
  import pkg_resources
  try:
    pkg_resources.require(req)
  except pkg_resources.VersionConflict:
    try:
      from setuptools.command.easy_install import main
    except ImportError:
      from easy_install import main
    main(list(argv)+[download_setuptools(delay=0)])
    sys.exit(0) # try to force an exit
  else:
    if argv:
      from setuptools.command.easy_install import main
      main(argv)
    else:
      print "Setuptools version",version,"or greater has been installed."
      print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
  """Update our built-in md5 registry"""
  import re
  for name in filenames:
    base = os.path.basename(name)
    f = open(name,'rb')
    md5_data[base] = md5(f.read()).hexdigest()
    f.close()
  data = ["   %r: %r,\n" % it for it in md5_data.items()]
  data.sort()
  repl = "".join(data)
  import inspect
  srcfile = inspect.getsourcefile(sys.modules[__name__])
  f = open(srcfile, 'rb'); src = f.read(); f.close()
  match = re.search("\nmd5_data = {\n([^}]+)}", src)
  if not match:
    print >>sys.stderr, "Internal error!"
    sys.exit(2)
  src = src[:match.start(1)] + repl + src[match.end(1):]
  f = open(srcfile,'w')
  f.write(src)
  f.close()
if __name__=='__main__':
  if len(sys.argv)>2 and sys.argv[1]=='--md5update':
    update_md5(sys.argv[2:])
  else:
    main(sys.argv[1:])
``` |
Adding folders to a zip file using python | 458,436 | 31 | 2009-01-19T17:33:17Z | 459,242 | 10 | 2009-01-19T21:34:13Z | [
"python",
"file",
"zip",
"folder",
"zipfile"
] | I want to create a zip file. Add a folder to the zip file and then add a bunch of files to that folder.
So I want to end up with a zip file with a single folder with files in.
I don't know if it's bad practice to have folders in zip files or something, but Google gives me nothing on the subject.
I started out with this:
```
def addFolderToZip(myZipFile,folder):
folder = folder.encode('ascii') #convert path to ascii for ZipFile Method
for file in glob.glob(folder+"/*"):
if os.path.isfile(file):
print file
myZipFile.write(file, os.path.basename(file), zipfile.ZIP_DEFLATED)
elif os.path.isdir(file):
addFolderToZip(myZipFile,file)
def createZipFile(filename,files,folders):
curTime=strftime("__%Y_%m_%d", time.localtime())
filename=filename+curTime;
print filename
zipFilename=utils.getFileName("files", filename+".zip")
myZipFile = zipfile.ZipFile( zipFilename, "w" ) # Open the zip file for writing
for file in files:
file = file.encode('ascii') #convert path to ascii for ZipFile Method
if os.path.isfile(file):
(filepath, filename) = os.path.split(file)
myZipFile.write( file, filename, zipfile.ZIP_DEFLATED )
for folder in folders:
addFolderToZip(myZipFile,folder)
myZipFile.close()
return (1,zipFilename)
(success,filename)=createZipFile(planName,files,folders);
```
Taken from: <http://mail.python.org/pipermail/python-list/2006-August/396166.html>
Which gets rid of all folders and puts all files in the target folder (and its subfolders) into a single zip file. I couldn't get it to add an entire folder.
If I feed the path to a folder in myZipFile.write, I get
> IOError: [Errno 13] Permission denied: '..\packed\bin'
Any help is much appreciated.
Related question: [How do I zip the contents of a folder using python (version 2.5)?](http://stackoverflow.com/questions/296499/how-do-i-zip-the-contents-of-a-folder-using-python-version-2-5) | A zip file has no directory structure, it just has a bunch of pathnames and their contents. These pathnames should be relative to an imaginary root folder (the ZIP file itself). "../" prefixes have no defined meaning in a zip file.
Consider you have a file, `a` and you want to store it in a "folder" inside a zip file. All you have to do is prefix the filename with a folder name when storing the file in the zipfile:
```
import os, time, zipfile  # imports this snippet relies on

zipi = zipfile.ZipInfo()
zipi.filename = "folder/a"  # this is what you want
zipi.date_time = time.localtime(os.path.getmtime("a"))[:6]
zipi.compress_type = zipfile.ZIP_DEFLATED
filedata = open("a", "rb").read()
zipfile1.writestr(zipi, filedata)  # zipfile1 is a zipfile.ZipFile instance
```
I don't know of any ZIP implementations allowing the inclusion of an *empty* folder in a ZIP file. I can think of a workaround (storing a *dummy* filename in the zip "folder" which should be ignored on extraction), but not portable across implementations. |
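For completeness, one workaround that many (though not all) ZIP tools honor is storing a zero-byte entry whose name ends in a slash; most extractors recreate it as an empty directory. A minimal sketch (the archive and folder names are invented for the example):

```python
import zipfile

with zipfile.ZipFile("example.zip", "w") as zf:
    # A zero-length entry ending in "/" is conventionally treated as a
    # directory by most extractors -- but, as noted above, this is a
    # convention rather than a guarantee of the ZIP format.
    zf.writestr("emptyfolder/", "")

with zipfile.ZipFile("example.zip") as zf:
    print(zf.namelist())  # ['emptyfolder/']
```

Whether the directory actually appears after extraction still depends on the unzip tool being used.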
Adding folders to a zip file using python | 458,436 | 31 | 2009-01-19T17:33:17Z | 459,419 | 30 | 2009-01-19T22:21:46Z | [
"python",
"file",
"zip",
"folder",
"zipfile"
] | I want to create a zip file. Add a folder to the zip file and then add a bunch of files to that folder.
So I want to end up with a zip file with a single folder with files in.
I don't know if it's bad practice to have folders in zip files or something, but Google gives me nothing on the subject.
I started out with this:
```
def addFolderToZip(myZipFile,folder):
folder = folder.encode('ascii') #convert path to ascii for ZipFile Method
for file in glob.glob(folder+"/*"):
if os.path.isfile(file):
print file
myZipFile.write(file, os.path.basename(file), zipfile.ZIP_DEFLATED)
elif os.path.isdir(file):
addFolderToZip(myZipFile,file)
def createZipFile(filename,files,folders):
curTime=strftime("__%Y_%m_%d", time.localtime())
filename=filename+curTime;
print filename
zipFilename=utils.getFileName("files", filename+".zip")
myZipFile = zipfile.ZipFile( zipFilename, "w" ) # Open the zip file for writing
for file in files:
file = file.encode('ascii') #convert path to ascii for ZipFile Method
if os.path.isfile(file):
(filepath, filename) = os.path.split(file)
myZipFile.write( file, filename, zipfile.ZIP_DEFLATED )
for folder in folders:
addFolderToZip(myZipFile,folder)
myZipFile.close()
return (1,zipFilename)
(success,filename)=createZipFile(planName,files,folders);
```
Taken from: <http://mail.python.org/pipermail/python-list/2006-August/396166.html>
Which gets rid of all folders and puts all files in the target folder (and its subfolders) into a single zip file. I couldn't get it to add an entire folder.
If I feed the path to a folder in myZipFile.write, I get
> IOError: [Errno 13] Permission denied: '..\packed\bin'
Any help is much appreciated.
Related question: [How do I zip the contents of a folder using python (version 2.5)?](http://stackoverflow.com/questions/296499/how-do-i-zip-the-contents-of-a-folder-using-python-version-2-5) | OK, after I understood what you want, it is as simple as using the second argument of `zipfile.write`, where you can use whatever you want:
```
import zipfile
myZipFile = zipfile.ZipFile("zip.zip", "w" )
myZipFile.write("test.py", "dir\\test.py", zipfile.ZIP_DEFLATED )
```
creates a zipfile where `test.py` would be extracted to a directory called `dir`
EDIT:
I once had to create an empty directory in a zip file: it is possible.
After the code above, just delete the file `test.py` from the zip file: the file is gone, but the empty directory stays.
Adding folders to a zip file using python | 458,436 | 31 | 2009-01-19T17:33:17Z | 6,511,788 | 44 | 2011-06-28T19:05:19Z | [
"python",
"file",
"zip",
"folder",
"zipfile"
] | I want to create a zip file. Add a folder to the zip file and then add a bunch of files to that folder.
So I want to end up with a zip file with a single folder with files in.
I don't know if it's bad practice to have folders in zip files or something, but Google gives me nothing on the subject.
I started out with this:
```
def addFolderToZip(myZipFile,folder):
folder = folder.encode('ascii') #convert path to ascii for ZipFile Method
for file in glob.glob(folder+"/*"):
if os.path.isfile(file):
print file
myZipFile.write(file, os.path.basename(file), zipfile.ZIP_DEFLATED)
elif os.path.isdir(file):
addFolderToZip(myZipFile,file)
def createZipFile(filename,files,folders):
curTime=strftime("__%Y_%m_%d", time.localtime())
filename=filename+curTime;
print filename
zipFilename=utils.getFileName("files", filename+".zip")
myZipFile = zipfile.ZipFile( zipFilename, "w" ) # Open the zip file for writing
for file in files:
file = file.encode('ascii') #convert path to ascii for ZipFile Method
if os.path.isfile(file):
(filepath, filename) = os.path.split(file)
myZipFile.write( file, filename, zipfile.ZIP_DEFLATED )
for folder in folders:
addFolderToZip(myZipFile,folder)
myZipFile.close()
return (1,zipFilename)
(success,filename)=createZipFile(planName,files,folders);
```
Taken from: <http://mail.python.org/pipermail/python-list/2006-August/396166.html>
Which gets rid of all folders and puts all files in the target folder (and its subfolders) into a single zip file. I couldn't get it to add an entire folder.
If I feed the path to a folder in myZipFile.write, I get
> IOError: [Errno 13] Permission denied: '..\packed\bin'
Any help is much appreciated.
Related question: [How do I zip the contents of a folder using python (version 2.5)?](http://stackoverflow.com/questions/296499/how-do-i-zip-the-contents-of-a-folder-using-python-version-2-5) | You can also use shutil
```
import shutil
shutil.make_archive("desired_zipfile_name_no_extension", "zip", "name_of_the_folder_you_want_to_zip")
```
This will put the whole folder in the zip. |
Standard way to embed version into python package? | 458,550 | 126 | 2009-01-19T18:05:54Z | 459,185 | 64 | 2009-01-19T21:13:57Z | [
"python",
"string",
"package"
] | Is there a standard way to associate a version string with a Python package in such a way that I could do the following?
```
import foo
print foo.version
```
I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in `setup.py` already. Alternative solution that I found was to have `import __version__` in my `foo/__init__.py` and then have `__version__.py` generated by `setup.py`. | Not directly an answer to your question, but you should consider naming it `__version__`, not `version`.
This is almost a quasi-standard. Many modules in the standard library use `__version__`, and this is also used in [lots](http://www.google.com/codesearch?as_q=__version__&btnG=Search+Code&hl=en&as_lang=python&as_license_restrict=i&as_license=&as_package=&as_filename=&as_case=) of 3rd-party modules, so it's the quasi-standard.
Usually, `__version__` is a string, but sometimes it's also a float or tuple.
Edit: as mentioned by S.Lott (Thank you!), [PEP 8](http://www.python.org/dev/peps/pep-0008/) says it explicitly:
> Version Bookkeeping
>
> If you have to have Subversion, CVS, or RCS crud in your source file,
> do it as follows.
>
> ```
> __version__ = "$Revision: 63990 $"
> # $Source$
> ```
>
> These lines should be included after the module's docstring, before
> any other code, separated by a blank line above and below.
You should also make sure that the version number conforms to the format described in [PEP 440](http://www.python.org/dev/peps/pep-0440/) ([PEP 386](http://www.python.org/dev/peps/pep-0386/), a previous version of this standard).
Standard way to embed version into python package? | 458,550 | 126 | 2009-01-19T18:05:54Z | 1,131,751 | 17 | 2009-07-15T14:29:04Z | [
"python",
"string",
"package"
] | Is there a standard way to associate a version string with a Python package in such a way that I could do the following?
```
import foo
print foo.version
```
I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in `setup.py` already. Alternative solution that I found was to have `import __version__` in my `foo/__init__.py` and then have `__version__.py` generated by `setup.py`. | Though this is probably far too late, there is a slightly simpler alternative to the previous answer:
```
__version_info__ = ('1', '2', '3')
__version__ = '.'.join(__version_info__)
```
(And it would be fairly simple to convert auto-incrementing portions of version numbers to a string using `str()`.)
Of course, from what I've seen, people tend to use something like the previously-mentioned version when using `__version_info__`, and as such store it as a tuple of ints; however, I don't quite see the point in doing so, as I doubt there are situations where you would perform mathematical operations such as addition and subtraction on portions of version numbers for any purpose besides curiosity or auto-incrementation (and even then, `int()` and `str()` can be used fairly easily). (On the other hand, there is the possibility of someone else's code expecting a numerical tuple rather than a string tuple and thus failing.)
This is, of course, my own view, and I would gladly like others' input on using a numerical tuple.
---
As shezi reminded me, (lexical) comparisons of number strings do not necessarily have the same result as direct numerical comparisons; leading zeroes would be required to provide for that. So in the end, storing `__version_info__` (or whatever it would be called) as a tuple of integer values would allow for more efficient version comparisons. |
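To make the comparison pitfall concrete: tuples of digit strings compare lexically, so a two-digit component sorts before a smaller one-digit component, while integer tuples compare numerically as intended:

```python
# Version 1.9.0 vs 1.10.0, stored two different ways:
assert ("1", "9", "0") > ("1", "10", "0")  # string tuples: "9" > "10" lexically (wrong)
assert (1, 9, 0) < (1, 10, 0)              # int tuples: 9 < 10 numerically (right)
```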
Standard way to embed version into python package? | 458,550 | 126 | 2009-01-19T18:05:54Z | 7,071,358 | 67 | 2011-08-15T22:04:35Z | [
"python",
"string",
"package"
] | Is there a standard way to associate a version string with a Python package in such a way that I could do the following?
```
import foo
print foo.version
```
I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in `setup.py` already. Alternative solution that I found was to have `import __version__` in my `foo/__init__.py` and then have `__version__.py` generated by `setup.py`. | Here is how I do this. Advantages of the following method:
1. It provides a `__version__` attribute.
2. It provides the standard metadata version. Therefore it will be detected by `pkg_resources` or other tools that parse the package metadata (EGG-INFO and/or PKG-INFO, PEP 0345).
3. It doesn't import your package (or anything else) when building your package, which can cause problems in some situations. (See the comments below about what problems this can cause.)
4. There is only one place that the version number is written down, so there is only one place to change it when the version number changes, and there is less chance of inconsistent versions.
Here is how it works: the "one canonical place" to store the version number is a .py file named "\_version.py", which lives in your Python package, for example in `myniftyapp/_version.py`. This file is a Python module, but your setup.py doesn't import it! (That would defeat feature 3.) Instead, your setup.py knows that the contents of this file are very simple, something like:
```
__version__ = "3.6.5"
```
And so your setup.py opens the file and parses it, with code like:
```
import re
VERSIONFILE="myniftyapp/_version.py"
verstrline = open(VERSIONFILE, "rt").read()
VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
mo = re.search(VSRE, verstrline, re.M)
if mo:
verstr = mo.group(1)
else:
raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,))
```
Then your setup.py passes that string as the value of the "version" argument to `setup()`, thus satisfying feature 2.
To satisfy feature 1, you can have your package (at run-time, not at setup time!) import the \_version file from `myniftyapp/__init__.py` like this:
```
from _version import __version__
```
Here is [an example of this technique](https://tahoe-lafs.org/trac/zfec/browser/trunk/zfec/setup.py?rev=390#L84) that I've been using for years.
The code in that example is a bit more complicated, but the simplified example that I wrote into this comment should be a complete implementation.
Here is [example code of importing the version](https://tahoe-lafs.org/trac/zfec/browser/trunk/zfec/zfec/__init__.py?rev=363).
If you see anything wrong with this approach, please let me know: zooko at zooko dot com. If you don't see anything wrong with this approach then use it! Because the more packages come with their version numbers in the expected places the better! |
Standard way to embed version into python package? | 458,550 | 126 | 2009-01-19T18:05:54Z | 15,952,533 | 19 | 2013-04-11T15:16:03Z | [
"python",
"string",
"package"
] | Is there a standard way to associate a version string with a Python package in such a way that I could do the following?
```
import foo
print foo.version
```
I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in `setup.py` already. Alternative solution that I found was to have `import __version__` in my `foo/__init__.py` and then have `__version__.py` generated by `setup.py`. | Many of the existing answers argue that "There doesn't seem to be a standard way" or that a style "is almost a quasi-standard."
In fact *there is a standard way* to do this\*:
* **[PEP 396](http://www.python.org/dev/peps/pep-0396/): Module Version Numbers**
This describes, with rationale, an (admittedly optional) standard for modules to follow. Here's a snippet:
> 3) When a module (or package) includes a version number, the version SHOULD be available in the `__version__` attribute.
>
> 4) For modules which live inside a namespace package, the module SHOULD include the `__version__` attribute. The namespace package itself SHOULD NOT include its own `__version__` attribute.
>
> 5) The `__version__` attribute's value SHOULD be a string.
\* Edited to add: As per the comments, this actually is **not** an accepted standard; it was deferred. |
Standard way to embed version into python package? | 458,550 | 126 | 2009-01-19T18:05:54Z | 16,084,844 | 44 | 2013-04-18T13:50:57Z | [
"python",
"string",
"package"
] | Is there a standard way to associate a version string with a Python package in such a way that I could do the following?
```
import foo
print foo.version
```
I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in `setup.py` already. Alternative solution that I found was to have `import __version__` in my `foo/__init__.py` and then have `__version__.py` generated by `setup.py`. | Here is the best solution I've seen so far and it also explains why:
Inside `yourpackage/version.py`:
```
# Store the version here so:
# 1) we don't load dependencies by storing it in __init__.py
# 2) we can import it in setup.py for the same reason
# 3) we can import it into your module
__version__ = '0.12'
```
Inside `yourpackage/__init__.py`:
```
from .version import __version__
```
Inside `setup.py`:
```
exec(open('yourpackage/version.py').read())
setup(
...
version=__version__,
    ...
)
```
If you know another approach that seems to be better let me know. |
How do you run your own code alongside Tkinter's event loop? | 459,083 | 64 | 2009-01-19T20:40:39Z | 459,131 | 79 | 2009-01-19T20:55:36Z | [
"python",
"events",
"tkinter"
] | My little brother is just getting into programming, and for his Science Fair project, he's doing a simulation of a flock of birds in the sky. He's gotten most of his code written, and it works nicely, but the birds need to move *every moment*.
Tkinter, however, hogs the time for its own event loop, and so his code won't run. Doing `root.mainloop()` runs, runs, and keeps running, and the only thing it runs is the event handlers.
Is there a way to have his code run alongside the mainloop (without multithreading, it's confusing and this should be kept simple), and if so, what is it?
Right now, he came up with an ugly hack, tying his `move()` function to `<b1-motion>`, so that as long as he holds the button down and wiggles the mouse, it works. But there's got to be a better way. | Use the `after` method on the `Tk` object:
```
from tkinter import *
root = Tk()
def task():
print("hello")
root.after(2000, task) # reschedule event in 2 seconds
root.after(2000, task)
root.mainloop()
```
Here's the declaration and documentation for the `after` method:
```
def after(self, ms, func=None, *args):
"""Call function once after given time.
MS specifies the time in milliseconds. FUNC gives the
function which shall be called. Additional parameters
are given as parameters to the function call. Return
identifier to cancel scheduling with after_cancel."""
``` |
How do you run your own code alongside Tkinter's event loop? | 459,083 | 64 | 2009-01-19T20:40:39Z | 1,835,036 | 26 | 2009-12-02T18:55:46Z | [
"python",
"events",
"tkinter"
] | My little brother is just getting into programming, and for his Science Fair project, he's doing a simulation of a flock of birds in the sky. He's gotten most of his code written, and it works nicely, but the birds need to move *every moment*.
Tkinter, however, hogs the time for its own event loop, and so his code won't run. Doing `root.mainloop()` runs, runs, and keeps running, and the only thing it runs is the event handlers.
Is there a way to have his code run alongside the mainloop (without multithreading, it's confusing and this should be kept simple), and if so, what is it?
Right now, he came up with an ugly hack, tying his `move()` function to `<b1-motion>`, so that as long as he holds the button down and wiggles the mouse, it works. But there's got to be a better way. | The solution posted by Bjorn results in a "RuntimeError: Calling
Tcl from different appartment" message on my computer (RedHat Enterprise 5, python 2.6.1). Bjorn might not have gotten this message, since, according to [one place I checked](http://www.mail-archive.com/[email protected]/msg01808.html), mishandling threading with Tkinter is unpredictable and platform-dependent.
The problem seems to be that `app.start()` counts as a reference to Tk, since app contains Tk elements. I fixed this by replacing `app.start()` with a `self.start()` inside `__init__`. I also made it so that all Tk references are either inside the *function that calls `mainloop()`* or are inside *functions that are called by* the function that calls `mainloop()` (this is apparently critical to avoid the "different apartment" error).
Finally, I added a protocol handler with a callback, since without this the program exits with an error when the Tk window is closed by the user.
The revised code is as follows:
```
# Run tkinter code in another thread
import tkinter as tk
import threading
class App(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.start()
def callback(self):
self.root.quit()
def run(self):
self.root = tk.Tk()
self.root.protocol("WM_DELETE_WINDOW", self.callback)
label = tk.Label(self.root, text="Hello World")
label.pack()
self.root.mainloop()
app = App()
print('Now we can continue running code while mainloop runs!')
for i in range(100000):
print(i)
``` |
How do you run your own code alongside Tkinter's event loop? | 459,083 | 64 | 2009-01-19T20:40:39Z | 4,836,121 | 8 | 2011-01-29T09:35:20Z | [
"python",
"events",
"tkinter"
] | My little brother is just getting into programming, and for his Science Fair project, he's doing a simulation of a flock of birds in the sky. He's gotten most of his code written, and it works nicely, but the birds need to move *every moment*.
Tkinter, however, hogs the time for its own event loop, and so his code won't run. Doing `root.mainloop()` runs, runs, and keeps running, and the only thing it runs is the event handlers.
Is there a way to have his code run alongside the mainloop (without multithreading, it's confusing and this should be kept simple), and if so, what is it?
Right now, he came up with an ugly hack, tying his `move()` function to `<b1-motion>`, so that as long as he holds the button down and wiggles the mouse, it works. But there's got to be a better way. | When writing your own loop, as in the simulation (I assume), you need to call the `update` function which does what the `mainloop` does: updates the window with your changes, but you do it in your loop.
```
def task():
# do something
root.update()
while 1:
task()
``` |
Filtering by relation count in SQLAlchemy | 459,125 | 6 | 2009-01-19T20:53:54Z | 459,313 | 11 | 2009-01-19T21:48:24Z | [
"python",
"sql",
"database",
"sqlalchemy",
"pylons"
] | I'm using the SQLAlchemy Python ORM in a Pylons project. I have a class "Project" which has a one to many relationship with another class "Entry". I want to do a query in SQLAlchemy that gives me all of the projects which have one or more entries associated with them. At the moment I'm doing:
```
[project for project in Session.query(Project) if len(project.entries)>0]
```
which I know isn't ideal, but I can't figure out how to do a filter that does what I require (e.g. Session.query(Project).filter(Project.entries.exists())).
Any ideas? | Project.entries.any() should work. |
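Expanded into a self-contained sketch -- the `Project`/`Entry` models are reconstructed from the question, and SQLAlchemy 1.4+ import paths are assumed. `Project.entries.any()` renders as an `EXISTS` subquery, so the filtering happens in SQL instead of loading every project's entries into Python:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Project(Base):
    __tablename__ = "projects"
    id = Column(Integer, primary_key=True)
    entries = relationship("Entry", backref="project")

class Entry(Base):
    __tablename__ = "entries"
    id = Column(Integer, primary_key=True)
    project_id = Column(Integer, ForeignKey("projects.id"))

# In-memory SQLite just to make the example runnable.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)()

with_entry = Project(entries=[Entry()])
empty = Project()
Session.add_all([with_entry, empty])
Session.commit()

# EXISTS-based filter: only projects that have at least one entry.
projects = Session.query(Project).filter(Project.entries.any()).all()
print(len(projects))  # 1
```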
BeautifulSoup - modifying all links in a piece of HTML? | 459,981 | 13 | 2009-01-20T02:52:19Z | 459,991 | 29 | 2009-01-20T03:02:34Z | [
"python",
"beautifulsoup"
] | I need to be able to modify every single link in an HTML document. I know that I need to use the `SoupStrainer` but I'm not 100% positive on how to implement it. If someone could direct me to a good resource or provide a code example, it'd be very much appreciated.
Thanks. | Maybe something like this would work? (I don't have a Python interpreter in front of me, unfortunately)
```
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup('<p>Blah blah blah <a href="http://google.com">Google</a></p>')
for a in soup.findAll('a'):
a['href'] = a['href'].replace("google", "mysite")
result = str(soup)
``` |
BeautifulSoup - modifying all links in a piece of HTML? | 459,981 | 13 | 2009-01-20T02:52:19Z | 460,002 | 22 | 2009-01-20T03:09:49Z | [
"python",
"beautifulsoup"
] | I need to be able to modify every single link in an HTML document. I know that I need to use the `SoupStrainer` but I'm not 100% positive on how to implement it. If someone could direct me to a good resource or provide a code example, it'd be very much appreciated.
Thanks. | ```
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup('<p>Blah blah blah <a href="http://google.com">Google</a></p>')
for a in soup.findAll('a'):
a['href'] = a['href'].replace("google", "mysite")
print str(soup)
```
This is Lusid's solution, but since he didn't have a Python interpreter in front of him, he wasn't able to test it and it had a few errors. I just wanted to post a working version. Thanks, Lusid!
Algorithm to generate spanning set | 460,479 | 10 | 2009-01-20T08:38:46Z | 460,529 | 11 | 2009-01-20T09:02:54Z | [
"python",
"algorithm"
] | Given this input: [1,2,3,4]
I'd like to generate the set of spanning sets:
```
[1] [2] [3] [4]
[1] [2] [3,4]
[1] [2,3] [4]
[1] [3] [2,4]
[1,2] [3] [4]
[1,3] [2] [4]
[1,4] [2] [3]
[1,2] [3,4]
[1,3] [2,4]
[1,4] [2,3]
[1,2,3] [4]
[1,2,4] [3]
[1,3,4] [2]
[2,3,4] [1]
[1,2,3,4]
```
Every set has all the elements of the original set, permuted to appear in unique subsets. What is the algorithm that produces these sets? I've tried Python generator functions using choose, permutation, combination, power set, and so on, but can't get the right combination.
20 Jan 2009
This is not a homework question. This is an improved answer I was working on for www.projecteuler.net problem # 118. I already had a slow solution but came up with a better way -- except I could not figure out how to do the spanning set.
I'll post my code when I get back from an Inauguration Party.
21 Jan 2009
This is the eventual algorithm I used:
```
def spanningsets(items):
if len(items) == 1:
yield [items]
else:
left_set, last = items[:-1], [items[-1]]
for cc in spanningsets(left_set):
yield cc + [last]
for i,elem in enumerate(cc):
yield cc[:i] + [elem + last] + cc[i+1:]
```
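As a quick sanity check of the algorithm above: the number of ways to partition an n-element set is the Bell number, and for four elements that is 15 -- matching the listing at the top of the question:

```python
def spanningsets(items):
    # Partition generator from above, reproduced so this snippet stands alone.
    if len(items) == 1:
        yield [items]
    else:
        left_set, last = items[:-1], [items[-1]]
        for cc in spanningsets(left_set):
            yield cc + [last]                 # put the new element in its own block
            for i, elem in enumerate(cc):
                yield cc[:i] + [elem + last] + cc[i + 1:]  # or join an existing block

parts = list(spanningsets([1, 2, 3, 4]))
print(len(parts))  # 15, the Bell number B(4)
```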
@Yuval F: I know how to do a powerset. Here's a straightforward implementation:
```
def powerset(s) :
length = len(s)
for i in xrange(0, 2**length) :
yield [c for j, c in enumerate(s) if (1 << j) & i]
return
``` | This should work, though I haven't tested it enough.
```
def spanningsets(items):
if not items: return
if len(items) == 1:
yield [[items[-1]]]
else:
for cc in spanningsets(items[:-1]):
yield cc + [[items[-1]]]
for i in range(len(cc)):
yield cc[:i] + [cc[i] + [items[-1]]] + cc[i+1:]
for sset in spanningsets([1, 2, 3, 4]):
print ' '.join(map(str, sset))
```
Output:
```
[1] [2] [3] [4]
[1, 4] [2] [3]
[1] [2, 4] [3]
[1] [2] [3, 4]
[1, 3] [2] [4]
[1, 3, 4] [2]
[1, 3] [2, 4]
[1] [2, 3] [4]
[1, 4] [2, 3]
[1] [2, 3, 4]
[1, 2] [3] [4]
[1, 2, 4] [3]
[1, 2] [3, 4]
[1, 2, 3] [4]
[1, 2, 3, 4]
``` |
Simulating a 'local static' variable in python | 460,586 | 23 | 2009-01-20T09:32:42Z | 460,601 | 15 | 2009-01-20T09:39:21Z | [
"python"
] | Consider the following code:
```
def CalcSomething(a):
if CalcSomething._cache.has_key(a):
return CalcSomething._cache[a]
CalcSomething._cache[a] = ReallyCalc(a)
return CalcSomething._cache[a]
CalcSomething._cache = { }
```
This is the easiest way I can think of for simulating a 'local static' variable in python.
What bothers me is that CalcSomething.\_cache is mentioned outside the function's definition, but the alternative would be something like that:
```
if not hasattr(CalcSomething, "_cache"):
setattr(CalcSomething, "_cache", { } )
```
inside the function's definition, which is really cumbersome.
Is there a more elegant way?
[EDIT]
Just to clarify, this question is not about local function caches, as the example above might suggest. Here is another short example where a 'static local' might be handy:
```
def ParseString(s):
return ParseString._parser.parse(s)
# Create a Parser object once, which will be used for all parsings.
# Assuming a Parser object is heavy on resources, for the sake of this example.
ParseString._parser = Parser()
``` | Turn it into a decorator.
```
def static_var(var_name, initial_value):
def _set_var(obj):
setattr(obj, var_name, initial_value)
return obj
return _set_var
@static_var("_cache", {})
def CalcSomething(a):
...
``` |
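For instance, filling in the function body from the question (with a trivial stand-in for `ReallyCalc`), the decorated function carries its own cache attribute:

```python
def static_var(var_name, initial_value):
    # Decorator from above, repeated so the snippet runs on its own.
    def _set_var(obj):
        setattr(obj, var_name, initial_value)
        return obj
    return _set_var

@static_var("_cache", {})
def calc_something(a):
    if a not in calc_something._cache:
        calc_something._cache[a] = a * a  # stand-in for ReallyCalc(a)
    return calc_something._cache[a]

print(calc_something(4), calc_something._cache)  # 16 {4: 16}
```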
Simulating a 'local static' variable in python | 460,586 | 23 | 2009-01-20T09:32:42Z | 460,691 | 10 | 2009-01-20T10:10:57Z | [
"python"
] | Consider the following code:
```
def CalcSomething(a):
if CalcSomething._cache.has_key(a):
return CalcSomething._cache[a]
CalcSomething._cache[a] = ReallyCalc(a)
return CalcSomething._cache[a]
CalcSomething._cache = { }
```
This is the easiest way I can think of for simulating a 'local static' variable in python.
What bothers me is that CalcSomething.\_cache is mentioned outside the function's definition, but the alternative would be something like that:
```
if not hasattr(CalcSomething, "_cache"):
setattr(CalcSomething, "_cache", { } )
```
inside the function's definition, which is really cumbersome.
Is there a more elegant way?
[EDIT]
Just to clarify, this question is not about local function caches, as the example above might suggest. Here is another short example where a 'static local' might be handy:
```
def ParseString(s):
return ParseString._parser.parse(s)
# Create a Parser object once, which will be used for all parsings.
# Assuming a Parser object is heave on resources, for the sake of this example.
ParseString._parser = Parser()
``` | Consider writing decorator that will maintain cache and your function won't be contaminated by caching code:
```
def cacheResults(aFunc):
    '''This decorator function binds a map between the tuple of arguments
and results computed by aFunc for those arguments'''
def cachedFunc(*args):
if not hasattr(aFunc, '_cache'):
aFunc._cache = {}
if args in aFunc._cache:
return aFunc._cache[args]
newVal = aFunc(*args)
aFunc._cache[args] = newVal
return newVal
return cachedFunc
@cacheResults
def ReallyCalc(a):
'''This function does only actual computation'''
return pow(a, 42)
```
Maybe it doesn't look great at first, but you can use `cacheResults()` anywhere you don't need keyword parameters. It is possible to create a similar decorator that would also work for keyword params, but that didn't seem necessary this time.
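As a sketch of that keyword-capable variant, one possible approach is to fold the keyword arguments into a hashable key with `frozenset` (this assumes every argument value is hashable):

```python
def cache_results_kw(func):
    # Like cacheResults, but folds keyword arguments into the cache key.
    # All argument values must be hashable for this to work.
    def cached(*args, **kwargs):
        if not hasattr(func, '_cache'):
            func._cache = {}
        key = (args, frozenset(kwargs.items()))
        if key not in func._cache:
            func._cache[key] = func(*args, **kwargs)
        return func._cache[key]
    return cached

@cache_results_kw
def power(base, exp=2):
    return base ** exp

r1 = power(3)          # computed: 9
r2 = power(3, exp=3)   # different key, computed: 27
r3 = power(3)          # cache hit: 9
```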
Simulating a 'local static' variable in python | 460,586 | 23 | 2009-01-20T09:32:42Z | 460,811 | 44 | 2009-01-20T10:57:30Z | [
"python"
] | Consider the following code:
```
def CalcSomething(a):
if CalcSomething._cache.has_key(a):
return CalcSomething._cache[a]
CalcSomething._cache[a] = ReallyCalc(a)
return CalcSomething._cache[a]
CalcSomething._cache = { }
```
This is the easiest way I can think of for simulating a 'local static' variable in python.
What bothers me is that CalcSomething.\_cache is mentioned outside the function's definition, but the alternative would be something like this:
```
if not hasattr(CalcSomething, "_cache"):
setattr(CalcSomething, "_cache", { } )
```
inside the function's definition, which is really cumbersome.
Is there a more elegant way?
[EDIT]
Just to clarify, this question is not about local function caches, as the example above might suggest. Here is another short example where a 'static local' might be handy:
```
def ParseString(s):
return ParseString._parser.parse(s)
# Create a Parser object once, which will be used for all parsings.
# Assuming a Parser object is heavy on resources, for the sake of this example.
ParseString._parser = Parser()
``` | Turn it into a callable object (since that's what it really is.)
```
class CalcSomething(object):
def __init__(self):
self._cache = {}
def __call__(self, a):
if a not in self._cache:
self._cache[a] = self.reallyCalc(a)
return self._cache[a]
def reallyCalc(self, a):
return # a real answer
calcSomething = CalcSomething()
```
Now you can use `calcSomething` as if it were a function. But it remains tidy and self-contained. |
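A quick usage sketch of the callable object, with doubling as a placeholder for the real computation in `reallyCalc`:

```python
class CalcSomething(object):
    def __init__(self):
        self._cache = {}
    def __call__(self, a):
        if a not in self._cache:
            self._cache[a] = self.reallyCalc(a)
        return self._cache[a]
    def reallyCalc(self, a):
        return a * 2   # placeholder for the real answer

calc_something = CalcSomething()
result = calc_something(21)   # computed once...
again = calc_something(21)    # ...then served from self._cache
```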
Packet sniffing in Python (Windows) | 462,439 | 5 | 2009-01-20T18:18:33Z | 462,447 | 9 | 2009-01-20T18:20:59Z | [
"python",
"sniffing",
"sniffer"
] | What is the best way to sniff network packets using Python?
I've heard from several places that the best module for this is a module called Scapy, unfortunately, it makes python.exe crash on my system. I would assume that it's just a problem with how I installed it, except that many other people have told me that it doesn't work particularly well on Windows. (If anyone is interested, I'm running Windows Vista, which might affect things).
Does anyone know of a better solution?
UPD:
After reading the answer telling me to install PyPcap, I messed around with it a bit and found out that Scapy, which I had tried using, was telling me to install PyPcap as well, except that it's a modified version for its use. It was this modified PyPcap that was causing the problem, apparently, since the example in the answer also caused a hang.
I installed the original version of PyPcap (from Google's site), and Scapy started working fine (I didn't try many things, but at least it didn't crash as soon as I started sniffing). I sent a new defect ticket to the Scapy developers: <http://trac.secdev.org/scapy/ticket/166>, hope they can do something with it.
Anyways, just thought I'd let y'all know. | Use [python-libpcap](http://sourceforge.net/projects/pylibpcap/).
```
import pcap
p = pcap.pcapObject()
dev = pcap.lookupdev()
p.open_live(dev, 1600, 0, 100)
#p.setnonblock(1)
try:
for pktlen, data, timestamp in p:
print "[%s] Got data: %s" % (time.strftime('%H:%M',
time.localtime(timestamp)),
data)
except KeyboardInterrupt:
print '%s' % sys.exc_type
print 'shutting down'
print ('%d packets received, %d packets dropped'
' %d packets dropped by interface') % p.stats()
``` |
Packet sniffing in Python (Windows) | 462,439 | 5 | 2009-01-20T18:18:33Z | 462,497 | 7 | 2009-01-20T18:31:15Z | [
"python",
"sniffing",
"sniffer"
] | What is the best way to sniff network packets using Python?
I've heard from several places that the best module for this is a module called Scapy, unfortunately, it makes python.exe crash on my system. I would assume that it's just a problem with how I installed it, except that many other people have told me that it doesn't work particularly well on Windows. (If anyone is interested, I'm running Windows Vista, which might affect things).
Does anyone know of a better solution?
UPD:
After reading the answer telling me to install PyPcap, I messed around with it a bit and found out that Scapy, which I had tried using, was telling me to install PyPcap as well, except that it's a modified version for its use. It was this modified PyPcap that was causing the problem, apparently, since the example in the answer also caused a hang.
I installed the original version of PyPcap (from Google's site), and Scapy started working fine (I didn't try many things, but at least it didn't crash as soon as I started sniffing). I sent a new defect ticket to the Scapy developers: <http://trac.secdev.org/scapy/ticket/166>, hope they can do something with it.
Anyways, just thought I'd let y'all know. | Using [pypcap](http://code.google.com/p/pypcap/):
```
import dpkt, pcap
pc = pcap.pcap() # construct pcap object
pc.setfilter('icmp') # filter out unwanted packets
for timestamp, packet in pc:
print dpkt.ethernet.Ethernet(packet)
```
output sample:
```
Ethernet(src='\x00\x03G\xb2M\xe4', dst='\x00\x03G\x06h\x18', data=IP(src='\n\x00\x01\x1c',
dst='\n\x00\x01\x10', sum=39799, len=60, p=1, ttl=128, id=35102, data=ICMP(sum=24667,
type=8, data=Echo(id=512, seq=60160, data='abcdefghijklmnopqrstuvwabcdefghi'))))
Ethernet(src='\x00\x03G\x06h\x18', dst='\x00\x03G\xb2M\xe4', data=IP(src='\n\x00\x01\x10',
dst='\n\x00\x01\x1c', sum=43697, len=60, p=1, ttl=255, id=64227, data=ICMP(sum=26715,
data=Echo(id=512, seq=60160, data='abcdefghijklmnopqrstuvwabcdefghi'))))
``` |
Can I get the matrix determinant using Numpy? | 462,500 | 29 | 2009-01-20T18:32:16Z | 462,514 | 51 | 2009-01-20T18:35:26Z | [
"python",
"numpy"
] | I read in the manual of Numpy that there is function `det(M)` that can calculate the determinant. However, I can't find the `det()` method in Numpy.
By the way, I use Python 2.5. There should be no compatibility problems with Numpy. | You can use [`numpy.linalg.det`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.det.html) to compute the determinant of an array:
```
In [1]: import numpy
In [2]: M = [[1, 2], [3, 4]]
In [3]: numpy.linalg.det(M)
Out[3]: -2.0000000000000004
``` |
Can I get the matrix determinant using Numpy? | 462,500 | 29 | 2009-01-20T18:32:16Z | 19,317,237 | 14 | 2013-10-11T11:37:57Z | [
"python",
"numpy"
] | I read in the manual of Numpy that there is function `det(M)` that can calculate the determinant. However, I can't find the `det()` method in Numpy.
By the way, I use Python 2.5. There should be no compatibility problems with Numpy. | For **large arrays** underflow/overflow may occur when using `numpy.linalg.det`, or you may get `inf` or `-inf` as an answer.
In many of these cases you can use `numpy.linalg.slogdet` ([see documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.slogdet.html)), which returns:
```
(sign, logdet)
```
where `sign` is the sign and `logdet` the logarithm of the determinant, such that you can calculate the determinant `det` doing:
```
det = sign * np.exp(logdet)
``` |
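For example, on a small matrix whose determinant is negative, the sign returned by `slogdet` must be folded back in to recover the signed determinant:

```python
import numpy as np

m = np.array([[1.0, 2.0], [3.0, 4.0]])   # det = 1*4 - 2*3 = -2
sign, logdet = np.linalg.slogdet(m)       # sign = -1.0, logdet = log(2)
det = sign * np.exp(logdet)               # recover the signed determinant
```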
UnicodeEncodeError with BeautifulSoup 3.1.0.1 and Python 2.5.2 | 463,215 | 4 | 2009-01-20T21:33:25Z | 463,382 | 11 | 2009-01-20T22:24:20Z | [
"python",
"encoding",
"screen-scraping",
"beautifulsoup"
] | With BeautifulSoup 3.1.0.1 and Python 2.5.2, and trying to parse a web page in French. However, as soon as I call findAll, I get the following error:
*UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 1146: ordinal not in range(128)*
Below is the code I am currently running:
```
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://fr.encarta.msn.com/encyclopedia_761561798/Paris.html")
soup = BeautifulSoup(page, fromEncoding="latin1")
r = soup.findAll("table")
print r
```
Does anybody have an idea why?
Thanks!
**UPDATE**: As requested, below is the full Traceback
```
Traceback (most recent call last):
File "[...]\test.py", line 6, in <module>
print r
UnicodeEncodeError: 'ascii' codec can't encode characters in position 1146-1147: ordinal not in range(128)
Here is another idea. Your terminal is not capable of displaying a Unicode string from Python. The interpreter tries to convert it to ASCII first. You should encode it explicitly before printing. I don't know the exact semantics of `soup.findAll()`. But it is probably something like:
```
for t in soup.findAll("table"):
print t.encode('latin1')
```
This works if `t` really is a string. Maybe it's just another object from which you have to build the data that you want to display.
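As an isolated sketch of the encode step (assuming a Latin-1 target; `u'\xe9'` is 'é'):

```python
u = u'caf\xe9'                  # the unicode string u'café'
b = u.encode('latin1')          # bytes suitable for a Latin-1 terminal
roundtrip = b.decode('latin1')  # decoding recovers the original string
```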
How can I type-check variables in Python? | 463,604 | 20 | 2009-01-20T23:58:45Z | 463,630 | 21 | 2009-01-21T00:06:11Z | [
"python",
"typing",
"dynamic-typing"
] | I have a Python function that takes a numeric argument that **must** be an integer in order for it to behave correctly. What is the preferred way of verifying this in Python?
My first reaction is to do something like this:
```
def isInteger(n):
return int(n) == n
```
But I can't help thinking that this is 1) expensive 2) ugly and 3) subject to the tender mercies of machine epsilon.
Does Python provide any native means of type checking variables? Or is this considered to be a violation of the language's dynamically typed design?
EDIT: since a number of people have asked - the application in question works with IPv4 prefixes, sourcing data from flat text files. If any input is parsed into a float, that record should be viewed as malformed and ignored. | ```
isinstance(n, int)
```
If you need to know whether it's definitely an actual int and not a subclass of int (generally you shouldn't need to do this):
```
type(n) is int
```
this:
```
return int(n) == n
```
isn't such a good idea, as cross-type comparisons can be true - notably `int(3.0)==3.0` |
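One caveat worth knowing: `bool` is a subclass of `int`, so the two checks above disagree on `True`:

```python
a = isinstance(3, int)       # True
b = isinstance(3.0, int)     # False: floats are not ints
c = isinstance(True, int)    # True: bool subclasses int
d = type(True) is int        # False: the strict check excludes bool
```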
How can I type-check variables in Python? | 463,604 | 20 | 2009-01-20T23:58:45Z | 463,953 | 17 | 2009-01-21T02:50:17Z | [
"python",
"typing",
"dynamic-typing"
] | I have a Python function that takes a numeric argument that **must** be an integer in order for it to behave correctly. What is the preferred way of verifying this in Python?
My first reaction is to do something like this:
```
def isInteger(n):
return int(n) == n
```
But I can't help thinking that this is 1) expensive 2) ugly and 3) subject to the tender mercies of machine epsilon.
Does Python provide any native means of type checking variables? Or is this considered to be a violation of the language's dynamically typed design?
EDIT: since a number of people have asked - the application in question works with IPv4 prefixes, sourcing data from flat text files. If any input is parsed into a float, that record should be viewed as malformed and ignored. | Yeah, as Evan said, don't type check. Just try to use the value:
```
def myintfunction(value):
""" Please pass an integer """
return 2 + value
```
That doesn't have a typecheck. It is much better! Let's see what happens when I try it:
```
>>> myintfunction(5)
7
```
That works, because it is an integer. Hm. Let's try some text.
```
>>> myintfunction('text')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in myintfunction
TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
It shows an error, TypeError, which is what it should do anyway. If caller wants to catch that, it is possible.
What would you do if you did a typecheck? Show an error right? So you don't have to typecheck because the error is already showing up automatically.
Plus since you didn't typecheck, you have your function working with other types:
Floats:
```
>>> print myintfunction(2.2)
4.2
```
Complex numbers:
```
>>> print myintfunction(5j)
(2+5j)
```
Decimals:
```
>>> import decimal
>>> myintfunction(decimal.Decimal('15'))
Decimal("17")
```
Even completely arbitrary objects that can add numbers!
```
>>> class MyAdderClass(object):
... def __radd__(self, value):
... print 'got some value: ', value
... return 25
...
>>> m = MyAdderClass()
>>> print myintfunction(m)
got some value: 2
25
```
So you clearly get nothing by typechecking. And lose a lot.
---
# UPDATE:
Since you've edited the question, it is now clear that your application calls some upstream routine that makes sense only with ints.
That being the case, I still think you should pass the parameter **as received** to the upstream function. The upstream function will deal with it correctly, e.g. by raising an error if it needs to. I highly **doubt** that your function that deals with IPs will behave strangely if you pass it a float. If you can give us the name of the library, we can check that for you.
But... If the upstream function will behave incorrectly and kill some kids if you pass it a float (I still highly doubt it), then just call `int()` on it:
```
def myintfunction(value):
""" Please pass an integer """
return upstreamfunction(int(value))
```
You're still not typechecking, so you get most benefits of not typechecking.
---
If even after all that, you really want to type check, despite it reducing your application's readability and performance for absolutely no benefit, use an `assert` to do it.
```
assert isinstance(...)
assert type() is xxxx
```
That way we can turn off `assert`s and remove this `<sarcasm>`*feature*`</sarcasm>` from the program by calling it as
```
python -OO program.py
``` |
Python - Doing absolute imports from a subfolder | 463,643 | 11 | 2009-01-21T00:10:47Z | 463,904 | 10 | 2009-01-21T02:15:30Z | [
"python",
"python-import"
] | Basically I'm asking the same question as this guy: <http://stackoverflow.com/questions/72852/how-to-do-relative-imports-in-python>
But no one gave him a correct answer. Given that you are inside a subfolder and you want to go up a directory and then into ANOTHER subfolder, doing what they suggested does not work (as the OP pointed out in his comments to their answers).
I know that you can do this by using sys.path, but I would prefer a cleaner method.
Example:
```
App
__init__.py
Package_A
--__init__.py
--Module_A.py
Package_B
--__init__.py
--Module_B.py
```
How would I import Module\_A into Module\_B? | ```
main.py
setup.py
app/ ->
__init__.py
package_a/ ->
__init__.py
module_a.py
package_b/ ->
__init__.py
module_b.py
```
1. You run `python main.py`.
2. `main.py` does: `import app.package_a.module_a`
3. `module_a.py` does `import app.package_b.module_b`
Alternatively 2 or 3 could use: `from app.package_a import module_a`
That will work as long as you have `app` in your PYTHONPATH. `main.py` could be anywhere then.
So you write a `setup.py` to copy (install) the whole app package and subpackages to the target system's python folders, and `main.py` to the target system's script folder.
Django missing translation of some strings. Any idea why? | 463,714 | 7 | 2009-01-21T00:38:19Z | 463,741 | 8 | 2009-01-21T00:54:59Z | [
"python",
"django",
"internationalization",
"translation"
] | I have a medium sized Django project, (running on AppEngine if it makes any difference), and have all the strings living in .po files like they should.
I'm seeing strange behavior where certain strings just don't translate. They show up in the .po file when I run make\_messages, with the correct file locations marked where my {% trans %} tags are. The translations are in place and look correct compared to other strings on either side of them. But when I display the page in question, about 1/4 of the strings simply don't translate.
Digging into the relevant generated .mo file, I don't see either the msgid or the msgstr present.
Has anybody seen anything similar to this? Any idea what might be happening?
* trans tags look correct
* .po files look correct
* no errors during compile\_messages | Ugh. Django, you're killing me.
Here's what was happening:
<http://blog.e-shell.org/124>
For some reason only Django knows, it decided to decorate some of my translations with the comment '# fuzzy'. It seems to have chosen which ones to mark randomly.
Anyway, #fuzzy means this: "don't translate this, even though here's the translation:"
I'll leave this here in case some other poor soul comes across it in the future. |
Django missing translation of some strings. Any idea why? | 463,714 | 7 | 2009-01-21T00:38:19Z | 463,928 | 10 | 2009-01-21T02:31:43Z | [
"python",
"django",
"internationalization",
"translation"
] | I have a medium sized Django project, (running on AppEngine if it makes any difference), and have all the strings living in .po files like they should.
I'm seeing strange behavior where certain strings just don't translate. They show up in the .po file when I run make\_messages, with the correct file locations marked where my {% trans %} tags are. The translations are in place and look correct compared to other strings on either side of them. But when I display the page in question, about 1/4 of the strings simply don't translate.
Digging into the relevant generated .mo file, I don't see either the msgid or the msgstr present.
Has anybody seen anything similar to this? Any idea what might be happening?
* trans tags look correct
* .po files look correct
* no errors during compile\_messages | The fuzzy marker is added to the .po file by makemessages. When you have a new string (with no translations), it looks for similar strings, and includes them as the translation, with the fuzzy marker. This means, this is a crude match, so don't display it to the user, but it could be a good start for the human translator.
It isn't a Django behavior, it comes from the gettext facility. |
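If you want those entries compiled anyway, gettext's `msgattrib --clear-fuzzy` is the robust tool. As a rough illustration only, a naive Python pass that drops standalone `#, fuzzy` flag lines might look like this (it deliberately ignores combined flag lines such as `#, fuzzy, python-format`):

```python
def clear_fuzzy(po_text):
    # Drop lines that consist solely of the "#, fuzzy" flag,
    # leaving all other comments and the msgid/msgstr pairs intact.
    return '\n'.join(line for line in po_text.splitlines()
                     if line.strip() != '#, fuzzy')

po = '#: views.py:12\n#, fuzzy\nmsgid "Hello"\nmsgstr "Bonjour"'
cleaned = clear_fuzzy(po)
```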
Using DPAPI with Python? | 463,832 | 13 | 2009-01-21T01:30:50Z | 463,852 | 8 | 2009-01-21T01:37:54Z | [
"python",
"windows",
"security",
"encryption",
"dpapi"
] | Is there a way to use the DPAPI (Data Protection Application Programming Interface) on Windows XP with Python?
I would prefer to use an existing module if there is one that can do it. Unfortunately I haven't been able to find a way with Google or Stack Overflow.
**EDIT:** I've taken the example code pointed to by "dF" and tweaked it into a standalone library which can be simply used at a high level to crypt and decrypt using DPAPI in user mode. Simply call dpapi.cryptData(text\_to\_encrypt) which returns an encrypted string, or the reverse decryptData(encrypted\_data\_string), which returns the plain text. Here's the library:
```
# DPAPI access library
# This file uses code originally created by Crusher Joe:
# http://article.gmane.org/gmane.comp.python.ctypes/420
#
from ctypes import *
from ctypes.wintypes import DWORD
LocalFree = windll.kernel32.LocalFree
memcpy = cdll.msvcrt.memcpy
CryptProtectData = windll.crypt32.CryptProtectData
CryptUnprotectData = windll.crypt32.CryptUnprotectData
CRYPTPROTECT_UI_FORBIDDEN = 0x01
extraEntropy = "cl;ad13 \0al;323kjd #(adl;k$#ajsd"
class DATA_BLOB(Structure):
_fields_ = [("cbData", DWORD), ("pbData", POINTER(c_char))]
def getData(blobOut):
cbData = int(blobOut.cbData)
pbData = blobOut.pbData
buffer = c_buffer(cbData)
memcpy(buffer, pbData, cbData)
LocalFree(pbData);
return buffer.raw
def Win32CryptProtectData(plainText, entropy):
bufferIn = c_buffer(plainText, len(plainText))
blobIn = DATA_BLOB(len(plainText), bufferIn)
bufferEntropy = c_buffer(entropy, len(entropy))
blobEntropy = DATA_BLOB(len(entropy), bufferEntropy)
blobOut = DATA_BLOB()
if CryptProtectData(byref(blobIn), u"python_data", byref(blobEntropy),
None, None, CRYPTPROTECT_UI_FORBIDDEN, byref(blobOut)):
return getData(blobOut)
else:
return ""
def Win32CryptUnprotectData(cipherText, entropy):
bufferIn = c_buffer(cipherText, len(cipherText))
blobIn = DATA_BLOB(len(cipherText), bufferIn)
bufferEntropy = c_buffer(entropy, len(entropy))
blobEntropy = DATA_BLOB(len(entropy), bufferEntropy)
blobOut = DATA_BLOB()
if CryptUnprotectData(byref(blobIn), None, byref(blobEntropy), None, None,
CRYPTPROTECT_UI_FORBIDDEN, byref(blobOut)):
return getData(blobOut)
else:
return ""
def cryptData(text):
return Win32CryptProtectData(text, extraEntropy)
def decryptData(cipher_text):
return Win32CryptUnprotectData(cipher_text, extraEntropy)
``` | I have been using `CryptProtectData` and `CryptUnprotectData` through ctypes, with the code from
<http://article.gmane.org/gmane.comp.python.ctypes/420>
and it has been working well. |
Parallel processing from a command queue on Linux (bash, python, ruby... whatever) | 463,963 | 42 | 2009-01-21T02:54:45Z | 464,007 | 7 | 2009-01-21T03:31:22Z | [
"python",
"ruby",
"bash",
"shell",
"parallel-processing"
] | I have a list/queue of 200 commands that I need to run in a shell on a Linux server.
I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer.
When a process finishes I want the next command to be "popped" from the queue and executed.
Does anyone have code to solve this problem?
**Further elaboration:**
There's 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done.
The actual problem I'm trying to solve is using `imapsync` to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly. | GNU make (and perhaps other implementations as well) has the -j argument, which governs how many jobs it will run at once. When a job completes, make will start another one.
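The queue-of-workers pattern the question describes can also be sketched directly in Python with `threading` and a `Queue` (squaring a number stands in for running one command such as `imapsync`):

```python
import threading
import queue  # named Queue on Python 2

def worker(q, results, lock):
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return                  # queue drained: the thread dies
        value = item * item         # stand-in for running one command
        with lock:
            results.append(value)

q = queue.Queue()
for n in range(200):                # 200 pieces of work
    q.put(n)

results = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(q, results, lock))
           for _ in range(10)]      # at most 10 workers at once
for t in threads:
    t.start()
for t in threads:
    t.join()                        # all threads dead: all work done
```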
Parallel processing from a command queue on Linux (bash, python, ruby... whatever) | 463,963 | 42 | 2009-01-21T02:54:45Z | 464,029 | 41 | 2009-01-21T03:53:00Z | [
"python",
"ruby",
"bash",
"shell",
"parallel-processing"
] | I have a list/queue of 200 commands that I need to run in a shell on a Linux server.
I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer.
When a process finishes I want the next command to be "popped" from the queue and executed.
Does anyone have code to solve this problem?
**Further elaboration:**
There's 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done.
The actual problem I'm trying to solve is using `imapsync` to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly. | On the shell, `xargs` can be used to queue parallel command processing. For example, to always have 3 sleeps running in parallel, each sleeping for 1 second, executing 10 sleeps in total, do
```
echo {1..10} | xargs -d ' ' -n1 -P3 sh -c 'sleep 1s' _
```
And it would sleep for 4 seconds in total. If you have a list of names, and want to pass the names to commands executed, again executing 3 commands in parallel, do
```
cat names | xargs -n1 -P3 process_name
```
Would execute the command `process_name alice`, `process_name bob` and so on. |