| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Shuffle an array with python, randomize array item order with python | 473,973 | 120 | 2009-01-23T18:34:29Z | 19,631,040 | 7 | 2013-10-28T09:23:27Z | [
"python",
"arrays",
"random",
"order"
] | What's the easiest way to shuffle an array with python? | When dealing with regular Python lists, `random.shuffle()` will do the job just as the previous answers show.
But when it comes to an `ndarray` (`numpy.array`), `random.shuffle` seems to break the original `ndarray`. Here is an example:
```
import random
import numpy as np
import numpy.random
a = np.array([1,2,3,4,5,6])
a.shape = (3,2)
print a
random.shuffle(a) # a will definitely be destroyed
print a
```
Just use: `np.random.shuffle(a)`
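As a quick sketch of the difference (assuming NumPy is available): `np.random.shuffle` permutes a 2-D array only along its first axis, so the rows themselves stay intact:

```python
import numpy as np

a = np.arange(6).reshape(3, 2)
rows_before = {tuple(row) for row in a}
np.random.shuffle(a)  # in-place; reorders rows, leaves each row intact
rows_after = {tuple(row) for row in a}
assert rows_before == rows_after
```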
Like `random.shuffle`, `np.random.shuffle` shuffles the array in-place. |
Which GTK widget combination to use for scrollable column of widgets? | 474,034 | 3 | 2009-01-23T18:53:35Z | 474,134 | 7 | 2009-01-23T19:23:01Z | [
"python",
"gtk",
"pygtk",
"widget"
] | I'm working with PyGTK, trying to come up with a combination of widgets that will do the following:
* Let me add an endless number of widgets in a column
* Provide a vertical scrollbar to get to the ones that run off the bottom
* Make the widgets' width adjust to fill available horizontal space when the window is resized
Thanks - I'm new to GTK. | * An endless number of widgets in a column: Sounds like a GtkVBox.
* Vertical scrollbar: Put your VBox in a GtkScrolledWindow.
* Horizontal stretching: This requires setting the appropriate properties for the VBox, ScrolledWindow, and your other widgets. At least in Glade the defaults seem to mostly handle this (you will probably want to change the scrollbar policy of the ScrolledWindow).
Now for the trick. If you just do what I've listed above, the contents of the VBox will try to resize vertically as well as horizontally, and you won't get your scrollbar. The solution is to place your VBox in a GtkViewport.
So the final hierarchy is ScrolledWindow( Viewport( VBox( widgets ) ) ). |
What is the best way to repeatedly execute a function every x seconds in Python? | 474,528 | 68 | 2009-01-23T21:07:05Z | 474,543 | 78 | 2009-01-23T21:09:33Z | [
"python",
"timer"
] | I want to repeatedly execute a function in Python every 60 seconds forever (just like an [NSTimer](http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSTimer_Class/Reference/NSTimer.html) in Objective C). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user.
In [this question about a cron implemented in Python](http://stackoverflow.com/questions/373335/suggestions-for-a-cron-like-scheduler-in-python), the solution appears to effectively just [sleep()](http://docs.python.org/library/time.html#time.sleep) for x seconds. I don't need such advanced functionality so perhaps something like this would work
```
while True:
    # Code executed here
    time.sleep(60)
```
Are there any foreseeable problems with this code? | Use the [sched](http://docs.python.org/library/sched.html) module, which implements a general purpose event scheduler.
```
import sched, time
s = sched.scheduler(time.time, time.sleep)
def do_something(sc):
    print "Doing stuff..."
    # do your stuff
    sc.enter(60, 1, do_something, (sc,))

s.enter(60, 1, do_something, (s,))
s.run()
``` |
What is the best way to repeatedly execute a function every x seconds in Python? | 474,528 | 68 | 2009-01-23T21:07:05Z | 474,570 | 30 | 2009-01-23T21:14:06Z | [
"python",
"timer"
] | I want to repeatedly execute a function in Python every 60 seconds forever (just like an [NSTimer](http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSTimer_Class/Reference/NSTimer.html) in Objective C). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user.
In [this question about a cron implemented in Python](http://stackoverflow.com/questions/373335/suggestions-for-a-cron-like-scheduler-in-python), the solution appears to effectively just [sleep()](http://docs.python.org/library/time.html#time.sleep) for x seconds. I don't need such advanced functionality so perhaps something like this would work
```
while True:
    # Code executed here
    time.sleep(60)
```
Are there any foreseeable problems with this code? | You might want to consider [Twisted](http://twistedmatrix.com/trac/), which is a Python networking library that implements the [Reactor Pattern](http://en.wikipedia.org/wiki/Reactor_pattern).
```
from twisted.internet import task
from twisted.internet import reactor
timeout = 60.0 # Sixty seconds
def doWork():
    # do work here
    pass
l = task.LoopingCall(doWork)
l.start(timeout) # call every sixty seconds
reactor.run()
```
While "while True: sleep(60)" will probably work Twisted probably already implements many of the features that you will eventually need (daemonization, logging or exception handling as pointed out by bobince) and will probably be a more robust solution |
What is the best way to repeatedly execute a function every x seconds in Python? | 474,528 | 68 | 2009-01-23T21:07:05Z | 13,217,744 | 21 | 2012-11-04T10:26:10Z | [
"python",
"timer"
] | I want to repeatedly execute a function in Python every 60 seconds forever (just like an [NSTimer](http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSTimer_Class/Reference/NSTimer.html) in Objective C). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user.
In [this question about a cron implemented in Python](http://stackoverflow.com/questions/373335/suggestions-for-a-cron-like-scheduler-in-python), the solution appears to effectively just [sleep()](http://docs.python.org/library/time.html#time.sleep) for x seconds. I don't need such advanced functionality so perhaps something like this would work
```
while True:
    # Code executed here
    time.sleep(60)
```
Are there any foreseeable problems with this code? | The easiest way, I believe, is:
```
import time
def executeSomething():
    # code here
    time.sleep(60)

while True:
    executeSomething()
```
This way your code is executed, then it waits 60 seconds, then it executes again, waits, executes, and so on...
No need to complicate things :D |
What is the best way to repeatedly execute a function every x seconds in Python? | 474,528 | 68 | 2009-01-23T21:07:05Z | 25,251,804 | 31 | 2014-08-11T20:25:25Z | [
"python",
"timer"
] | I want to repeatedly execute a function in Python every 60 seconds forever (just like an [NSTimer](http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSTimer_Class/Reference/NSTimer.html) in Objective C). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user.
In [this question about a cron implemented in Python](http://stackoverflow.com/questions/373335/suggestions-for-a-cron-like-scheduler-in-python), the solution appears to effectively just [sleep()](http://docs.python.org/library/time.html#time.sleep) for x seconds. I don't need such advanced functionality so perhaps something like this would work
```
while True:
    # Code executed here
    time.sleep(60)
```
Are there any foreseeable problems with this code? | Just lock your time loop to the system clock. Easy.
```
import time
starttime=time.time()
while True:
print "tick"
time.sleep(60.0 - ((time.time() - starttime) % 60.0))
``` |
Python Applications: Can You Secure Your Code Somehow? | 475,216 | 6 | 2009-01-24T00:40:09Z | 475,246 | 12 | 2009-01-24T00:56:39Z | [
"python",
"security",
"reverse-engineering"
] | If there is truly a 'best' way, what *is* the best way to ship a python app and ensure people can't (easily) reverse engineer your algorithms/security/work in general?
If there isn't a 'best' way, what are the different options available?
Background:
I love coding in Python and would love to release more apps with it. One thing that I wonder about is the possibility of people circumventing any licensing code I put in, or being able to just rip off my entire source base. I've heard of Py2Exe and similar applications, but I'm curious if there are 'preferred' ways of doing it, or if this problem is just a fact of life. | Security through obscurity *never* works. If you must use a proprietary license, enforce it through the law, not half-baked obfuscation attempts.
If you're worried about them learning your security (e.g. cryptography) algorithm, the same applies. Real, useful, security algorithms (like AES) are secure even though the algorithm is fully known. |
Python Applications: Can You Secure Your Code Somehow? | 475,216 | 6 | 2009-01-24T00:40:09Z | 475,394 | 7 | 2009-01-24T03:02:04Z | [
"python",
"security",
"reverse-engineering"
] | If there is truly a 'best' way, what *is* the best way to ship a python app and ensure people can't (easily) reverse engineer your algorithms/security/work in general?
If there isn't a 'best' way, what are the different options available?
Background:
I love coding in Python and would love to release more apps with it. One thing that I wonder about is the possibility of people circumventing any licensing code I put in, or being able to just rip off my entire source base. I've heard of Py2Exe and similar applications, but I'm curious if there are 'preferred' ways of doing it, or if this problem is just a fact of life. | Even if you use a compiled language like C# or Java, people can perform reverse engineering if they are motivated and technically competent. Obfuscation is not a reliable protection against this.
You can add a prohibition against reverse-engineering to the end-user license agreement for your software. Most proprietary companies do this. But that doesn't prevent violation; it only gives you legal recourse.
The *best* solution is to offer products and services in which the user's access to read your code does not harm your ability to sell your product or service. Base your business on service provided, or subscription to periodic updates to data, rather than the code itself.
Example: Slashdot actually makes their code for their website available. Does this harm their ability to run their website? No.
Another remedy is to set your price point such that the effort to pirate your code is more costly than simply buying legitimate licenses to use your product. Joel Spolsky has made a recommendation to this effect in his articles and podcasts. |
group by in django | 475,552 | 3 | 2009-01-24T05:15:15Z | 476,156 | 12 | 2009-01-24T15:37:28Z | [
"python",
"django",
"django-models"
How can I create a simple GROUP BY query in the trunk version of Django?
I need something like
```
SELECT name
FROM mytable
GROUP BY name
```
Actually, what I want to do is simply get all entries with distinct names. | If you need all the distinct names, just do this:
```
Foo.objects.values('name').distinct()
```
And you'll get a list of dictionaries, each one with a **name** key. If you need other data, just add more attribute names as parameters to the .values() call. Of course, if you add in attributes that may vary between rows with the same name, you'll break the .distinct().
This won't help if you want to get complete model objects back. But getting distinct names and getting full data are inherently incompatible goals anyway; how do you know *which* row with a given name you want returned in its entirety? If you want to calculate some sort of aggregate data for all the rows with a given name, [aggregation support](http://docs.djangoproject.com/en/dev/topics/db/aggregation/#topics-db-aggregation) was recently added to Django trunk and can take care of that for you. |
Python - why use "self" in a class? | 475,871 | 68 | 2009-01-24T11:13:19Z | 475,873 | 107 | 2009-01-24T11:17:47Z | [
"python",
"oop"
] | How do these 2 classes differ?
```
class A():
x=3
class B():
def __init__(self):
self.x=3
```
Is there any significant difference? | `A.x` is a *class variable*.
`B`'s `self.x` is an *instance variable*.
i.e. `A`'s `x` is shared between instances.
It would be easier to demonstrate the difference with something that can be modified like a list:
```
#!/usr/bin/env python
class A:
    x = []
    def add(self):
        self.x.append(1)

class B:
    def __init__(self):
        self.x = []
    def add(self):
        self.x.append(1)

x = A()
y = A()
x.add()
y.add()
print "A's x:", x.x

x = B()
y = B()
x.add()
y.add()
print "B's x:", x.x
```
Output
> A's x: [1, 1]
> B's x: [1] |
Python - why use "self" in a class? | 475,871 | 68 | 2009-01-24T11:13:19Z | 475,905 | 17 | 2009-01-24T11:41:32Z | [
"python",
"oop"
] | How do these 2 classes differ?
```
class A():
x=3
class B():
def __init__(self):
self.x=3
```
Is there any significant difference? | A.x is a class variable, and will be shared across all instances of A, unless specifically overridden within an instance.
B.x is an instance variable, and each instance of B has its own version of it.
I hope the following Python example can clarify:
```
>>> class Foo():
...     i = 3
...     def bar(self):
...         print 'Foo.i is', Foo.i
...         print 'self.i is', self.i
...
>>> f = Foo() # Create an instance of the Foo class
>>> f.bar()
Foo.i is 3
self.i is 3
>>> Foo.i = 5 # Change the global value of Foo.i over all instances
>>> f.bar()
Foo.i is 5
self.i is 5
>>> f.i = 3 # Override this instance's definition of i
>>> f.bar()
Foo.i is 5
self.i is 3
``` |
Python - why use "self" in a class? | 475,871 | 68 | 2009-01-24T11:13:19Z | 475,919 | 42 | 2009-01-24T11:57:59Z | [
"python",
"oop"
] | How do these 2 classes differ?
```
class A():
x=3
class B():
def __init__(self):
self.x=3
```
Is there any significant difference? | Just as a side note: `self` is actually just a randomly chosen word, that everyone uses, but you could also use `this`, `foo`, or `myself` or anything else you want, it's just the first parameter of every non static method for a class. This means that the word `self` is not a language construct but just a name:
```
>>> class A:
...     def __init__(s):
...         s.bla = 2
...
>>>
>>> a = A()
>>> a.bla
2
``` |
Python - why use "self" in a class? | 475,871 | 68 | 2009-01-24T11:13:19Z | 11,342,486 | 13 | 2012-07-05T10:30:27Z | [
"python",
"oop"
] | How do these 2 classes differ?
```
class A():
x=3
class B():
def __init__(self):
self.x=3
```
Is there any significant difference? | I used to explain it with this example
```
# By TMOTTM
class Machine:
    # Class variable: counts how many machines have been created.
    # The value is the same for all objects of this class.
    counter = 0

    def __init__(self):
        # Notice: no 'self'.
        Machine.counter += 1
        # Instance variable.
        # Different for every object of the class.
        self.id = Machine.counter

if __name__ == '__main__':
    machine1 = Machine()
    machine2 = Machine()
    machine3 = Machine()

    # The value is different for all objects.
    print 'machine1.id', machine1.id
    print 'machine2.id', machine2.id
    print 'machine3.id', machine3.id

    # The value is the same for all objects.
    print 'machine1.counter', machine1.counter
    print 'machine2.counter', machine2.counter
    print 'machine3.counter', machine3.counter
```
The output will then be
```
machine1.id 1
machine2.id 2
machine3.id 3
machine1.counter 3
machine2.counter 3
machine3.counter 3
``` |
Order a QuerySet by aggregate field value | 476,017 | 24 | 2009-01-24T13:26:58Z | 476,024 | 9 | 2009-01-24T13:34:36Z | [
"python",
"database",
"django",
"order"
] | Let's say I have the following model:
```
class Contest:
    title = models.CharField( max_length = 200 )
    description = models.TextField()

class Image:
    title = models.CharField( max_length = 200 )
    description = models.TextField()
    contest = models.ForeignKey( Contest )
    user = models.ForeignKey( User )

    def score( self ):
        return self.vote_set.all().aggregate( models.Sum( 'value' ) )[ 'value__sum' ]

class Vote:
    value = models.SmallIntegerField()
    user = models.ForeignKey( User )
    image = models.ForeignKey( Image )
```
The users of a site can contribute their images to several contests. Then other users can vote them up or down.
Everything works fine, but now I want to display a page on which users can see all contributions to a certain contest. The images shall be ordered by their score.
Therefore I tried the following:
```
Contest.objects.get( pk = id ).image_set.order_by( 'score' )
```
As I feared it doesn't work since 'score' is no database field that could be used in queries. | You can write your own sort in Python very simply.
```
def getScore(anObject):
    return anObject.score()

objects = list(Contest.objects.get(pk=id).image_set.all())
objects.sort(key=getScore)
```
This works nicely because we sorted the list, which we're going to provide to the template. |
Order a QuerySet by aggregate field value | 476,017 | 24 | 2009-01-24T13:26:58Z | 476,033 | 44 | 2009-01-24T13:44:37Z | [
"python",
"database",
"django",
"order"
] | Let's say I have the following model:
```
class Contest:
    title = models.CharField( max_length = 200 )
    description = models.TextField()

class Image:
    title = models.CharField( max_length = 200 )
    description = models.TextField()
    contest = models.ForeignKey( Contest )
    user = models.ForeignKey( User )

    def score( self ):
        return self.vote_set.all().aggregate( models.Sum( 'value' ) )[ 'value__sum' ]

class Vote:
    value = models.SmallIntegerField()
    user = models.ForeignKey( User )
    image = models.ForeignKey( Image )
```
The users of a site can contribute their images to several contests. Then other users can vote them up or down.
Everything works fine, but now I want to display a page on which users can see all contributions to a certain contest. The images shall be ordered by their score.
Therefore I tried the following:
```
Contest.objects.get( pk = id ).image_set.order_by( 'score' )
```
As I feared it doesn't work since 'score' is no database field that could be used in queries. | Oh, of course I forget about new aggregation support in Django and its `annotate` functionality.
So query may look like this:
```
from django.db.models import Sum

Contest.objects.get(pk=id).image_set.annotate(score=Sum('vote__value')).order_by( 'score' )
``` |
Detect duplicate MP3 files with different bitrates and/or different ID3 tags? | 476,227 | 12 | 2009-01-24T16:14:12Z | 476,382 | 13 | 2009-01-24T17:39:41Z | [
"python",
"file",
"mp3",
"duplicates",
"id3"
] | How could I detect (preferably with Python) duplicate MP3 files that can be encoded with different bitrates (but they are the same song) and ID3 tags that can be incorrect?
I know I can do an [MD5](http://en.wikipedia.org/wiki/MD5) checksum of the files' content, but that won't work for different bitrates. And I don't know if ID3 tags have influence in generating the MD5 checksum. Should I re-encode MP3 files that have a different bitrate and then do the checksum? What do you recommend? | This is the exact same question that people at the old AudioScrobbler, and currently at [MusicBrainz](http://musicbrainz.org/), have worked on for a long time. For the time being, the Python project that can aid in your quest is [Picard](http://musicbrainz.org/doc/PicardDownload), which will tag audio files (not only MPEG-1 Layer 3 files) with a GUID (actually, several of them), and from then on, matching the tags is quite simple.
If you prefer to do it as a project of your own, [libofa](http://code.google.com/p/musicip-libofa/) might be of help. |
Django "Did you mean?" query | 476,394 | 3 | 2009-01-24T17:47:30Z | 476,420 | 9 | 2009-01-24T18:07:09Z | [
"python",
"django",
"spell-checking"
I am writing a fairly simple Django application where users can enter string queries. The application will then search through the database for this string.
```
Entry.objects.filter(headline__contains=query)
```
This query is pretty straightforward, but not really helpful to someone who isn't 100% sure what they are looking for. So I expanded the search.
```
from django.utils import stopwords
results = Entry.objects.filter(headline__contains=query)
if not results:
    query = strip_stopwords(query)
    for q in query.split(' '):
        results += Entry.objects.filter(headline__contains=q)
```
I would like to add some additional functionality to this: searching for misspelled words, plurals, common homophones (words that sound the same but are spelled differently), etc. I was just wondering if any of these things were built into Django's query language. It isn't important enough for me to write a huge algorithm for; I am really just looking for something built in.
Thanks in advance for all the answers. | You could try using python's [difflib](http://docs.python.org/library/difflib) module.
```
>>> from difflib import get_close_matches
>>> get_close_matches('appel', ['ape', 'apple', 'peach', 'puppy'])
['apple', 'ape']
>>> import keyword
>>> get_close_matches('wheel', keyword.kwlist)
['while']
>>> get_close_matches('apple', keyword.kwlist)
[]
>>> get_close_matches('accept', keyword.kwlist)
['except']
```
The problem is that to use difflib you must build a list of words from the database, which can be expensive. You could cache the list of words and only rebuild it once in a while.
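A minimal sketch of that caching idea (the class name, TTL, and loader are illustrative; in practice the loader would run a database query):

```python
import time
from difflib import get_close_matches

class CachedWordList:
    """Rebuild the word list via `loader` at most once per `ttl` seconds."""
    def __init__(self, loader, ttl=300):
        self.loader = loader          # callable returning the word list
        self.ttl = ttl
        self._words = None
        self._loaded_at = 0.0

    def words(self):
        now = time.time()
        if self._words is None or now - self._loaded_at > self.ttl:
            self._words = self.loader()
            self._loaded_at = now
        return self._words

    def suggest(self, term):
        return get_close_matches(term, self.words())

cache = CachedWordList(lambda: ['ape', 'apple', 'peach', 'puppy'])
assert cache.suggest('appel') == ['apple', 'ape']
```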
Some database systems support a search method to do what you want, like PostgreSQL's [`fuzzystrmatch`](http://www.postgresql.org/docs/8.3/static/fuzzystrmatch.html) module. If that is your case you could try calling it.
---
**edit:**
For your new "requirement", well, you are out of luck. No, there is nothing built in inside django's query language. |
Resolving a relative url path to its absolute path | 476,511 | 43 | 2009-01-24T19:09:59Z | 476,521 | 70 | 2009-01-24T19:20:19Z | [
"python",
"url",
"path"
] | Is there a library in python that works like this?
```
>>> resolvePath("http://www.asite.com/folder/currentpage.html", "anotherpage.html")
'http://www.asite.com/folder/anotherpage.html'
>>> resolvePath("http://www.asite.com/folder/currentpage.html", "folder2/anotherpage.html")
'http://www.asite.com/folder/folder2/anotherpage.html'
>>> resolvePath("http://www.asite.com/folder/currentpage.html", "/folder3/anotherpage.html")
'http://www.asite.com/folder3/anotherpage.html'
>>> resolvePath("http://www.asite.com/folder/currentpage.html", "../finalpage.html")
'http://www.asite.com/finalpage.html'
``` | Yes, there is [`urlparse.urljoin`](https://docs.python.org/2/library/urlparse.html#urlparse.urljoin), or [`urllib.parse.urljoin`](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urljoin) for Python 3.
```
>>> try: from urlparse import urljoin # Python2
... except ImportError: from urllib.parse import urljoin # Python3
...
>>> urljoin("http://www.asite.com/folder/currentpage.html", "anotherpage.html")
'http://www.asite.com/folder/anotherpage.html'
>>> urljoin("http://www.asite.com/folder/currentpage.html", "folder2/anotherpage.html")
'http://www.asite.com/folder/folder2/anotherpage.html'
>>> urljoin("http://www.asite.com/folder/currentpage.html", "/folder3/anotherpage.html")
'http://www.asite.com/folder3/anotherpage.html'
>>> urljoin("http://www.asite.com/folder/currentpage.html", "../finalpage.html")
'http://www.asite.com/finalpage.html'
```
for copy-and-paste:
```
try:
    from urlparse import urljoin  # Python 2
except ImportError:
    from urllib.parse import urljoin  # Python 3
``` |
Is anyone using meta-meta-classes / meta-meta-meta-classes in Python/ other languages? | 476,586 | 13 | 2009-01-24T20:10:48Z | 476,641 | 18 | 2009-01-24T21:00:00Z | [
"python",
"metaprogramming",
"design-patterns",
"factory",
"metaclass"
] | I recently discovered metaclasses in python.
Basically a metaclass in python is a class that creates a class. There are many useful reasons why you would want to do this - any kind of class initialisation for example. Registering classes on factories, complex validation of attributes, altering how inheritance works, etc. All of this becomes not only possible but simple.
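A minimal sketch of the first use mentioned, registering classes on a factory, in modern Python 3 syntax (all names here are illustrative):

```python
class RegistryMeta(type):
    """Metaclass that records every class it creates."""
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        mcls.registry[name] = cls
        return cls

class Plugin(metaclass=RegistryMeta):
    pass

class CsvPlugin(Plugin):  # subclasses inherit the metaclass, so they register too
    pass

assert set(RegistryMeta.registry) == {'Plugin', 'CsvPlugin'}
```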
But in python, metaclasses are also plain classes. So, I started wondering if the abstraction could usefully go higher, and it seems to me that it can and that:
* a metaclass corresponds to or implements a role in a pattern (as in GOF pattern languages).
* a meta-metaclass is the pattern itself (if we allow it to create tuples of classes representing abstract roles, rather than just a single class)
* a meta-meta-metaclass is a *pattern factory*, which corresponds to the GOF pattern groupings, e.g. Creational, Structural, Behavioural. A factory where you could describe a case of a certain type of problem and it would give you a set of classes that solved it.
* a meta-meta-meta-metaclass (as far as I could go), is a *pattern factory factory*, a factory to which you could perhaps describe the type of your problem and it would give you a pattern factory to ask.
I have found some stuff about this online, but mostly not very useful. One problem is that different languages define metaclasses slightly differently.
Has anyone else used metaclasses like this in python/elsewhere, or seen this used in the wild, or thought about it? What are the analogues in other languages? E.g. in C++ how deep can the template recursion go?
I'd very much like to research it further. | This reminds me of the eternal quest some people seem to be on to make a "generic implementation of a pattern." Like a factory that can create any object ([including another factory](http://discuss.joelonsoftware.com/default.asp?joel.3.219431.12)), or a general-purpose dependency injection framework that is far more complex to manage than simply writing code that actually *does* something.
I had to deal with people intent on abstraction to the point of navel-gazing when I was managing the Zend Framework project. I turned down a bunch of proposals to create components that didn't do anything, they were just magical implementations of GoF patterns, as though the pattern were a goal in itself, instead of a means to a goal.
There's a point of diminishing returns for abstraction. Some abstraction is great, but eventually you need to write code that does something useful.
Otherwise it's just [turtles all the way down](http://en.wikipedia.org/wiki/Turtles_all_the_way_down). |
Resources for TDD aimed at Python Web Development | 476,668 | 15 | 2009-01-24T21:19:07Z | 476,858 | 13 | 2009-01-24T23:08:13Z | [
"python",
"tdd",
"testing"
I am a hacker and not a full-time programmer, but am looking to start my own full application development experiment. I apologize if I am missing something easy here. I am looking for recommendations for books, articles, sites, etc. for learning more about test-driven development, specifically compatible with or aimed at Python web application programming. I understand that Python has built-in tools to assist. What would be the best way to learn about these outside of RTFM? I have searched on StackOverflow and found Kent Beck's and David Astels' book on the subject. I have also bookmarked the Wikipedia article as it has many of these types of resources.
Are there any particular ones you would recommend for this language/application? | I wrote a series of blogs on [TDD in Django](http://blog.cerris.com/category/django-tdd/) that covers some TDD with the [nose testing framework](http://somethingaboutorange.com/mrl/projects/nose/).
There are a lot of free online resources out there for learning about TDD:
* [The c2 wiki article](http://c2.com/cgi-bin/wiki?TestDrivenDevelopment) gives good background on general TDD philosophy.
* [The onlamp article](http://www.onlamp.com/pub/a/python/2004/12/02/tdd_pyunit.html) is a simple introduction.
* Here's a presentation on [TDD with game development in pygame](http://powertwenty.com/kpd/blog/index.php/python/test_driven_development_in_python) that really helped me understand TDD.
For testing web applications, test first or otherwise, I'd recommend [twill](http://twill.idyll.org/) and [selenium](http://seleniumhq.org/) as tools to use. |
Resources for TDD aimed at Python Web Development | 476,668 | 15 | 2009-01-24T21:19:07Z | 10,648,767 | 8 | 2012-05-18T07:50:46Z | [
"python",
"tdd",
"testing"
] | I am a hacker and not a full-time programmer, but am looking to start my own full application development experiment. I apologize if I am missing something easy here. I am looking for recommendations for books, articles, sites, etc. for learning more about test-driven development, specifically compatible with or aimed at Python web application programming. I understand that Python has built-in tools to assist. What would be the best way to learn about these outside of RTFM? I have searched on StackOverflow and found Kent Beck's and David Astels' book on the subject. I have also bookmarked the Wikipedia article as it has many of these types of resources.
Are there any particular ones you would recommend for this language/application? | Can I plug my own tutorial, which covers the materials from the official Django tutorial, but uses full TDD all the way - including "proper" functional/acceptance tests using the Selenium browser-automation tool... <http://tdd-django-tutorial.com>
[update 2014-01] I now have a book, about to be published by O'Reilly, that covers all the stuff from the tutorial and much more. The full thing is available online (free) at <http://www.obeythetestinggoat.com> |
python string join performance | 476,772 | 10 | 2009-01-24T22:26:13Z | 476,788 | 12 | 2009-01-24T22:32:32Z | [
"python",
"performance",
"string"
There are a lot of articles around the web concerning Python performance. The first thing you read is that concatenating strings should not be done using '+': avoid s1+s2+s3 and use str.join instead.
I tried the following: concatenating two strings as part of a directory path, using three approaches:
1. '+' which i should not do
2. str.join
3. os.path.join
Here is my code:
```
import os, time

s1 = '/part/one/of/dir'
s2 = 'part/two/of/dir'
N = 10000

t = time.clock()
for i in xrange(N):
    s = s1 + os.sep + s2
print time.clock() - t

t = time.clock()
for i in xrange(N):
    s = os.sep.join((s1, s2))
print time.clock() - t

t = time.clock()
for i in xrange(N):
    s = os.path.join(s1, s2)
print time.clock() - t
```
Here are the results (Python 2.5, Windows XP):
```
0.0182201927899
0.0262544541275
0.120238186697
```
Shouldn't it be exactly the other way round? | Most of the performance issues with string concatenation are ones of asymptotic performance, so the differences become most significant when you are concatenating many long strings. In your sample, you are performing the same concatenation many times. You aren't building up any long string, and it may be that the Python interpreter is optimizing your loops. This would explain why the time increases when you move to str.join and path.join - they are more complex functions that are not as easily reduced. (os.path.join does a lot of checking on the strings to see if they need to be rewritten in any way before they are concatenated. This sacrifices some performance for the sake of portability.)
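A sketch of the asymptotic case described above (building one long string from many pieces), where the usual `str.join` advice does apply; the number of pieces is arbitrary:

```python
parts = ['part%d' % i for i in range(1000)]

def concat_plus():
    # Naive repeated concatenation: each step may copy the growing string,
    # giving quadratic behavior in the worst case.
    s = ''
    for p in parts:
        s = s + p
    return s

def concat_join():
    # One pass over the pieces, a single final allocation.
    return ''.join(parts)

assert concat_plus() == concat_join()
# Compare timeit.timeit(concat_plus, number=100) with concat_join to watch
# the gap grow as len(parts) increases.
```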
By the way, since file paths are not usually very long, you almost certainly want to use os.path.join for the sake of the portability. If the performance of the concatenation is a problem, you're doing something very odd with your filesystem. |
Splitting a large XML file in Python | 476,949 | 9 | 2009-01-25T00:22:01Z | 476,982 | 9 | 2009-01-25T00:49:08Z | [
"python",
"xml"
I'm looking to split a huge XML file into smaller bits. I'd like to scan through the file looking for a specific tag, grab all the info between its opening and closing tags, save that into a file, then continue on through the rest of the file.
My issue is trying to find a clean way to note the start and end of the tags, so that I can grab the text inside as I scan through the file with "for line in f"
I'd rather not use sentinel variables. Is there a pythonic way to get this done?
The file is too big to read into memory. | There are two common ways to handle XML data.
One is called DOM, which stands for Document Object Model. This style of XML parsing is probably what you have seen when looking at documentation, because it reads the entire XML into memory to create the object model.
The second is called SAX, which is a streaming method. The parser starts reading the XML and sends signals to your code about certain events, e.g. when a new start tag is found.
So SAX is clearly what you need for your situation. Sax parsers can be found in the python library under [xml.sax](http://www.python.org/doc/2.5.2/lib/module-xml.sax.html) and [xml.parsers.expat](http://www.python.org/doc/2.5.2/lib/module-xml.parsers.expat.html). |
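A minimal SAX sketch of the "grab everything inside a given tag" idea (the tag name `item` and the XML structure are my own assumptions for illustration):

```python
import xml.sax
from io import BytesIO

class TagCollector(xml.sax.ContentHandler):
    """Collect the text inside every <item> element while streaming."""
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.items = []
        self._buffer = None  # None means we are outside an <item>

    def startElement(self, name, attrs):
        if name == 'item':
            self._buffer = []

    def characters(self, content):
        # May fire several times per element, so accumulate pieces
        if self._buffer is not None:
            self._buffer.append(content)

    def endElement(self, name):
        if name == 'item':
            self.items.append(''.join(self._buffer))
            self._buffer = None

handler = TagCollector()
# parse() also accepts a filename, so a huge file is streamed, never loaded whole
xml.sax.parse(BytesIO(b'<root><item>one</item><item>two</item></root>'), handler)
print(handler.items)
```

In a real splitter, `endElement` would write each collected chunk out to its own file instead of keeping it in a list.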
Using a java library from python | 476,968 | 25 | 2009-01-25T00:41:22Z | 476,977 | 8 | 2009-01-25T00:46:46Z | [
"java",
"python",
"jython"
] | I have a python app and java app. The python app generates input for the java app and invokes it on the command line.
I'm sure there must be a more elegant solution to this; just like using JNI to invoke C code from Java.
Any pointers?
(FYI I'm v. new to Python)
**Clarification** (at the cost of a long question: apologies)
The py app (which I don't own) takes user input in the form of a number of configuration files. It then interprets these and farms work off to a number of (hidden) tools via a plugin mechanism. I'm looking to add support for the functionality provided by the legacy Java app.
So it doesn't make sense to call the python app from the java app and I can't run the py app in a jython environment (on the JVM).
Since there is no obvious mechanism for this I think the simple CL invocation is the best solution. | Take a look at [Jython](http://www.jython.org/). It's kind of like JNI, but replace C with Python, i.e. you can call Python from Java and vice versa. It's not totally clear what you're trying to do or why your current approach isn't what you want. |
Using a java library from python | 476,968 | 25 | 2009-01-25T00:41:22Z | 3,793,496 | 37 | 2010-09-25T10:51:57Z | [
"java",
"python",
"jython"
] | I have a python app and java app. The python app generates input for the java app and invokes it on the command line.
I'm sure there must be a more elegant solution to this; just like using JNI to invoke C code from Java.
Any pointers?
(FYI I'm v. new to Python)
**Clarification** (at the cost of a long question: apologies)
The py app (which I don't own) takes user input in the form of a number of configuration files. It then interprets these and farms work off to a number of (hidden) tools via a plugin mechanism. I'm looking to add support for the functionality provided by the legacy Java app.
So it doesn't make sense to call the python app from the java app and I can't run the py app in a jython environment (on the JVM).
Since there is no obvious mechanism for this I think the simple CL invocation is the best solution. | Sorry to ressurect the thread, but there was no accepted answer...
You could also use [Py4J](http://py4j.sourceforge.net/index.html). There is an example on the frontpage and lots of documentation, but essentially, you just call Java methods from your python code as if they were python methods:
```
>>> from py4j.java_gateway import JavaGateway
>>> gateway = JavaGateway() # connect to the JVM
>>> java_object = gateway.jvm.mypackage.MyClass() # invoke constructor
>>> other_object = java_object.doThat()
>>> other_object.doThis(1,'abc')
>>> gateway.jvm.java.lang.System.out.println('Hello World!') # call a static method
```
As opposed to Jython, Py4J runs in the Python VM so it is always "up to date" with the latest version of Python and you can use libraries that do not run well on Jython (e.g., lxml). The communication is done through sockets instead of JNI.
*Disclaimer: I am the author of Py4J* |
How to read Unicode input and compare Unicode strings in Python? | 477,061 | 26 | 2009-01-25T02:19:21Z | 477,084 | 14 | 2009-01-25T02:38:34Z | [
"python",
"unicode"
] | I work in Python and would like to read user input (from command line) in Unicode format, ie a Unicode equivalent of `raw_input`?
Also, I would like to test Unicode strings for equality and it looks like a standard `==` does not work.
Thank you for your help! | It should work. `raw_input` returns a byte string, which you must decode using the correct encoding to get your `unicode` object. For example, the following works for me under Python 2.5 / Terminal.app / OSX:
```
>>> bytes = raw_input()
日本語 Ελληνικά
>>> bytes
'\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e \xce\x95\xce\xbb\xce\xbb\xce\xb7\xce\xbd\xce\xb9\xce\xba\xce\xac'
>>> uni = bytes.decode('utf-8') # substitute the encoding of your terminal if it's not utf-8
>>> uni
u'\u65e5\u672c\u8a9e \u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac'
>>> print uni
日本語 Ελληνικά
```
As for comparing unicode strings: can you post an example where the comparison doesn't work? |
How to read Unicode input and compare Unicode strings in Python? | 477,061 | 26 | 2009-01-25T02:19:21Z | 477,496 | 48 | 2009-01-25T10:25:54Z | [
"python",
"unicode"
] | I work in Python and would like to read user input (from command line) in Unicode format, ie a Unicode equivalent of `raw_input`?
Also, I would like to test Unicode strings for equality and it looks like a standard `==` does not work.
Thank you for your help! | [`raw_input()`](https://docs.python.org/2/library/functions.html#raw_input "raw_input()") returns strings as encoded by the OS or UI facilities. The difficulty is knowing which encoding that is. You might attempt the following:
```
import sys, locale
text= raw_input().decode(sys.stdin.encoding or locale.getpreferredencoding(True))
```
which should work correctly in most of the cases.
We need more data about not working Unicode comparisons in order to help you. However, it might be a matter of normalization. Consider the following:
```
>>> a1= u'\xeatre'
>>> a2= u'e\u0302tre'
```
`a1` and `a2` are equivalent but not equal:
```
>>> print a1, a2
être être
>>> print a1 == a2
False
```
So you might want to use the [`unicodedata.normalize()`](https://docs.python.org/2/library/unicodedata.html#unicodedata.normalize "unicodedata.normalize()") method:
```
>>> import unicodedata as ud
>>> ud.normalize('NFC', a1)
u'\xeatre'
>>> ud.normalize('NFC', a2)
u'\xeatre'
>>> ud.normalize('NFC', a1) == ud.normalize('NFC', a2)
True
```
If you give us more information, we might be able to help you more, though. |
Python import coding style | 477,096 | 46 | 2009-01-25T03:06:09Z | 477,103 | 18 | 2009-01-25T03:19:08Z | [
"python",
"coding-style"
] | I've discovered a new pattern. Is this pattern well known or what is the opinion about it?
Basically, I have a hard time scrubbing up and down source files to figure out what module imports are available and so forth, so now, instead of
```
import foo
from bar.baz import quux
def myFunction():
foo.this.that(quux)
```
I move all my imports into the function where they're actually used, like this:
```
def myFunction():
import foo
from bar.baz import quux
foo.this.that(quux)
```
This does a few things. First, I rarely accidentally pollute my modules with the contents of other modules. I could set the `__all__` variable for the module, but then I'd have to update it as the module evolves, and that doesn't help the namespace pollution for code that actually lives in the module.
Second, I rarely end up with a litany of imports at the top of my modules, half or more of which I no longer need because I've refactored it. Finally, I find this pattern MUCH easier to read, since every referenced name is right there in the function body. | A few problems with this approach:
* It's not immediately obvious when opening the file which modules it depends on.
* It will confuse programs that have to analyze dependencies, such as `py2exe`, `py2app` etc.
* What about modules that you use in many functions? You will either end up with a lot of redundant imports or you'll have to have some at the top of the file and some inside functions.
So... the preferred way is to put all imports at the top of the file. I've found that if my imports get hard to keep track of, it usually means I have too much code, and I'd be better off splitting it into two or more files.
Some situations where I *have* found imports inside functions to be useful:
* To deal with circular dependencies (if you really really can't avoid them)
* Platform specific code
Also: putting imports inside each function is actually *not* appreciably slower than at the top of the file. The first time each module is loaded it is put into `sys.modules`, and each subsequent import costs only the time to look up the module, which is fairly fast (it is not reloaded). |
Python import coding style | 477,096 | 46 | 2009-01-25T03:06:09Z | 477,107 | 48 | 2009-01-25T03:24:50Z | [
"python",
"coding-style"
] | I've discovered a new pattern. Is this pattern well known or what is the opinion about it?
Basically, I have a hard time scrubbing up and down source files to figure out what module imports are available and so forth, so now, instead of
```
import foo
from bar.baz import quux
def myFunction():
foo.this.that(quux)
```
I move all my imports into the function where they're actually used, like this:
```
def myFunction():
import foo
from bar.baz import quux
foo.this.that(quux)
```
This does a few things. First, I rarely accidentally pollute my modules with the contents of other modules. I could set the `__all__` variable for the module, but then I'd have to update it as the module evolves, and that doesn't help the namespace pollution for code that actually lives in the module.
Second, I rarely end up with a litany of imports at the top of my modules, half or more of which I no longer need because I've refactored it. Finally, I find this pattern MUCH easier to read, since every referenced name is right there in the function body. | This does have a few disadvantages.
# Testing
On the off chance you want to test your module through runtime modification, it may make it more difficult. Instead of doing
```
import mymodule
mymodule.othermodule = module_stub
```
You'll have to do
```
import othermodule
othermodule.foo = foo_stub
```
This means that you'll have to patch the othermodule globally, as opposed to just change what the reference in mymodule points to.
# Dependency Tracking
This makes it non-obvious what modules your module depends on. This is especially irritating if you use many third party libraries or are re-organizing code.
I had to maintain some legacy code that used imports inline all over the place; it made the code extremely difficult to refactor or repackage.
# Notes On Performance
Because of the way python caches modules, there isn't a performance hit. In fact, since the module is in the local namespace, there is a slight performance benefit to importing modules in a function.
## Top Import
```
import random
def f():
L = []
for i in xrange(1000):
L.append(random.random())
for i in xrange(10000):
f()
$ time python test.py
real 0m1.569s
user 0m1.560s
sys 0m0.010s
```
## Import in Function Body
```
def f():
import random
L = []
for i in xrange(1000):
L.append(random.random())
for i in xrange(10000):
f()
$ time python test2.py
real 0m1.385s
user 0m1.380s
sys 0m0.000s
``` |
Python import coding style | 477,096 | 46 | 2009-01-25T03:06:09Z | 477,116 | 8 | 2009-01-25T03:32:28Z | [
"python",
"coding-style"
] | I've discovered a new pattern. Is this pattern well known or what is the opinion about it?
Basically, I have a hard time scrubbing up and down source files to figure out what module imports are available and so forth, so now, instead of
```
import foo
from bar.baz import quux
def myFunction():
foo.this.that(quux)
```
I move all my imports into the function where they're actually used, like this:
```
def myFunction():
import foo
from bar.baz import quux
foo.this.that(quux)
```
This does a few things. First, I rarely accidentally pollute my modules with the contents of other modules. I could set the `__all__` variable for the module, but then I'd have to update it as the module evolves, and that doesn't help the namespace pollution for code that actually lives in the module.
Second, I rarely end up with a litany of imports at the top of my modules, half or more of which I no longer need because I've refactored it. Finally, I find this pattern MUCH easier to read, since every referenced name is right there in the function body. | Another useful thing to note is that the `from module import *` syntax inside of a function has been removed in Python 3.0.
There is a brief mention of it under "Removed Syntax" here:
<http://docs.python.org/3.0/whatsnew/3.0.html> |
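In Python 3 this removal is enforced at compile time, which is easy to confirm (a small demonstration of my own, not from the original answer):

```python
# Compiling a function body containing `from module import *` fails in Python 3
source = "def f():\n    from os.path import *\n"
try:
    compile(source, '<demo>', 'exec')
    outcome = 'compiled'
except SyntaxError as exc:
    outcome = 'SyntaxError: %s' % exc.msg
print(outcome)
```

The same source compiles (with a warning) under Python 2, which is why code relying on it breaks only after porting.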
Python import coding style | 477,096 | 46 | 2009-01-25T03:06:09Z | 4,789,963 | 63 | 2011-01-25T04:23:27Z | [
"python",
"coding-style"
] | I've discovered a new pattern. Is this pattern well known or what is the opinion about it?
Basically, I have a hard time scrubbing up and down source files to figure out what module imports are available and so forth, so now, instead of
```
import foo
from bar.baz import quux
def myFunction():
foo.this.that(quux)
```
I move all my imports into the function where they're actually used, like this:
```
def myFunction():
import foo
from bar.baz import quux
foo.this.that(quux)
```
This does a few things. First, I rarely accidentally pollute my modules with the contents of other modules. I could set the `__all__` variable for the module, but then I'd have to update it as the module evolves, and that doesn't help the namespace pollution for code that actually lives in the module.
Second, I rarely end up with a litany of imports at the top of my modules, half or more of which I no longer need because I've refactored it. Finally, I find this pattern MUCH easier to read, since every referenced name is right there in the function body. | The (previously) [top-voted answer](http://stackoverflow.com/questions/477096/python-import-coding-style/477107#477107) to this question is nicely formatted but absolutely wrong about performance. Let me demonstrate
# Performance
## Top Import
```
import random
def f():
L = []
for i in xrange(1000):
L.append(random.random())
for i in xrange(1000):
f()
$ time python import.py
real 0m0.721s
user 0m0.412s
sys 0m0.020s
```
## Import in Function Body
```
def f():
import random
L = []
for i in xrange(1000):
L.append(random.random())
for i in xrange(1000):
f()
$ time python import2.py
real 0m0.661s
user 0m0.404s
sys 0m0.008s
```
As you can see, it can be **more** efficient to import the module in the function. The reason for this is simple. It moves the reference from a global reference to a local reference. This means that, for CPython at least, the compiler will emit `LOAD_FAST` instructions instead of `LOAD_GLOBAL` instructions. These are, as the name implies, faster. The other answerer artificially inflated the performance hit of looking in `sys.modules` by *importing on every single iteration of the loop*.
As a rule, it's best to import at the top but performance is *not* the reason if you are accessing the module a lot of times. The reasons are that one can keep track of what a module depends on more easily and that doing so is consistent with most of the rest of the Python universe. |
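The `LOAD_FAST` / `LOAD_GLOBAL` difference the answer describes is easy to see with the `dis` module (my own demonstration; `dis.get_instructions` needs Python 3.4+, and newer interpreters may emit specialized `LOAD_FAST` variants):

```python
import dis
import random  # used by uses_global below

def uses_global():
    return random.random()      # `random` is resolved with LOAD_GLOBAL

def uses_local():
    import random               # binds the module to a local name
    return random.random()      # `random` is now resolved with LOAD_FAST

global_ops = [i.opname for i in dis.get_instructions(uses_global)]
local_ops = [i.opname for i in dis.get_instructions(uses_local)]
print(any(op == 'LOAD_GLOBAL' for op in global_ops))        # True
print(any(op.startswith('LOAD_FAST') for op in local_ops))  # True
```

The import statement itself still runs on every call of `uses_local`, but after the first time it is only a `sys.modules` lookup plus the local binding.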
How to list the files in a static directory? | 477,135 | 8 | 2009-01-25T03:59:08Z | 477,887 | 7 | 2009-01-25T16:23:31Z | [
"python",
"google-app-engine"
] | I am playing with Google App Engine and Python and I cannot list the files of a static directory. Below is the code I currently use.
**app.yaml**
```
- url: /data
static_dir: data
```
**Python code to list the files**
```
myFiles = []
for root, dirs, files in os.walk(os.path.join(os.path.dirname(__file__), 'data/') ):
for name in files:
full_name = os.path.join(root, name)
myFiles.append('%s;%s\n' % (name, datetime.fromtimestamp(os.stat(full_name).st_mtime)))
```
When I run this code locally on my machine, everything is alright. I have my Python script at the root of the directory and it walks the files under the data directory. However, when I upload and run the exact same code in GAE, it doesn't work. It seems to me that the directory structure of my application is not exactly replicated in Google App Engine. **Where are the static files?**
Thanks! | <https://developers.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Static_file_handlers>
They're not where you think they are: GAE puts static content into GoogleFS, which is the equivalent of a CDN. The idea is that static content is meant to be served directly to your users, not to act as a file store you can manipulate. Furthermore, GAE has a 1K file limit, and it would be difficult to police this rule if you could manipulate your static file store. |
Generating and submitting a dynamic number of objects in a form with Django | 477,183 | 2 | 2009-01-25T05:02:30Z | 477,194 | 8 | 2009-01-25T05:17:59Z | [
"python",
"html",
"django",
"forms",
"web-applications"
] | I want to be able to update a dynamic number of objects within a single form using Django and I'm wondering what the best way to do this would be. An example of a similar situation may help.
Model:
```
class Customer(Model.models):
name = models.CharField(max_length=100)
active = models.BooleanField()
```
Form (I know I'm mixing view and template code here which doesn't work but this is a general idea for what the form is supposed to do):
```
customers = Customer.objects.all()
for c in customers:
print <li> {{ c.name }} <input type="checkbox" value="{{ c.active }}" name="?" />
```
How would I go about submitting a list of these objects? Would the best bet be to attach the id of the customer into each 'row' and then process based on the id? Is there a mechanism for submitting a list of tuples? What would be the ideal solution? | [Formsets](http://docs.djangoproject.com/en/dev/topics/forms/formsets/)!
Also, the equivalent for forms generated directly models are [model formsets](http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#id1). |
How do I simulate flip of biased coin in python? | 477,237 | 20 | 2009-01-25T06:13:00Z | 477,248 | 37 | 2009-01-25T06:17:59Z | [
"python",
"random",
"coin-flipping"
] | In an unbiased coin flip, H or T occurs 50% of the time.
But I want to simulate a coin which gives H with probability 'p' and T with probability '(1-p)'.
something like this:
```
def flip(p):
'''this function return H with probability p'''
# do something
return result
>> [flip(0.8) for i in xrange(10)]
[H,H,T,H,H,H,T,H,H,H]
``` | `random.random()` returns a *uniformly distributed* pseudo-random floating point number in the range [0, 1). This number is less than a given number `p` in the range [0,1) with probability `p`. Thus:
```
def flip(p):
return 'H' if random.random() < p else 'T'
```
Some experiments:
```
>>> N = 100
>>> flips = [flip(0.2) for i in xrange(N)]
>>> float(flips.count('H'))/N
0.17999999999999999 # Approximately 20% of the coins are heads
>>> N = 10000
>>> flips = [flip(0.2) for i in xrange(N)]
>>> float(flips.count('H'))/N
0.20549999999999999 # Better approximation
``` |
How do I simulate flip of biased coin in python? | 477,237 | 20 | 2009-01-25T06:13:00Z | 478,513 | 7 | 2009-01-25T23:24:57Z | [
"python",
"random",
"coin-flipping"
] | In an unbiased coin flip, H or T occurs 50% of the time.
But I want to simulate a coin which gives H with probability 'p' and T with probability '(1-p)'.
something like this:
```
def flip(p):
'''this function return H with probability p'''
# do something
return result
>> [flip(0.8) for i in xrange(10)]
[H,H,T,H,H,H,T,H,H,H]
``` | Do you want the "bias" to be based on a symmetric distribution? Or maybe an exponential distribution? Gaussian, anyone?
Well, here are all the methods, extracted from the `random` module documentation itself.
First, an example of triangular distribution:
```
print random.triangular(0, 1, 0.7)
```
> **`random.triangular(low, high, mode)`**:
>
> Return a random floating point number `N` such that `low <= N < high` and
> with the specified mode between those
> bounds. The `low` and `high` bounds
> default to *zero* and *one*. The `mode`
> argument defaults to the midpoint
> between the bounds, giving a symmetric
> distribution.
>
> **`random.betavariate(alpha, beta)`**:
>
> Beta distribution. Conditions on the parameters are `alpha > 0` and
> `beta > 0`. Returned values range between `0` and `1`.
>
> **`random.expovariate(lambd)`**:
>
> Exponential distribution. `lambd` is `1.0`
> divided by the desired mean. It should
> be *nonzero*. (The parameter would be
> called “*`lambda`*”, but that is a
> reserved word in Python.) Returned
> values range from `0` to *positive
> infinity* if `lambd` is positive, and
> from *negative infinity* to `0` if `lambd`
> is negative.
>
> **`random.gammavariate(alpha, beta)`**:
>
> Gamma distribution. (Not the gamma
> function!) Conditions on the
> parameters are `alpha > 0` and `beta > 0`.
>
> **`random.gauss(mu, sigma)`**:
>
> Gaussian distribution. `mu` is the mean, and `sigma` is the standard
> deviation. This is slightly faster
> than the `normalvariate()` function
> defined below.
>
> **`random.lognormvariate(mu, sigma)`**:
>
> Log normal distribution. If you take
> the natural logarithm of this
> distribution, you'll get a normal
> distribution with mean `mu` and standard
> deviation `sigma`. `mu` can have any
> value, and `sigma` must be greater than
> *zero*.
>
> **`random.normalvariate(mu, sigma)`**:
>
> Normal distribution. `mu` is the mean,
> and `sigma` is the standard deviation.
>
> **`random.vonmisesvariate(mu, kappa)`**:
>
> `mu` is the mean angle, expressed in
> radians between `0` and `2*pi`, and `kappa`
> is the concentration parameter, which
> must be greater than or equal to *zero*.
> If `kappa` is equal to *zero*, this
> distribution reduces to a uniform
> random angle over the range `0` to `2*pi`.
>
> **`random.paretovariate(alpha)`**:
>
> Pareto distribution. `alpha` is the
> shape parameter.
>
> **`random.weibullvariate(alpha, beta)`**
>
> Weibull distribution. `alpha` is the
> scale parameter and `beta` is the shape
> parameter. |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 477,506 | 22 | 2009-01-25T10:32:01Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | Increase the magnitude of `i` for the loop and then reduce it when you need it.
```
for i * 100 in range(0, 100, 10):
print i / 100.0
```
**EDIT: I honestly cannot remember why I thought that would work syntactically**
```
for i in range(0, 11, 1):
print i / 10.0
```
That should have the desired output. |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 477,507 | 7 | 2009-01-25T10:32:43Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | The range() built-in function returns a sequence of integer values, I'm afraid, so you can't use it to do a decimal step.
I'd say just use a while loop:
```
i = 0.0
while i <= 1.0:
print i
i += 0.1
```
If you're curious, Python is converting your 0.1 to 0, which is why it's telling you the argument can't be zero. |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 477,513 | 109 | 2009-01-25T10:35:25Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | Python's range() can only do integers, not floating point. In your specific case, you can use a list comprehension instead:
```
[x * 0.1 for x in range(0, 10)]
```
(Replace the call to range with that expression.)
For the more general case, you may want to write a custom function or generator. |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 477,610 | 132 | 2009-01-25T11:57:17Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | Building on ['xrange([start], stop[, step])'](http://docs.python.org/library/functions.html#xrange), you can define a generator that accepts and produces any type you choose (stick to types supporting `+` and `<`):
```
>>> def drange(start, stop, step):
... r = start
... while r < stop:
... yield r
... r += step
...
>>> i0=drange(0.0, 1.0, 0.1)
>>> ["%g" % x for x in i0]
['0', '0.1', '0.2', '0.3', '0.4', '0.5', '0.6', '0.7', '0.8', '0.9', '1']
>>>
``` |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 477,635 | 343 | 2009-01-25T12:26:08Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | You can also use the [NumPy](http://en.wikipedia.org/wiki/NumPy) library (which isn't part of standard library but is relatively easy to obtain) which has the `arange` function:
```
>>> import numpy as np
>>> np.arange(0,1,0.1)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
```
as well as the `linspace` function which lets you have control over what happens at the endpoint (non-trivial for floating point numbers when things won't always divide into the correct number of "slices"):
```
>>> np.linspace(0,1,11)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ])
>>> np.linspace(0,1,10,endpoint=False)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
``` |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 10,986,098 | 12 | 2012-06-11T19:10:53Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | Similar to [R's](http://en.wikipedia.org/wiki/R_%28programming_language%29) `seq` function, this one returns a sequence in any order given the correct step value. The last value is equal to the stop value.
```
def seq(start, stop, step=1):
n = int(round((stop - start)/float(step)))
if n > 1:
return([start + step*i for i in range(n+1)])
else:
return([])
```
### Results
```
seq(1, 5, 0.5)
```
> [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
```
seq(10, 0, -1)
```
> [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
```
seq(10, 0, -2)
```
> [10, 8, 6, 4, 2, 0] |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 12,310,785 | 7 | 2012-09-07T02:01:56Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | ```
import numpy as np
for i in np.arange(0, 1, 0.1):
print i
``` |
Python decimal range() step value | 477,486 | 283 | 2009-01-25T10:20:43Z | 23,715,956 | 16 | 2014-05-17T20:50:03Z | [
"python",
"floating-point",
"range"
] | Is there a way to step between 0 and 1 by 0.1?
I thought I could do it like the following, but it failed:
```
for i in range(0, 1, 0.1):
print i
```
Instead, it says that the step argument cannot be zero, which I did not expect. | `scipy` has a built-in function `arange` which generalizes Python's `range()` constructor to satisfy your requirement of float handling.
`from scipy import arange` |
Easy_install of wxpython has "setup script" error | 477,573 | 11 | 2009-01-25T11:25:02Z | 478,200 | 9 | 2009-01-25T20:19:45Z | [
"python",
"wxpython",
"easy-install"
] | I have an install of python 2.5 that fink placed in /sw/bin/. I use the easy install command
```
sudo /sw/bin/easy_install wxPython
```
to try to install wxpython and I get an error while trying to process wxPython-src-2.8.9.1.tar.bz2 that there is no setup script. Easy-install has worked for several other installations until this one. Any help on why it's busting now?
EDIT: The error occurs before dumping back to shell prompt.
*Reading <http://wxPython.org/download.php>
Best match: wxPython src-2.8.9.1
Downloading <http://downloads.sourceforge.net/wxpython/wxPython-src-2.8.9.1.tar.bz2>
Processing wxPython-src-2.8.9.1.tar.bz2
error: Couldn't find a setup script in /tmp/easy\_install-tNg6FG/wxPython-src-2.8.9.1.tar.bz2* | There is a simple reason why it's busting: there just is no setup.py in wxPython; wxPython does not use distutils for installation.
Instead, read the file README.1st.txt in the source distribution for instructions on how to install wxPython. |
What's the idiomatic Python equivalent to Django's 'regroup' template tag? | 477,820 | 12 | 2009-01-25T15:29:44Z | 477,839 | 26 | 2009-01-25T15:43:26Z | [
"python",
"django",
"django-templates"
] | <http://docs.djangoproject.com/en/dev/ref/templates/builtins/#regroup>
I can think of a few ways of doing it with loops but I'd particularly like to know if there is a neat one-liner. | Combine [`itertools.groupby`](http://docs.python.org/library/itertools.html#itertools.groupby) with [`operator.itemgetter`](http://docs.python.org/library/operator.html#operator.itemgetter) to get a pretty nice solution:
```
from operator import itemgetter
from itertools import groupby
key = itemgetter('gender')
iter = groupby(sorted(people, key=key), key=key)
for gender, people in iter:
print '===', gender, '==='
for person in people:
print person
``` |
Python and os.chroot | 478,359 | 2 | 2009-01-25T21:39:22Z | 478,396 | 7 | 2009-01-25T22:02:05Z | [
"python",
"linux",
"chroot"
] | I'm writing a web-server in Python as a hobby project. The code is targeted at \*NIX machines. I'm new to developing on Linux and even newer to Python itself.
I am worried about people breaking out of the folder that I'm using to serve up the web-site. The most obvious way to do this is to filter requests for documents like /../../etc/passwd. However, I'm worried that there might be clever ways to go up the directory tree that I'm not aware of and consequently my filter won't catch.
I'm considering using os.chroot so that the root directory is the web-site itself. Is this a safe way of protecting against these jail-breaking attacks? Are there any potential pitfalls to doing this that will hurt me down the road? | Yes, there are pitfalls. Security-wise:
1. If you run as root, there are always ways to break out. So first chroot(), then PERMANENTLY drop privileges to another user.
2. Put nothing which isn't absolutely required into the chroot tree. Especially no suid/sgid files, named pipes, unix domain sockets and device nodes.
Python-wise, your whole module loading gets screwed up. Python is simply not made for such scenarios. If your application is moderately complex, you *will* run into module-loading issues.
I think much more important than chrooting is running as a non-privileged user and simply using the file system permissions to keep that user from reading anything of importance. |
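For the filtering approach the question mentions, one defensive pattern is to resolve the requested path and verify it still lies under the web root (a sketch of my own, not from the answer; the directory names are made up):

```python
import os.path

def resolve_under_root(web_root, requested):
    # Resolve symlinks and '..' components, then insist the result
    # still lies inside the web root.
    root = os.path.realpath(web_root)
    candidate = os.path.realpath(os.path.join(root, requested.lstrip('/')))
    if candidate != root and not candidate.startswith(root + os.sep):
        raise ValueError('request escapes the web root: %r' % requested)
    return candidate

print(resolve_under_root('/srv/site', 'css/main.css'))
```

With this, a request like `../../etc/passwd` resolves to `/etc/passwd`, fails the prefix check, and is rejected, regardless of how the traversal was spelled.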
How to send multiple input field values with same name? | 478,382 | 12 | 2009-01-25T21:55:12Z | 478,406 | 20 | 2009-01-25T22:11:21Z | [
"python",
"django",
"django-forms"
] | I have an m2m field, let's say it has the name 'relations', and I want to allow the user to send as many relations as they want. I add new inputs to the HTML with JavaScript, all with the same name, like so:
```
<input type='text' name='relations' value='a' />
<input type='text' name='relations' value='b' />
```
in cleaned\_data i receive only value of second input ('b'). How to receive both? | I don't know how to do that with Forms, but if you want to grab the values in the raw way, here's how I'd do:
```
relations = request.POST.getlist('relations')
``` |
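Outside Django, the stdlib shows the same thing: a query string may carry several values per name, which is why a plain single-value lookup keeps only one of them. A quick sketch:

```python
from urllib.parse import parse_qs  # Python 3; on Python 2 use urlparse.parse_qs

# Two inputs sharing the name 'relations', as in the HTML above
query = "relations=a&relations=b"
data = parse_qs(query)
print(data["relations"])  # both values survive: ['a', 'b']
```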
Python regular expressions with more than 100 groups? | 478,458 | 13 | 2009-01-25T22:48:56Z | 478,470 | 11 | 2009-01-25T22:54:05Z | [
"python",
"regex"
] | Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit. Thanks. | There is a limit because it would take too much memory to store the complete state machine efficiently. I'd say that if you have more than 100 groups in your re, something is wrong either in the re itself or in the way you are using them. Maybe you need to split the input and work on smaller chunks or something. |
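One possible workaround, if the groups exist only for alternation and you never need to capture them: non-capturing groups `(?:...)` are not numbered, so they should not count toward the capturing-group limit. A sketch:

```python
import re

# 150 alternatives, well past the old 100-capturing-group limit,
# but written with non-capturing groups so no group numbers are used.
words = ["word%03d" % i for i in range(150)]
pattern = re.compile("|".join("(?:%s)" % re.escape(w) for w in words))

match = pattern.search("xx word142 yy")
print(match.group())  # word142
```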
How do I simulate biased die in python? | 479,236 | 3 | 2009-01-26T09:37:47Z | 479,299 | 17 | 2009-01-26T10:09:03Z | [
"python",
"probability"
] | I want to simulate N-sided biased die?
```
def roll(N,bias):
'''this function rolls N dimensional die with biasing provided'''
# do something
return result
>> N=6
>> bias=( 0.20,0.20,0.15,0.15,0.14,0.16,)
>> roll(N,bias)
2
``` | A little bit of math here.
A regular die will give each number 1-6 with equal probability, namely `1/6`. This is referred to as [uniform distribution](http://en.wikipedia.org/wiki/Uniform_distribution) (the discrete version of it, as opposed to the continuous version). Meaning that if `X` is a random variable describing the result of a single roll then `X~U[1,6]` - meaning `X` is distributed equally against all possible results of the die roll, 1 through 6.
This is equal to choosing a number in `[0,1)` while dividing it into 6 sections: `[0,1/6)`, `[1/6,2/6)`, `[2/6,3/6)`, `[3/6,4/6)`, `[4/6,5/6)`, `[5/6,1)`.
You are requesting a different distribution, which is biased.
The easiest way to achieve this is to divide the section `[0,1)` to 6 parts depending on the bias you want. So in your case you would want to divide it into the following:
`[0,0.2)`, `[0.2,0.4)`, `[0.4,0.55)`, `[0.55,0.7)`, `[0.7,0.84)`, `[0.84,1)`.
If you take a look at the [wikipedia entry](http://en.wikipedia.org/wiki/Uniform_distribution), you will see that in this case, the cumulative probability function will not be composed of 6 equal-length parts but rather of 6 parts which differ in length according to the *bias* you gave them. Same goes for the mass distribution.
Back to the question, depending on the language you are using, just translate this back to your die roll. In Python, here is a very sketchy, albeit working, example:
```
import random
sampleMassDist = (0.2, 0.1, 0.15, 0.15, 0.25, 0.15)
# assume sum of bias is 1
def roll(massDist):
randRoll = random.random() # in [0,1)
sum = 0
result = 1
for mass in massDist:
sum += mass
if randRoll < sum:
return result
result+=1
print roll(sampleMassDist)
``` |
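As a variation on the linear scan above, the cumulative sums can be precomputed and binary-searched; a Python 3 sketch of the same idea using `itertools.accumulate` and `bisect`:

```python
import bisect
import random
from itertools import accumulate

def roll(mass_dist):
    """Roll a biased die: binary-search random.random() in the cumulative sums."""
    cumulative = list(accumulate(mass_dist))  # e.g. (0.2, 0.2, ...) -> [0.2, 0.4, ...]
    # bisect returns how many cumulative bounds lie at or below the sample,
    # which is exactly the zero-based face index.
    return bisect.bisect(cumulative, random.random()) + 1

print(roll((0.2, 0.2, 0.15, 0.15, 0.14, 0.16)))  # a face from 1 to 6
```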
All nodeValue fields are None when parsing XML | 479,751 | 7 | 2009-01-26T13:15:03Z | 479,766 | 16 | 2009-01-26T13:21:44Z | [
"python",
"xml",
"rss",
"minidom"
] | I'm building a simple web-based RSS reader in Python, but I'm having trouble parsing the XML. I started out by trying some stuff in the Python command line.
```
>>> from xml.dom import minidom
>>> import urllib2
>>> url ='http://www.digg.com/rss/index.xml'
>>> xmldoc = minidom.parse(urllib2.urlopen(url))
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> titlenode = channelnode[0].getElementsByTagName("title")
>>> print titlenode[0]
<DOM Element: title at 0xb37440>
>>> print titlenode[0].nodeValue
None
```
I played around with this for a while, but the `nodeValue` of everything seems to be `None`. Yet if you look at the XML, there definitely are values there. What am I doing wrong? | For RSS feeds you should try the [Universal Feed Parser](http://code.google.com/p/feedparser/) library. It simplifies the handling of RSS feeds immensly.
```
import feedparser
d = feedparser.parse('http://www.digg.com/rss/index.xml')
title = d.channel.title
``` |
All nodeValue fields are None when parsing XML | 479,751 | 7 | 2009-01-26T13:15:03Z | 479,780 | 10 | 2009-01-26T13:26:05Z | [
"python",
"xml",
"rss",
"minidom"
] | I'm building a simple web-based RSS reader in Python, but I'm having trouble parsing the XML. I started out by trying some stuff in the Python command line.
```
>>> from xml.dom import minidom
>>> import urllib2
>>> url ='http://www.digg.com/rss/index.xml'
>>> xmldoc = minidom.parse(urllib2.urlopen(url))
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> titlenode = channelnode[0].getElementsByTagName("title")
>>> print titlenode[0]
<DOM Element: title at 0xb37440>
>>> print titlenode[0].nodeValue
None
```
I played around with this for a while, but the `nodeValue` of everything seems to be `None`. Yet if you look at the XML, there definitely are values there. What am I doing wrong? | This is the syntax you are looking for:
```
>>> print titlenode[0].firstChild.nodeValue
digg.com: Stories / Popular
```
Note that the node value is a logical descendant of the node itself. |
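A small helper can hide the `firstChild.nodeValue` chain; a sketch against an inline document (no network fetch, and the empty-node case is handled):

```python
from xml.dom import minidom

def get_text(parent, tag):
    """Text of the first <tag> descendant, or None if absent or empty."""
    nodes = parent.getElementsByTagName(tag)
    if not nodes or nodes[0].firstChild is None:
        return None
    return nodes[0].firstChild.nodeValue

doc = minidom.parseString("<channel><title>digg.com: Stories / Popular</title></channel>")
print(get_text(doc, "title"))    # digg.com: Stories / Popular
print(get_text(doc, "missing"))  # None
```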
How do you remove duplicates from a list in whilst preserving order? | 480,214 | 414 | 2009-01-26T15:43:58Z | 480,227 | 465 | 2009-01-26T15:47:01Z | [
"python",
"list",
"duplicates",
"unique"
] | Is there a built-in that removes duplicates from list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
```
def uniq(input):
output = []
for x in input:
if x not in output:
output.append(x)
return output
```
(Thanks to [unwind](http://stackoverflow.com/users/28169/unwind) for that [code sample](http://stackoverflow.com/questions/479897/how-do-you-remove-duplicates-from-a-list-in-python#479921).)
But I'd like to avail myself of a built-in or a more Pythonic idiom if possible.
Related question: [In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*?](http://stackoverflow.com/questions/89178/in-python-what-is-the-fastest-algorithm-for-removing-duplicates-from-a-list-so-t) | Here you have some alternatives: <http://www.peterbe.com/plog/uniqifiers-benchmark>
Fastest one:
```
def f7(seq):
seen = set()
seen_add = seen.add
return [x for x in seq if not (x in seen or seen_add(x))]
```
Why assign `seen.add` to `seen_add` instead of just calling `seen.add`? Python is a dynamic language, and resolving `seen.add` each iteration is more costly than resolving a local variable. `seen.add` could have changed between iterations, and the runtime isn't smart enough to rule that out. To play it safe, it has to check the object each time.
If you plan on using this function a lot on the same dataset, perhaps you would be better off with an ordered set: <http://code.activestate.com/recipes/528878/>
*O*(1) insertion, deletion and member-check per operation. |
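A quick sanity check of `f7`'s behaviour (the function is repeated here so the snippet is self-contained):

```python
def f7(seq):
    # Order-preserving dedup: a value passes the filter only the
    # first time it is seen; seen_add() returns None, so the `or`
    # arm both records the value and evaluates falsy.
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]

print(f7([1, 2, 1, 3, 2, 4]))  # [1, 2, 3, 4]
print(f7("ABBA"))              # ['A', 'B']
```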
How do you remove duplicates from a list in whilst preserving order? | 480,214 | 414 | 2009-01-26T15:43:58Z | 480,229 | 18 | 2009-01-26T15:47:14Z | [
"python",
"list",
"duplicates",
"unique"
] | Is there a built-in that removes duplicates from list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
```
def uniq(input):
output = []
for x in input:
if x not in output:
output.append(x)
return output
```
(Thanks to [unwind](http://stackoverflow.com/users/28169/unwind) for that [code sample](http://stackoverflow.com/questions/479897/how-do-you-remove-duplicates-from-a-list-in-python#479921).)
But I'd like to avail myself of a built-in or a more Pythonic idiom if possible.
Related question: [In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*?](http://stackoverflow.com/questions/89178/in-python-what-is-the-fastest-algorithm-for-removing-duplicates-from-a-list-so-t) | ```
from itertools import groupby
[ key for key,_ in groupby(sortedList)]
```
The list doesn't even have to be *sorted*; the sufficient condition is that equal values are grouped together.
**Edit: I assumed that "preserving order" implies that the list is actually ordered. If this is not the case, then the solution from MizardX is the right one.**
Community edit: This is however the most elegant way to "compress duplicate consecutive elements into a single element". |
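To make that caveat concrete — on an unsorted list `groupby` only collapses *consecutive* equal values:

```python
from itertools import groupby

# Unsorted input: the trailing 1s form a second group and survive.
print([key for key, _ in groupby([1, 1, 2, 2, 3, 1, 1])])          # [1, 2, 3, 1]
# Sorting first groups all equal values together, giving true dedup.
print([key for key, _ in groupby(sorted([1, 1, 2, 2, 3, 1, 1]))])  # [1, 2, 3]
```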
How do you remove duplicates from a list in whilst preserving order? | 480,214 | 414 | 2009-01-26T15:43:58Z | 15,990,766 | 28 | 2013-04-13T17:32:19Z | [
"python",
"list",
"duplicates",
"unique"
] | Is there a built-in that removes duplicates from list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
```
def uniq(input):
output = []
for x in input:
if x not in output:
output.append(x)
return output
```
(Thanks to [unwind](http://stackoverflow.com/users/28169/unwind) for that [code sample](http://stackoverflow.com/questions/479897/how-do-you-remove-duplicates-from-a-list-in-python#479921).)
But I'd like to avail myself of a built-in or a more Pythonic idiom if possible.
Related question: [In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*?](http://stackoverflow.com/questions/89178/in-python-what-is-the-fastest-algorithm-for-removing-duplicates-from-a-list-so-t) | ```
sequence = ['1', '2', '3', '3', '6', '4', '5', '6']
unique = []
[unique.append(item) for item in sequence if item not in unique]
```
unique → `['1', '2', '3', '6', '4', '5']` |
How do you remove duplicates from a list in whilst preserving order? | 480,214 | 414 | 2009-01-26T15:43:58Z | 16,780,848 | 14 | 2013-05-27T21:37:23Z | [
"python",
"list",
"duplicates",
"unique"
] | Is there a built-in that removes duplicates from list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
```
def uniq(input):
output = []
for x in input:
if x not in output:
output.append(x)
return output
```
(Thanks to [unwind](http://stackoverflow.com/users/28169/unwind) for that [code sample](http://stackoverflow.com/questions/479897/how-do-you-remove-duplicates-from-a-list-in-python#479921).)
But I'd like to avail myself of a built-in or a more Pythonic idiom if possible.
Related question: [In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*?](http://stackoverflow.com/questions/89178/in-python-what-is-the-fastest-algorithm-for-removing-duplicates-from-a-list-so-t) | I think if you wanna maintain the order,
## you can try this:
```
list1 = ['b','c','d','b','c','a','a']
list2 = list(set(list1))
list2.sort(key=list1.index)
print list2
```
## OR similarly you can do this:
```
list1 = ['b','c','d','b','c','a','a']
list2 = sorted(set(list1),key=list1.index)
print list2
```
## You can also do this:
```
list1 = ['b','c','d','b','c','a','a']
list2 = []
for i in list1:
if not i in list2:
        list2.append(i)
print list2
```
## It can also be written as this:
```
list1 = ['b','c','d','b','c','a','a']
list2 = []
[list2.append(i) for i in list1 if not i in list2]
print list2
``` |
How do you remove duplicates from a list in whilst preserving order? | 480,214 | 414 | 2009-01-26T15:43:58Z | 17,016,257 | 224 | 2013-06-10T02:47:13Z | [
"python",
"list",
"duplicates",
"unique"
] | Is there a built-in that removes duplicates from list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
```
def uniq(input):
output = []
for x in input:
if x not in output:
output.append(x)
return output
```
(Thanks to [unwind](http://stackoverflow.com/users/28169/unwind) for that [code sample](http://stackoverflow.com/questions/479897/how-do-you-remove-duplicates-from-a-list-in-python#479921).)
But I'd like to avail myself of a built-in or a more Pythonic idiom if possible.
Related question: [In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*?](http://stackoverflow.com/questions/89178/in-python-what-is-the-fastest-algorithm-for-removing-duplicates-from-a-list-so-t) | **Important Edit 2015**
As [@abarnert](http://stackoverflow.com/a/19279812/1219006) notes, the [`more_itertools`](https://pythonhosted.org/more-itertools/api.html) library (`pip install more_itertools`) contains a [`unique_everseen`](https://pythonhosted.org/more-itertools/api.html#more_itertools.unique_everseen) function that is built to solve this problem without any **unreadable** (`not seen.add`) **mutations** in list comprehensions. It is also the fastest solution:
```
>>> from more_itertools import unique_everseen
>>> items = [1, 2, 0, 1, 3, 2]
>>> list(unique_everseen(items))
[1, 2, 0, 3]
```
Just one simple library import and no hacks.
This comes from an implementation of the itertools recipe [`unique_everseen`](https://docs.python.org/3/library/itertools.html#itertools-recipes) which looks like:
```
def unique_everseen(iterable, key=None):
"List unique elements, preserving order. Remember all elements ever seen."
# unique_everseen('AAAABBBCCDAABBB') --> A B C D
# unique_everseen('ABBCcAD', str.lower) --> A B C D
seen = set()
seen_add = seen.add
if key is None:
for element in filterfalse(seen.__contains__, iterable):
seen_add(element)
yield element
else:
for element in iterable:
k = key(element)
if k not in seen:
seen_add(k)
yield element
```
---
In Python `2.7+` the accepted common idiom (this works but isn't optimized for speed, i would now use [`unique_everseen`](https://pythonhosted.org/more-itertools/api.html#more_itertools.unique_everseen)) for this uses [`collections.OrderedDict`](http://docs.python.org/3/library/collections.html#collections.OrderedDict):
Runtime: **O(N)**
```
>>> from collections import OrderedDict
>>> items = [1, 2, 0, 1, 3, 2]
>>> list(OrderedDict.fromkeys(items))
[1, 2, 0, 3]
```
This looks much nicer than:
```
seen = set()
[x for x in seq if x not in seen and not seen.add(x)]
```
and doesn't utilize the **ugly hack**:
```
not seen.add(x)
```
which relies on the fact that `set.add` is an in-place method that always returns `None` so `not None` evaluates to `True`.
Note however that the hack solution is faster in raw speed though it has the same runtime complexity O(N). |
How do you remove duplicates from a list in whilst preserving order? | 480,214 | 414 | 2009-01-26T15:43:58Z | 19,279,812 | 7 | 2013-10-09T18:27:09Z | [
"python",
"list",
"duplicates",
"unique"
] | Is there a built-in that removes duplicates from list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
```
def uniq(input):
output = []
for x in input:
if x not in output:
output.append(x)
return output
```
(Thanks to [unwind](http://stackoverflow.com/users/28169/unwind) for that [code sample](http://stackoverflow.com/questions/479897/how-do-you-remove-duplicates-from-a-list-in-python#479921).)
But I'd like to avail myself of a built-in or a more Pythonic idiom if possible.
Related question: [In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*?](http://stackoverflow.com/questions/89178/in-python-what-is-the-fastest-algorithm-for-removing-duplicates-from-a-list-so-t) | For another very late answer to another very old question:
The [`itertools` recipes](http://docs.python.org/3/library/itertools.html#itertools-recipes) have a function that does this, using the `seen` set technique, but:
* Handles a standard `key` function.
* Uses no unseemly hacks.
* Optimizes the loop by pre-binding `seen.add` instead of looking it up N times. (`f7` also does this, but some versions don't.)
* Optimizes the loop by using `ifilterfalse`, so you only have to loop over the unique elements in Python, instead of all of them. (You still iterate over all of them inside `ifilterfalse`, of course, but that's in C, and much faster.)
Is it actually faster than `f7`? It depends on your data, so you'll have to test it and see. If you want a list in the end, `f7` uses a listcomp, and there's no way to do that here. (You can directly `append` instead of `yield`ing, or you can feed the generator into the `list` function, but neither one can be as fast as the LIST\_APPEND inside a listcomp.) At any rate, usually, squeezing out a few microseconds is not going to be as important as having an easily-understandable, reusable, already-written function that doesn't require DSU when you want to decorate.
As with all of the recipes, it's also available in [`more-itertools`](https://pypi.python.org/pypi/more-itertools).
If you just want the no-`key` case, you can simplify it as:
```
def unique(iterable):
seen = set()
seen_add = seen.add
for element in itertools.ifilterfalse(seen.__contains__, iterable):
seen_add(element)
yield element
``` |
Is there a way in python to apply a list of regex patterns that are stored in a list to a single string? | 481,266 | 3 | 2009-01-26T20:43:24Z | 481,280 | 9 | 2009-01-26T20:49:24Z | [
"python",
"regex",
"list"
] | i have a list of regex patterns (stored in a list type) that I would like to apply to a string.
Does anyone know a good way to:
1. Apply every regex pattern in the list to the string
and
2. Call a different function that is associated with that pattern in the list if it matches.
I would like to do this in python if possible
thanks in advance. | ```
import re
def func1(s):
print s, "is a nice string"
def func2(s):
print s, "is a bad string"
funcs = {
r".*pat1.*": func1,
r".*pat2.*": func2
}
s = "Some string with both pat1 and pat2"
for pat, func in funcs.items():
if re.search(pat, s):
func(s)
```
The above code will call both functions for the string `s` because both patterns are matched. |
Can a lambda function call itself recursively in Python? | 481,692 | 49 | 2009-01-26T22:42:42Z | 481,755 | 48 | 2009-01-26T23:04:03Z | [
"python",
"recursion",
"lambda",
"y-combinator"
] | A regular function can contain a call to itself in its definition, no problem. I can't figure out how to do it with a lambda function though for the simple reason that the lambda function has no name to refer back to. Is there a way to do it? How? | The only way I can think of to do this amounts to giving the function a name:
```
fact = lambda x: 1 if x == 0 else x * fact(x-1)
```
or alternately, for earlier versions of python:
```
fact = lambda x: x == 0 and 1 or x * fact(x-1)
```
**Update**: using the ideas from the other answers, I was able to wedge the factorial function into a single unnamed lambda:
```
>>> map(lambda n: (lambda f, *a: f(f, *a))(lambda rec, n: 1 if n == 0 else n*rec(rec, n-1), n), range(10))
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
```
So it's possible, but not really recommended! |
Can a lambda function call itself recursively in Python? | 481,692 | 49 | 2009-01-26T22:42:42Z | 481,790 | 21 | 2009-01-26T23:16:29Z | [
"python",
"recursion",
"lambda",
"y-combinator"
] | A regular function can contain a call to itself in its definition, no problem. I can't figure out how to do it with a lambda function though for the simple reason that the lambda function has no name to refer back to. Is there a way to do it? How? | You can't directly do it, because it has no name. But with a helper function like the Y-combinator Lemmy pointed to, you can create recursion by passing the function as a parameter to itself (as strange as that sounds):
```
# helper function
def recursive(f, *p, **kw):
return f(f, *p, **kw)
def fib(n):
# The rec parameter will be the lambda function itself
return recursive((lambda rec, n: rec(rec, n-1) + rec(rec, n-2) if n>1 else 1), n)
# using map since we already started to do black functional programming magic
print map(fib, range(10))
```
This prints the first ten Fibonacci numbers: `[1, 1, 2, 3, 5, 8, 13, 21, 34, 55]`. |
Can a lambda function call itself recursively in Python? | 481,692 | 49 | 2009-01-26T22:42:42Z | 482,200 | 9 | 2009-01-27T02:52:33Z | [
"python",
"recursion",
"lambda",
"y-combinator"
] | A regular function can contain a call to itself in its definition, no problem. I can't figure out how to do it with a lambda function though for the simple reason that the lambda function has no name to refer back to. Is there a way to do it? How? | Yes. I have two ways to do it, and one was already covered. This is my preferred way.
```
(lambda v: (lambda n: n * __import__('types').FunctionType(
__import__('inspect').stack()[0][0].f_code,
dict(__import__=__import__, dict=dict)
)(n - 1) if n > 1 else 1)(v))(5)
``` |
Can a lambda function call itself recursively in Python? | 481,692 | 49 | 2009-01-26T22:42:42Z | 8,703,135 | 35 | 2012-01-02T16:34:15Z | [
"python",
"recursion",
"lambda",
"y-combinator"
] | A regular function can contain a call to itself in its definition, no problem. I can't figure out how to do it with a lambda function though for the simple reason that the lambda function has no name to refer back to. Is there a way to do it? How? | without reduce, map, named lambdas or python internals:
```
(lambda a:lambda v:a(a,v))(lambda s,x:1 if x==0 else x*s(s,x-1))(10)
``` |
Can a lambda function call itself recursively in Python? | 481,692 | 49 | 2009-01-26T22:42:42Z | 10,144,992 | 16 | 2012-04-13T16:50:05Z | [
"python",
"recursion",
"lambda",
"y-combinator"
] | A regular function can contain a call to itself in its definition, no problem. I can't figure out how to do it with a lambda function though for the simple reason that the lambda function has no name to refer back to. Is there a way to do it? How? | Contrary to what sth said, you CAN directly do this.
```
(lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))(n)
```
The first part is the [fixed-point combinator](http://en.wikipedia.org/wiki/Fixed-point_combinator#Example_in_Python) *Y* that facilitates recursion in lambda calculus
```
Y = (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))
```
the second part is the factorial function *fact* defined recursively
```
fact = (lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))
```
*Y* is applied to *fact* to form another lambda expression
```
F = Y(fact)
```
which is applied to the third part, *n*, which evaulates to the nth factorial
```
>>> n = 5
>>> F(n)
120
```
or equivalently
```
>>> (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))(5)
120
```
If however you prefer *fibs* to *facts* you can do that too using the same combinator
```
>>> (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: f(i - 1) + f(i - 2) if i > 1 else 1))(5)
8
``` |
How would you do the equivalent of preprocessor directives in Python? | 482,014 | 36 | 2009-01-27T01:10:43Z | 482,022 | 21 | 2009-01-27T01:14:24Z | [
"python",
"preprocessor",
"equivalent",
"directive"
] | Is there a way to do the following preprocessor directives in Python?
```
#if DEBUG
< do some code >
#else
< do some other code >
#endif
``` | I suspect you're gonna hate this answer. The way you do that in Python is
```
# code here
if DEBUG:
#debugging code goes here
else:
# other code here.
```
Since python is an interpreter, there's no preprocessing step to be applied, and no particular advantage to having a special syntax. |
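A common way to wire this up is to compute the flag once at import time and choose the implementation there, so the per-call cost is negligible; the environment-variable convention below is just an illustrative assumption:

```python
import os

# Hypothetical convention: enable debug output with DEBUG=1 in the environment.
DEBUG = os.environ.get("DEBUG", "") == "1"

if DEBUG:
    def trace(msg):
        print("TRACE:", msg)
else:
    def trace(msg):
        pass  # bound once at import time; calls are cheap no-ops

trace("only visible when DEBUG=1")
```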
How would you do the equivalent of preprocessor directives in Python? | 482,014 | 36 | 2009-01-27T01:10:43Z | 482,027 | 9 | 2009-01-27T01:16:14Z | [
"python",
"preprocessor",
"equivalent",
"directive"
] | Is there a way to do the following preprocessor directives in Python?
```
#if DEBUG
< do some code >
#else
< do some other code >
#endif
``` | You can use the preprocessor in Python. Just run your scripts through the cpp (C-Preprocessor) in your bin directory. However I've done this with Lua and the benefits of easy interpretation have outweighed the more complex compilation IMHO. |
How would you do the equivalent of preprocessor directives in Python? | 482,014 | 36 | 2009-01-27T01:10:43Z | 482,244 | 79 | 2009-01-27T03:25:06Z | [
"python",
"preprocessor",
"equivalent",
"directive"
] | Is there a way to do the following preprocessor directives in Python?
```
#if DEBUG
< do some code >
#else
< do some other code >
#endif
``` | There's `__debug__`, which is a special value that the compiler does preprocess.
```
if __debug__:
print "If this prints, you're not running python -O."
else:
print "If this prints, you are running python -O!"
```
`__debug__` will be replaced with a constant 0 or 1 by the compiler, and the optimizer will remove any `if 0:` lines before your source is interpreted. |
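The `assert` statement is built on the same mechanism: under `python -O` the whole statement is stripped, which makes it the closest built-in analogue to an `#if DEBUG` block:

```python
def checked_sqrt(x):
    # Removed completely under `python -O`, just like an `#if DEBUG` block
    assert x >= 0, "negative input"
    return x ** 0.5

print(checked_sqrt(4))  # 2.0
print(__debug__)        # True normally, False under -O
```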
How would you do the equivalent of preprocessor directives in Python? | 482,014 | 36 | 2009-01-27T01:10:43Z | 2,987,538 | 19 | 2010-06-07T06:46:27Z | [
"python",
"preprocessor",
"equivalent",
"directive"
] | Is there a way to do the following preprocessor directives in Python?
```
#if DEBUG
< do some code >
#else
< do some other code >
#endif
``` | I wrote a python preprocessor called pypreprocessor that does exactly what you're describing.
[The source and documentation is available on Google Code](http://code.google.com/p/pypreprocessor/).
[The package can also be downloaded/installed through the PYPI](http://pypi.python.org/pypi/pypreprocessor/).
Here's an example to accomplish what you're describing.
```
from pypreprocessor import pypreprocessor
pypreprocessor.parse()
#define debug
#ifdef debug
print('The source is in debug mode')
#else
print('The source is not in debug mode')
#endif
```
pypreprocessor is capable of a lot more than just on-the-fly preprocessing. To see more use case examples check out the project on Google Code.
**Update: More info on pypreprocessor**
The way I accomplish the preprocessing is simple. From the example above, the preprocessor imports a pypreprocessor object that's created in the pypreprocessor module. When you call parse() on the preprocessor it self-consumes the file that it is imported into and generates a temp copy of itself that comments out all of the preprocessor code (to avoid the preprocessor from calling itself recursively in an infinite loop) and comments out all of the unused portions.
Commenting out the lines, as opposed to removing them, is necessary to preserve line numbers on error tracebacks if the module throws an exception or crashes. And I've even gone as far as to rewrite the error traceback to reflect the proper file name of the module that crashed.
Then, the generated file containing the postprocessed code is executed on-the-fly.
The upside to using this method over just adding a bunch of if statements inline in the code is, there will be no execution time wasted evaluating useless statements because the commented out portions of the code will be excluded from the compiled .pyc files.
The downside (and my original reason for creating the module) is that you can't run both python 2x and python 3x in the same file because Python's interpreter runs a full syntax check before executing the code and will reject any version specific code before the preprocessor is allowed to run ::sigh::. My original goal was to be able to develop 2x and 3x code side-by-side in the same file that would create version specific bytecode depending on what it is running on.
Either way, the preprocessor module is still very useful for implementing common c-style preprocessing capabilities. As well as, the preprocessor is capable of outputting the postprocessed code to a file for later use if you want.
Also, if you want to generate a version that has all of the preprocessor directives as well as any of the #ifdefs that are excluded removed it's as simple as setting a flag in the preprocessor code before calling parse(). This makes removing unwanted code from a version specific source file a one step process (vs crawling through the code and removing if statements manually). |
Replace Nested For Loops... or not | 482,146 | 5 | 2009-01-27T02:27:28Z | 482,192 | 14 | 2009-01-27T02:46:03Z | [
"python",
"loops",
"for-loop",
"nested-loops"
] | I have a script that loops through a series of four (or less) characters strings. For example:
```
aaaa
aaab
aaac
aaad
```
If have been able to implement it with nested for loops like so:
```
chars = string.digits + string.uppercase + string.lowercase
for a in chars:
print '%s' % a
for b in chars:
print '%s%s' % (a, b)
for c in chars:
print '%s%s%s' % (a, b, c)
for d in chars:
print '%s%s%s%s' % (a, b, c, d)
```
Is this sort of loop nesting a bad thing, and if so, what would be a better way of accomplishing what I am doing? | ```
import string
import itertools
chars = string.digits + string.letters
MAX_CHARS = 4
for nletters in range(MAX_CHARS):
for word in itertools.product(chars, repeat=nletters + 1):
print (''.join(word))
```
That'll print all **`15018570`** words you're looking for. If you want more/less words just change the `MAX_CHARS` variable. It will still have just two `for`s for any number of chars, and you don't have to repeat yourself. And it's pretty readable. |
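As a quick sanity check, the quoted word count is just the geometric sum 62¹ + 62² + 62³ + 62⁴ (62 characters — 10 digits plus 52 letters — in words of one to four characters):

```python
# Number of words of length 1..4 over a 62-character alphabet
total = sum(62 ** n for n in range(1, 5))
print(total)  # 15018570
```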
Beginner-level Python threading problems | 482,263 | 7 | 2009-01-27T03:38:45Z | 487,110 | 9 | 2009-01-28T10:13:37Z | [
"python",
"multithreading",
"pygtk"
] | As someone new to GUI development in Python (with pyGTK), I've just started learning about threading. To test out my skills, I've written a simple little GTK interface with a start/stop button. The goal is that when it is clicked, a thread starts that quickly increments a number in the text box, while keeping the GUI responsive.
I've got the GUI working just fine, but am having problems with the threading. It is probably a simple problem, but my mind is about fried for the day. Below I have pasted first the trackback from the Python interpreter, followed by the code. You can go to <http://drop.io/pxgr5id> to download it. I'm using bzr for revision control, so if you want to make a modification and re-drop it, please commit the changes. I'm also pasting the code at <http://dpaste.com/113388/> because it can have line numbers, and this markdown stuff is giving me a headache.
Update 27 January, 15:52 EST:
Slightly updated code can be found here: <http://drop.io/threagui/asset/thread-gui-rev3-tar-gz>
**Traceback**
```
crashsystems@crashsystems-laptop:~/Desktop/thread-gui$ python threadgui.py
btnStartStop clicked
Traceback (most recent call last):
File "threadgui.py", line 39, in on_btnStartStop_clicked
self.thread.stop()
File "threadgui.py", line 20, in stop
self.join()
File "/usr/lib/python2.5/threading.py", line 583, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
btnStartStop clicked
threadStop = 1
btnStartStop clicked
threadStop = 0
btnStartStop clicked
Traceback (most recent call last):
File "threadgui.py", line 36, in on_btnStartStop_clicked
self.thread.start()
File "/usr/lib/python2.5/threading.py", line 434, in start
raise RuntimeError("thread already started")
RuntimeError: thread already started
btnExit clicked
exit() called
```
**Code**
```
#!/usr/bin/env python
import gtk, threading
class ThreadLooper (threading.Thread):
def __init__ (self, sleep_interval, function, args=[], kwargs={}):
threading.Thread.__init__(self)
self.sleep_interval = sleep_interval
self.function = function
self.args = args
self.kwargs = kwargs
self.finished = threading.Event()
def stop (self):
self.finished.set()
self.join()
def run (self):
while not self.finished.isSet():
self.finished.wait(self.sleep_interval)
self.function(*self.args, **self.kwargs)
class ThreadGUI:
# Define signals
def on_btnStartStop_clicked(self, widget, data=None):
print "btnStartStop clicked"
if(self.threadStop == 0):
self.threadStop = 1
self.thread.start()
else:
self.threadStop = 0
self.thread.stop()
print "threadStop = " + str(self.threadStop)
def on_btnMessageBox_clicked(self, widget, data=None):
print "btnMessageBox clicked"
self.lblMessage.set_text("This is a message!")
self.msgBox.show()
def on_btnExit_clicked(self, widget, data=None):
print "btnExit clicked"
self.exit()
def on_btnOk_clicked(self, widget, data=None):
print "btnOk clicked"
self.msgBox.hide()
def on_mainWindow_destroy(self, widget, data=None):
print "mainWindow destroyed!"
self.exit()
def exit(self):
print "exit() called"
self.threadStop = 1
gtk.main_quit()
def threadLoop(self):
# This will run in a thread
self.txtThreadView.set_text(str(self.threadCount))
print "hello world"
self.threadCount += 1
def __init__(self):
# Connect to the xml GUI file
builder = gtk.Builder()
builder.add_from_file("threadgui.xml")
# Connect to GUI widgets
self.mainWindow = builder.get_object("mainWindow")
self.txtThreadView = builder.get_object("txtThreadView")
self.btnStartStop = builder.get_object("btnStartStop")
self.msgBox = builder.get_object("msgBox")
self.btnMessageBox = builder.get_object("btnMessageBox")
self.btnExit = builder.get_object("btnExit")
self.lblMessage = builder.get_object("lblMessage")
self.btnOk = builder.get_object("btnOk")
# Connect the signals
builder.connect_signals(self)
# This global will be used for signaling the thread to stop.
self.threadStop = 1
# The thread
self.thread = ThreadLooper(0.1, self.threadLoop, (1,0,-1))
self.threadCounter = 0
if __name__ == "__main__":
# Start GUI instance
GUI = ThreadGUI()
GUI.mainWindow.show()
gtk.main()
``` | Threading with PyGTK is a bit tricky if you want to do it right. Basically, you should not update the GUI from any thread other than the main thread (a common limitation in GUI libraries). In PyGTK this is usually done with a queue of messages (for communication between the workers and the GUI) that is read periodically from a timeout or idle function. I once gave a presentation on this topic at my local LUG; you can grab the example code for it from the [Google Code repository](http://code.google.com/p/leechfm/). Have a look at the `MainWindow` class in `forms/frmmain.py`, especially the method `_pulse()` and what is done in `on_entry_activate()` (the thread is started there, and the idle timer is created).
```
def on_entry_activate(self, entry):
text = entry.get_text().strip()
if text:
store = entry.get_completion().get_model()
if text not in [row[0] for row in store]:
store.append((text, ))
thread = threads.RecommendationsFetcher(text, self.queue)# <- 1
self.idle_timer = gobject.idle_add(self._pulse)# <- 2
tv_results = self.widgets.get_widget('tv_results')
model = tv_results.get_model()
model.clear()
thread.setDaemon(True)# <- 3
progress_update = self.widgets.get_widget('progress_update')
progress_update.show()
thread.start()# <- 4
```
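Stripped of the GTK specifics, the pattern above boils down to a worker thread posting results onto a thread-safe queue that the main loop drains; in the real application the drain loop is the body of the idle/timeout callback. A minimal, GTK-free sketch of that pattern (names are illustrative, not from the linked repository):

```python
import threading

try:
    from queue import Queue   # Python 3
except ImportError:
    from Queue import Queue   # Python 2

q = Queue()

def worker():
    # The background thread never touches the GUI; it only posts
    # results onto the queue, followed by a sentinel when done.
    for i in range(5):
        q.put(i)
    q.put(None)

t = threading.Thread(target=worker)
t.daemon = True   # like step 3 above: don't block application exit
t.start()

# In a real GUI this loop body would live inside the idle/timeout
# callback, running on the main thread.
results = []
while True:
    item = q.get()
    if item is None:
        break
    results.append(item)

print(results)
```

The key property is that only the main thread ever updates widgets; workers communicate with it exclusively through the queue.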
This way, the application updates the GUI when it is "idle" (in GTK terms), causing no freezes.
* 1: create thread
* 2: create idle timer
* 3: daemonize thread so the app can be closed without waiting for thread completion
* 4: start thread |
How do I convert a string to a double in Python? | 482,410 | 89 | 2009-01-27T05:40:38Z | 482,423 | 144 | 2009-01-27T05:49:18Z | [
"python"
] | I would like to know how to convert a string containing digits to a double. | ```
>>> x = "2342.34"
>>> float(x)
2342.3400000000001
```
There you go. Use `float` (which is almost always implemented as a C double). |
How do I convert a string to a double in Python? | 482,410 | 89 | 2009-01-27T05:40:38Z | 6,385,389 | 25 | 2011-06-17T11:56:28Z | [
"python"
] | I would like to know how to convert a string containing digits to a double. | The `Decimal` class from the `decimal` module might be more in line with what you are looking for:
```
>>> from decimal import Decimal
>>> x = "234243.434"
>>> print Decimal(x)
234243.434
``` |
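As a quick illustration of why `Decimal` may be preferable to `float` for strings like these (a sketch using only the standard library): binary floats cannot represent `0.1` exactly, while `Decimal` keeps the value exactly as written.

```python
from decimal import Decimal

# Ten additions of the binary float 0.1 drift away from 1.0 ...
print(sum([0.1] * 10))             # 0.9999999999999999
# ... while Decimal("0.1") stays exact.
print(sum([Decimal("0.1")] * 10))  # 1.0
```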
Python reverse / invert a mapping | 483,666 | 269 | 2009-01-27T14:46:09Z | 483,680 | 18 | 2009-01-27T14:49:46Z | [
"python",
"dictionary",
"mapping",
"inverse"
] | Given a dictionary like so:
```
my_map = { 'a': 1, 'b':2 }
```
How can one invert this map to get:
```
inv_map = { 1: 'a', 2: 'b' }
```
**EDITOR NOTE: `map` changed to `my_map` to avoid conflicts with the built-in function, `map`. Some comments may be affected below.** | Try this:
```
inv_map = dict(zip(my_map.values(), my_map.keys()))
```
(Note that [the Python docs on dictionary views](https://docs.python.org/library/stdtypes.html#dictionary-view-objects) explicitly guarantee that `.keys()` and `.values()` have their elements in the same order, which allows the approach above to work.)
Alternatively:
```
inv_map = dict((my_map[k], k) for k in my_map)
```
or using python 3.0's dict comprehensions
```
inv_map = {my_map[k] : k for k in my_map}
``` |
Python reverse / invert a mapping | 483,666 | 269 | 2009-01-27T14:46:09Z | 483,685 | 144 | 2009-01-27T14:50:22Z | [
"python",
"dictionary",
"mapping",
"inverse"
] | Given a dictionary like so:
```
my_map = { 'a': 1, 'b':2 }
```
How can one invert this map to get:
```
inv_map = { 1: 'a', 2: 'b' }
```
**EDITOR NOTE: `map` changed to `my_map` to avoid conflicts with the built-in function, `map`. Some comments may be affected below.** | Assuming that the values in the dict are unique:
```
dict((v, k) for k, v in my_map.iteritems())
``` |
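If you want that uniqueness assumption enforced rather than silently violated, a small defensive sketch (not part of the original answer, and written with `.items()` so it also runs on Python 3):

```python
my_map = {'a': 1, 'b': 2}

# Inverting a dict silently drops keys when two of them share a
# value, so check uniqueness up front.
assert len(set(my_map.values())) == len(my_map), "values are not unique"

inv_map = dict((v, k) for k, v in my_map.items())
print(inv_map)  # {1: 'a', 2: 'b'}
```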
Python reverse / invert a mapping | 483,666 | 269 | 2009-01-27T14:46:09Z | 483,833 | 352 | 2009-01-27T15:24:56Z | [
"python",
"dictionary",
"mapping",
"inverse"
] | Given a dictionary like so:
```
my_map = { 'a': 1, 'b':2 }
```
How can one invert this map to get:
```
inv_map = { 1: 'a', 2: 'b' }
```
**EDITOR NOTE: `map` changed to `my_map` to avoid conflicts with the built-in function, `map`. Some comments may be affected below.** | For Python 2.7.x
```
inv_map = {v: k for k, v in my_map.iteritems()}
```
For Python 3+:
```
inv_map = {v: k for k, v in my_map.items()}
``` |
Python reverse / invert a mapping | 483,666 | 269 | 2009-01-27T14:46:09Z | 485,368 | 67 | 2009-01-27T21:33:26Z | [
"python",
"dictionary",
"mapping",
"inverse"
] | Given a dictionary like so:
```
my_map = { 'a': 1, 'b':2 }
```
How can one invert this map to get:
```
inv_map = { 1: 'a', 2: 'b' }
```
**EDITOR NOTE: `map` changed to `my_map` to avoid conflicts with the built-in function, `map`. Some comments may be affected below.** | If the values in `my_map` aren't unique:
```
inv_map = {}
for k, v in my_map.iteritems():
inv_map[v] = inv_map.get(v, [])
inv_map[v].append(k)
``` |
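The same accumulate-into-lists idea reads a little more compactly with `collections.defaultdict` (a sketch using `.items()` so it also runs on Python 3):

```python
from collections import defaultdict

my_map = {'a': 1, 'b': 2, 'c': 1}   # note the duplicated value 1

inv_map = defaultdict(list)
for k, v in my_map.items():
    inv_map[v].append(k)            # missing keys start as []

print(dict(inv_map))  # {1: ['a', 'c'], 2: ['b']}
```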
Python reverse / invert a mapping | 483,666 | 269 | 2009-01-27T14:46:09Z | 1,679,702 | 18 | 2009-11-05T10:41:30Z | [
"python",
"dictionary",
"mapping",
"inverse"
] | Given a dictionary like so:
```
my_map = { 'a': 1, 'b':2 }
```
How can one invert this map to get:
```
inv_map = { 1: 'a', 2: 'b' }
```
**EDITOR NOTE: `map` changed to `my_map` to avoid conflicts with the built-in function, `map`. Some comments may be affected below.** | ```
def inverse_mapping(f):
return f.__class__(map(reversed, f.items()))
``` |
Python reverse / invert a mapping | 483,666 | 269 | 2009-01-27T14:46:09Z | 22,047,530 | 9 | 2014-02-26T16:35:03Z | [
"python",
"dictionary",
"mapping",
"inverse"
] | Given a dictionary like so:
```
my_map = { 'a': 1, 'b':2 }
```
How can one invert this map to get:
```
inv_map = { 1: 'a', 2: 'b' }
```
**EDITOR NOTE: `map` changed to `my_map` to avoid conflicts with the built-in function, `map`. Some comments may be affected below.** | Another, more functional, way:
```
my_map = { 'a': 1, 'b':2 }
dict(map(reversed, my_map.iteritems()))
``` |
Problem with encoding in Django templates | 484,338 | 5 | 2009-01-27T17:23:20Z | 484,358 | 8 | 2009-01-27T17:29:17Z | [
"python",
"django",
"unicode",
"internationalization",
"django-templates"
] | I'm having problems using {% ifequal s1 "some text" %} to compare strings with extended characters in Django templates. When string s1 contains ascii characters >127, I get exceptions in the template rendering. What am I doing wrong? I'm using UTF-8 coding throughout the rest of application in both the data, templates and Python code without any problems.
views.py
```
def test(request):
return render_to_response("test.html", {
"s1": "dados",
"s2": "aprovação",
}
)
```
test.html
```
s1={{s1}}<br>
s2={{s2}}<br>
{% ifequal s1 "dados" %}
s1="dados" is true
{% endifequal %}
{% ifequal s1 "aprovação" %}
s1="aprovação" is true
{% endifequal %}
{% comment %}
The following two comparions cause the following exception:
Caught an exception while rendering: 'ascii' codec can't decode byte 0xc3 in position 6: ordinal not in range(128)
{% ifequal s2 "dados" %}
s2="dados" is true
{% endifequal %}
{% ifequal s2 "aprovação" %}
s2="aprovação" is true
{% endifequal %}
{% endcomment %}
{% ifequal s2 u"dados" %}
s2="dados" is true
{% endifequal %}
{% comment %}
The following comparison causes the following exception:
Caught an exception while rendering: 'ascii' codec can't encode characters in position 8-9: ordinal not in range(128)
{% ifequal s2 u"aprovação" %}
s2="aprovação" is true
{% endifequal %}
{% endcomment %}
```
Output
```
s1=dados
s2=aprovação
s1="dados" is true
``` | Sometimes there's nothing like describing a problem to someone else to help you solve it. :) I should have marked the Python strings as Unicode like this and everything works now:
```
def test(request):
return render_to_response("test.html", {
"s1": u"dados",
"s2": u"aprovação",
}
)
``` |
Why do I get TypeError: can't multiply sequence by non-int of type 'float'? | 485,789 | 33 | 2009-01-27T23:11:02Z | 485,808 | 36 | 2009-01-27T23:19:00Z | [
"python"
] | I am trying to get a sale amount (by input) to be multiplied by a defined sales tax (0.08) and then have it print the total amount (sales tax times sale amount).
I run into this error. Anyone know what could be wrong or have any suggestions?
```
salesAmount = raw_input (["Insert sale amount here \n"])
['Insert sale amount here \n']20.99
>>> salesTax = 0.08
>>> totalAmount = salesAmount * salesTax
Traceback (most recent call last):
File "<pyshell#57>", line 1, in <module>
totalAmount = salesAmount * salesTax
TypeError: can't multiply sequence by non-int of type 'float'
``` | `raw_input` returns a string (a sequence of characters). In Python, multiplying a string by a float has no defined meaning (while multiplying a string by an integer does: `"AB" * 3` is `"ABABAB"`; how much would `"L" * 3.14` be? Please do not reply `"LLL|"`). You need to parse the string into a numerical value.
You might want to try:
```
salesAmount = float(raw_input("Insert sale amount here\n"))
``` |
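Since `float()` raises `ValueError` on non-numeric input, you may want to guard the conversion; a small sketch (the function name is made up for illustration):

```python
def parse_amount(text):
    """Return the string parsed as a float, or None if non-numeric."""
    try:
        return float(text)
    except ValueError:
        return None

print(parse_amount("20.99"))  # 20.99
print(parse_amount("oops"))   # None
```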
Why do I get TypeError: can't multiply sequence by non-int of type 'float'? | 485,789 | 33 | 2009-01-27T23:11:02Z | 27,488,582 | 7 | 2014-12-15T16:28:02Z | [
"python"
] | I am trying to get a sale amount (by input) to be multiplied by a defined sales tax (0.08) and then have it print the total amount (sales tax times sale amount).
I run into this error. Anyone know what could be wrong or have any suggestions?
```
salesAmount = raw_input (["Insert sale amount here \n"])
['Insert sale amount here \n']20.99
>>> salesTax = 0.08
>>> totalAmount = salesAmount * salesTax
Traceback (most recent call last):
File "<pyshell#57>", line 1, in <module>
totalAmount = salesAmount * salesTax
TypeError: can't multiply sequence by non-int of type 'float'
``` | Maybe this will help others in the future - I had the same error while trying to multiply a float and a list of floats. The thing is that everyone here talked about multiplying a float with a string (but here all my elements were floats all along), so the problem was actually using the \* operator on a list.
For example:
```
import math
import numpy as np
alpha = 0.2
beta=1-alpha
C = (-math.log(1-beta))/alpha
coff = [0.0,0.01,0.0,0.35,0.98,0.001,0.0]
coff *= C
```
The error:
```
coff *= C
TypeError: can't multiply sequence by non-int of type 'float'
```
The solution - convert the list to numpy array:
```
coff = np.asarray(coff) * C
``` |
Python shelve module question | 486,490 | 6 | 2009-01-28T04:32:14Z | 486,676 | 7 | 2009-01-28T06:38:26Z | [
"python",
"shelve"
] | Does the Python shelve module have any protection built in to make sure two processes aren't writing to a file at the same time? | The shelve module uses an underlying database package (such as dbm, [gdbm](http://docs.python.org/library/gdbm.html#module-gdbm) or bsddb) .
The [*restrictions* paragraph](http://docs.python.org/library/shelve.html#restrictions) says (my emphasis):
> The shelve module **does not support concurrent read/write access to shelved objects**. (Multiple simultaneous read accesses are safe.) When a program has a shelf open for writing, no other program should have it open for reading or writing. Unix file locking can be used to solve this, but this differs across Unix versions and requires knowledge about the database implementation used.
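A minimal sketch of the Unix file-locking approach the docs allude to, serializing writers with an exclusive `flock` on a sidecar lock file (POSIX-only; the paths and key name are made up for the demo):

```python
import fcntl
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data")
lock_path = path + ".lock"

# Writer: hold an exclusive lock for the duration of the write.
with open(lock_path, "w") as lock:
    fcntl.flock(lock, fcntl.LOCK_EX)
    db = shelve.open(path)
    db["answer"] = 42
    db.close()
    fcntl.flock(lock, fcntl.LOCK_UN)

# Reader: a shared lock allows multiple simultaneous readers.
with open(lock_path, "r") as lock:
    fcntl.flock(lock, fcntl.LOCK_SH)
    db = shelve.open(path)
    value = db["answer"]
    db.close()
    fcntl.flock(lock, fcntl.LOCK_UN)

print(value)  # 42
```

As the docs warn, locking details differ across Unix versions, so treat this as a pattern rather than portable machinery.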
Conclusion: it depends on the OS and the underlying DB. To keep things portable, do not rely on concurrent write access. |
Ruby equivalent of virtualenv? | 486,995 | 126 | 2009-01-28T09:24:14Z | 487,045 | 15 | 2009-01-28T09:49:35Z | [
"python",
"ruby",
"virtualenv"
] | Is there something similar to the Python utility [virtualenv](http://pypi.python.org/pypi/virtualenv)?
Basically it allows you to install Python packages into a sandboxed environment, so `easy_install django` doesn't go in your system-wide site-packages directory, it would go in the virtualenv-created directory.
For example:
```
$ virtualenv test
New python executable in test/bin/python
Installing setuptools...cd .........done.
$ cd test/
$ source bin/activate
(test)$ easy_install tvnamer
Searching for tvnamer
Best match: tvnamer 0.5.1
Processing tvnamer-0.5.1-py2.5.egg
Adding tvnamer 0.5.1 to easy-install.pth file
Installing tvnamer script to /Users/dbr/test/bin
Using /Library/Python/2.5/site-packages/tvnamer-0.5.1-py2.5.egg
Processing dependencies for tvnamer
Finished processing dependencies for tvnamer
(test)$ which tvnamer
/Users/dbr/test/bin/tvnamer
```
Is there something like this for RubyGems? | I think you'll like [sandbox](http://github.com/nkryptic/sandbox/tree/master). |
Ruby equivalent of virtualenv? | 486,995 | 126 | 2009-01-28T09:24:14Z | 3,801,789 | 63 | 2010-09-27T07:20:31Z | [
"python",
"ruby",
"virtualenv"
] | Is there something similar to the Python utility [virtualenv](http://pypi.python.org/pypi/virtualenv)?
Basically it allows you to install Python packages into a sandboxed environment, so `easy_install django` doesn't go in your system-wide site-packages directory, it would go in the virtualenv-created directory.
For example:
```
$ virtualenv test
New python executable in test/bin/python
Installing setuptools...cd .........done.
$ cd test/
$ source bin/activate
(test)$ easy_install tvnamer
Searching for tvnamer
Best match: tvnamer 0.5.1
Processing tvnamer-0.5.1-py2.5.egg
Adding tvnamer 0.5.1 to easy-install.pth file
Installing tvnamer script to /Users/dbr/test/bin
Using /Library/Python/2.5/site-packages/tvnamer-0.5.1-py2.5.egg
Processing dependencies for tvnamer
Finished processing dependencies for tvnamer
(test)$ which tvnamer
/Users/dbr/test/bin/tvnamer
```
Is there something like this for RubyGems? | [RVM](http://rvm.io/) works closer to how virtualenv works since it lets you sandbox different ruby versions and their gems, etc. |
Ruby equivalent of virtualenv? | 486,995 | 126 | 2009-01-28T09:24:14Z | 12,663,154 | 14 | 2012-09-30T16:58:20Z | [
"python",
"ruby",
"virtualenv"
] | Is there something similar to the Python utility [virtualenv](http://pypi.python.org/pypi/virtualenv)?
Basically it allows you to install Python packages into a sandboxed environment, so `easy_install django` doesn't go in your system-wide site-packages directory, it would go in the virtualenv-created directory.
For example:
```
$ virtualenv test
New python executable in test/bin/python
Installing setuptools...cd .........done.
$ cd test/
$ source bin/activate
(test)$ easy_install tvnamer
Searching for tvnamer
Best match: tvnamer 0.5.1
Processing tvnamer-0.5.1-py2.5.egg
Adding tvnamer 0.5.1 to easy-install.pth file
Installing tvnamer script to /Users/dbr/test/bin
Using /Library/Python/2.5/site-packages/tvnamer-0.5.1-py2.5.egg
Processing dependencies for tvnamer
Finished processing dependencies for tvnamer
(test)$ which tvnamer
/Users/dbr/test/bin/tvnamer
```
Is there something like this for RubyGems? | No one seems to have mentioned [rbenv](http://rbenv.org/). |
Ruby equivalent of virtualenv? | 486,995 | 126 | 2009-01-28T09:24:14Z | 12,663,612 | 34 | 2012-09-30T17:57:19Z | [
"python",
"ruby",
"virtualenv"
] | Is there something similar to the Python utility [virtualenv](http://pypi.python.org/pypi/virtualenv)?
Basically it allows you to install Python packages into a sandboxed environment, so `easy_install django` doesn't go in your system-wide site-packages directory, it would go in the virtualenv-created directory.
For example:
```
$ virtualenv test
New python executable in test/bin/python
Installing setuptools...cd .........done.
$ cd test/
$ source bin/activate
(test)$ easy_install tvnamer
Searching for tvnamer
Best match: tvnamer 0.5.1
Processing tvnamer-0.5.1-py2.5.egg
Adding tvnamer 0.5.1 to easy-install.pth file
Installing tvnamer script to /Users/dbr/test/bin
Using /Library/Python/2.5/site-packages/tvnamer-0.5.1-py2.5.egg
Processing dependencies for tvnamer
Finished processing dependencies for tvnamer
(test)$ which tvnamer
/Users/dbr/test/bin/tvnamer
```
Is there something like this for RubyGems? | Neither sandbox, RVM, nor rbenv manage the versions of your app's gem dependencies. The tool for that is [bundler](http://gembundler.com/v1.2/rationale.html).
* use a [Gemfile](http://gembundler.com/v0.9/gemfile.html) as your application's dependency declaration
* use `bundle install` to install explicit versions of these dependencies into an isolated location
* use `bundle exec` to run your application |
Ruby equivalent of virtualenv? | 486,995 | 126 | 2009-01-28T09:24:14Z | 13,307,346 | 8 | 2012-11-09T11:33:15Z | [
"python",
"ruby",
"virtualenv"
] | Is there something similar to the Python utility [virtualenv](http://pypi.python.org/pypi/virtualenv)?
Basically it allows you to install Python packages into a sandboxed environment, so `easy_install django` doesn't go in your system-wide site-packages directory, it would go in the virtualenv-created directory.
For example:
```
$ virtualenv test
New python executable in test/bin/python
Installing setuptools...cd .........done.
$ cd test/
$ source bin/activate
(test)$ easy_install tvnamer
Searching for tvnamer
Best match: tvnamer 0.5.1
Processing tvnamer-0.5.1-py2.5.egg
Adding tvnamer 0.5.1 to easy-install.pth file
Installing tvnamer script to /Users/dbr/test/bin
Using /Library/Python/2.5/site-packages/tvnamer-0.5.1-py2.5.egg
Processing dependencies for tvnamer
Finished processing dependencies for tvnamer
(test)$ which tvnamer
/Users/dbr/test/bin/tvnamer
```
Is there something like this for RubyGems? | I'll mention the way I do this with Bundler (which I use with RVM - RVM to manage the rubies and a default set of global gems, Bundler to handle project specific gems)
```
bundler install --binstubs --path vendor
```
Running this command in the root of a project will install the gems listed in your Gemfile, put the libraries in `./vendor` and any executables in `./bin`; all `require`s (if you use `bundle console` or the Bundler requires) will then reference these executables and libraries.
Works for me. |
Ruby equivalent of virtualenv? | 486,995 | 126 | 2009-01-28T09:24:14Z | 17,413,767 | 12 | 2013-07-01T21:10:18Z | [
"python",
"ruby",
"virtualenv"
] | Is there something similar to the Python utility [virtualenv](http://pypi.python.org/pypi/virtualenv)?
Basically it allows you to install Python packages into a sandboxed environment, so `easy_install django` doesn't go in your system-wide site-packages directory, it would go in the virtualenv-created directory.
For example:
```
$ virtualenv test
New python executable in test/bin/python
Installing setuptools...cd .........done.
$ cd test/
$ source bin/activate
(test)$ easy_install tvnamer
Searching for tvnamer
Best match: tvnamer 0.5.1
Processing tvnamer-0.5.1-py2.5.egg
Adding tvnamer 0.5.1 to easy-install.pth file
Installing tvnamer script to /Users/dbr/test/bin
Using /Library/Python/2.5/site-packages/tvnamer-0.5.1-py2.5.egg
Processing dependencies for tvnamer
Finished processing dependencies for tvnamer
(test)$ which tvnamer
/Users/dbr/test/bin/tvnamer
```
Is there something like this for RubyGems? | If you only need to install gems as non-root, try setting the `GEM_HOME` environment variable. Then just run `gem`.
For example:
```
$ export GEM_HOME=$HOME/local/gems
$ gem install rhc
``` |
How do I specify input and output data types in python comments? | 487,184 | 11 | 2009-01-28T10:40:31Z | 487,203 | 13 | 2009-01-28T10:44:32Z | [
"python",
"documentation",
"comments",
"types"
] | I have seen several standards for writing comments about the kind of data a function expects and returns in Python. Is there a consensus on which one is best-practice?
Is the new functionality in <http://www.python.org/dev/peps/pep-3107/> something I should start using for this? | Function annotations are not tied to a specific use; they can be used for anything.
Tools can be written to extract information from the annotations and do anything you want with it, including checking types or generating documentation; Python itself, however, does nothing with the information. You could put annotations to a completely different purpose, e.g. to provide a function that will be called on the parameter, or to declare a string of possible return values.
Annotations can be any object:
```
def somefunc(param1: "string annotation",
             param2: 151631,
             param3: any_object) -> "some information here":
    pass
```
and you can retrieve the objects using:
```
print(somefunc.__annotations__)
{'param1': "string annotation",
'param2': 151631,
'param3': <object any_object>,
'return': "some information here"}
```
Use case suggestions provided by the PEP:
* Providing typing information
+ Type checking
+ Let IDEs show what types a function expects and returns
+ Function overloading / generic functions
+ Foreign-language bridges
+ Adaptation
+ Predicate logic functions
+ Database query mapping
+ RPC parameter marshaling
* Other information
+ Documentation for parameters and return values
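As a taste of the "type checking" use case above, a tiny decorator can pull annotations out of `__annotations__` and verify positional arguments when the annotation happens to be a type (a hypothetical sketch with made-up names, not production machinery):

```python
import functools

def typechecked(func):
    """Reject positional arguments whose annotation is a type they fail."""
    hints = func.__annotations__
    names = func.__code__.co_varnames[:func.__code__.co_argcount]

    @functools.wraps(func)
    def wrapper(*args):
        for name, value in zip(names, args):
            expected = hints.get(name)
            # Only enforce annotations that are actual types.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError("%s must be %s" % (name, expected.__name__))
        return func(*args)
    return wrapper

@typechecked
def scale(x: float, factor: int) -> float:
    return x * factor

print(scale(2.5, 3))  # 7.5
try:
    scale("oops", 3)
except TypeError as exc:
    print(exc)        # x must be float
```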
Since the function annotation syntax is still so new, it is not really used by any production tools yet.
I suggest using other methods to document that. I use epydoc to generate my documentation, and it can read parameter typing information from docstrings:
```
def x_intercept(m, b):
"""
Return the x intercept of the line M{y=m*x+b}. The X{x intercept}
of a line is the point at which it crosses the x axis (M{y=0}).
This function can be used in conjunction with L{z_transform} to
find an arbitrary function's zeros.
@type m: number
@param m: The slope of the line.
@type b: number
@param b: The y intercept of the line. The X{y intercept} of a
line is the point at which it crosses the y axis (M{x=0}).
@rtype: number
@return: the x intercept of the line M{y=m*x+b}.
"""
return -b/m
```
This example is from [epydoc's website](http://epydoc.sourceforge.net/). It can generate documentation in a variety of formats, and can generate good graphs from your classes and call profiles. |
How do I specify input and output data types in python comments? | 487,184 | 11 | 2009-01-28T10:40:31Z | 487,306 | 7 | 2009-01-28T11:30:54Z | [
"python",
"documentation",
"comments",
"types"
] | I have seen several standards for writing comments about the kind of data a function expects and returns in Python. Is there a consensus on which one is best-practice?
Is the new functionality in <http://www.python.org/dev/peps/pep-3107/> something I should start using for this? | If you use [epydoc](http://epydoc.sourceforge.net/) to produce API documentation, you have three choices.
* Epytext.
* ReStructuredText, RST.
* JavaDoc notation, which looks a bit like epytext.
I recommend RST because it works well with [sphinx](http://sphinx.pocoo.org/) for generating an overall documentation suite that includes API references. RST markup is defined [here](http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html). The various epydoc fields you can specify are defined [here](http://epydoc.sourceforge.net/manual-fields.html).
Example.
```
def someFunction( arg1, arg2 ):
"""Returns the average of *two* (and only two) arguments.
:param arg1: a numeric value
:type arg1: **any** numeric type
:param arg2: another numeric value
:type arg2: **any** numeric type
:return: mid-point (or arithmetic mean) between two values
:rtype: numeric type compatible with the args.
"""
return (arg1+arg2)/2
``` |
Reducing Django Memory Usage. Low hanging fruit? | 487,224 | 127 | 2009-01-28T10:52:19Z | 487,261 | 44 | 2009-01-28T11:11:14Z | [
"python",
"django",
"profiling",
"memory-management",
"mod-python"
] | My memory usage increases over time and restarting Django is not kind to users.
I am unsure how to go about profiling the memory usage but some tips on how to start measuring would be useful.
I have a feeling that there are some simple steps that could produce big gains. Ensuring 'debug' is set to 'False' is an obvious biggie.
Can anyone suggest others? How much improvement would caching on low-traffic sites?
In this case I'm running under Apache 2.x with mod\_python. I've heard mod\_wsgi is a bit leaner but it would be tricky to switch at this stage unless I know the gains would be significant.
Edit: Thanks for the tips so far. Any suggestions how to discover what's using up the memory? Are there any guides to Python memory profiling?
Also as mentioned there's a few things that will make it tricky to switch to mod\_wsgi so I'd like to have some idea of the gains I could expect before ploughing forwards in that direction.
**Edit:** Carl posted a slightly more detailed reply here that is worth reading: <http://stackoverflow.com/questions/488864/django-deployment-cutting-apaches-overhead>
**Edit:** [Graham Dumpleton's article](http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usage.html) is the best I've found on the MPM and mod\_wsgi related stuff. I am rather disappointed that no-one could provide any info on debugging the memory usage in the app itself though.
**Final Edit:** Well I have been discussing this with Webfaction to see if they could assist with recompiling Apache and this is their word on the matter:
> "I really don't think that you will get much of a benefit by switching to an MPM Worker + mod\_wsgi setup. I estimate that you might be able to save around 20MB, but probably not much more than that."
So! This brings me back to my original question (which I am still none the wiser about). How does one go about identifying where the problems lies? It's a well known maxim that you don't optimize without testing to see where you need to optimize but there is very little in the way of tutorials on measuring Python memory usage and none at all specific to Django.
Thanks for everyone's assistance but I think this question is still open!
**Another final edit ;-)**
I asked this on the django-users list and got some [*very* helpful replies](http://groups.google.com/group/django-users/browse%5Fthread/thread/1d56d48fc192ceed?fwc=1&pli=1)
**Honestly the last update ever!**
This was just released. Could be the best solution yet: [Profiling Django object size and memory usage with Pympler](http://www.rkblog.rk.edu.pl/w/p/profiling-django-object-size-and-memory-usage-pympler/) | Make sure you are not keeping global references to data. That prevents the python garbage collector from releasing the memory.
Don't use `mod_python`. It loads an interpreter inside apache. If you need to use apache, use [`mod_wsgi`](http://code.google.com/p/modwsgi/) instead. It is not tricky to switch. It is very easy. `mod_wsgi` is way easier to [configure for django](http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango) than brain-dead `mod_python`.
If you can remove Apache from your requirements, that would be even better for your memory usage. [`spawning`](http://pypi.python.org/pypi/Spawning/0.7) seems to be the new fast, scalable way to run Python web applications.
**EDIT**: I don't see how switching to mod\_wsgi could be "*tricky*". It should be a very easy task. Please elaborate on the problem you are having with the switch. |
Reducing Django Memory Usage. Low hanging fruit? | 487,224 | 127 | 2009-01-28T10:52:19Z | 501,501 | 25 | 2009-02-01T20:18:57Z | [
"python",
"django",
"profiling",
"memory-management",
"mod-python"
] | My memory usage increases over time and restarting Django is not kind to users.
I am unsure how to go about profiling the memory usage but some tips on how to start measuring would be useful.
I have a feeling that there are some simple steps that could produce big gains. Ensuring 'debug' is set to 'False' is an obvious biggie.
Can anyone suggest others? How much improvement would caching on low-traffic sites?
In this case I'm running under Apache 2.x with mod\_python. I've heard mod\_wsgi is a bit leaner but it would be tricky to switch at this stage unless I know the gains would be significant.
Edit: Thanks for the tips so far. Any suggestions how to discover what's using up the memory? Are there any guides to Python memory profiling?
Also as mentioned there's a few things that will make it tricky to switch to mod\_wsgi so I'd like to have some idea of the gains I could expect before ploughing forwards in that direction.
**Edit:** Carl posted a slightly more detailed reply here that is worth reading: <http://stackoverflow.com/questions/488864/django-deployment-cutting-apaches-overhead>
**Edit:** [Graham Dumpleton's article](http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usage.html) is the best I've found on the MPM and mod\_wsgi related stuff. I am rather disappointed that no-one could provide any info on debugging the memory usage in the app itself though.
**Final Edit:** Well I have been discussing this with Webfaction to see if they could assist with recompiling Apache and this is their word on the matter:
> "I really don't think that you will get much of a benefit by switching to an MPM Worker + mod\_wsgi setup. I estimate that you might be able to save around 20MB, but probably not much more than that."
So! This brings me back to my original question (which I am still none the wiser about). How does one go about identifying where the problems lies? It's a well known maxim that you don't optimize without testing to see where you need to optimize but there is very little in the way of tutorials on measuring Python memory usage and none at all specific to Django.
Thanks for everyone's assistance but I think this question is still open!
**Another final edit ;-)**
I asked this on the django-users list and got some [*very* helpful replies](http://groups.google.com/group/django-users/browse%5Fthread/thread/1d56d48fc192ceed?fwc=1&pli=1)
**Honestly the last update ever!**
This was just released. Could be the best solution yet: [Profiling Django object size and memory usage with Pympler](http://www.rkblog.rk.edu.pl/w/p/profiling-django-object-size-and-memory-usage-pympler/) | If you are running under mod\_wsgi, and presumably spawning since it is WSGI compliant, you can use [Dozer](http://pypi.python.org/pypi/Dozer) to look at your memory usage.
Under mod\_wsgi just add this at the bottom of your WSGI script:
```
from dozer import Dozer
application = Dozer(application)
```
Then point your browser at <http://domain/_dozer/index> to see a list of all your memory allocations.
I'll also just add my voice of support for mod\_wsgi. It makes a world of difference in terms of performance and memory usage over mod\_python. Graham Dumpleton's support for mod\_wsgi is outstanding, both in terms of active development and in helping people on the mailing list to optimize their installations. David Cramer at [curse.com](http://www.curse.com/) has posted some charts (which I can't seem to find now unfortunately) showing the drastic reduction in cpu and memory usage after they switched to mod\_wsgi on that high traffic site. Several of the django devs have switched. Seriously, it's a no-brainer :) |
Reducing Django Memory Usage. Low hanging fruit? | 487,224 | 127 | 2009-01-28T10:52:19Z | 521,918 | 14 | 2009-02-06T19:55:59Z | [
"python",
"django",
"profiling",
"memory-management",
"mod-python"
] | My memory usage increases over time and restarting Django is not kind to users.
I am unsure how to go about profiling the memory usage but some tips on how to start measuring would be useful.
I have a feeling that there are some simple steps that could produce big gains. Ensuring 'debug' is set to 'False' is an obvious biggie.
Can anyone suggest others? How much improvement would caching give on low-traffic sites?
In this case I'm running under Apache 2.x with mod\_python. I've heard mod\_wsgi is a bit leaner but it would be tricky to switch at this stage unless I know the gains would be significant.
Edit: Thanks for the tips so far. Any suggestions how to discover what's using up the memory? Are there any guides to Python memory profiling?
Also as mentioned there are a few things that will make it tricky to switch to mod\_wsgi so I'd like to have some idea of the gains I could expect before ploughing forwards in that direction.
**Edit:** Carl posted a slightly more detailed reply here that is worth reading: <http://stackoverflow.com/questions/488864/django-deployment-cutting-apaches-overhead>
**Edit:** [Graham Dumpleton's article](http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usage.html) is the best I've found on the MPM and mod\_wsgi related stuff. I am rather disappointed that no-one could provide any info on debugging the memory usage in the app itself though.
**Final Edit:** Well I have been discussing this with Webfaction to see if they could assist with recompiling Apache and this is their word on the matter:
> "I really don't think that you will get much of a benefit by switching to an MPM Worker + mod\_wsgi setup. I estimate that you might be able to save around 20MB, but probably not much more than that."
So! This brings me back to my original question (which I am still none the wiser about). How does one go about identifying where the problem lies? It's a well-known maxim that you don't optimize without testing to see where you need to optimize, but there is very little in the way of tutorials on measuring Python memory usage and none at all specific to Django.
Thanks for everyone's assistance but I think this question is still open!
**Another final edit ;-)**
I asked this on the django-users list and got some [*very* helpful replies](http://groups.google.com/group/django-users/browse%5Fthread/thread/1d56d48fc192ceed?fwc=1&pli=1)
**Honestly the last update ever!**
This was just released. Could be the best solution yet: [Profiling Django object size and memory usage with Pympler](http://www.rkblog.rk.edu.pl/w/p/profiling-django-object-size-and-memory-usage-pympler/) | These are the Python memory profiler solutions I'm aware of (not Django related):
* [Heapy](http://guppy-pe.sourceforge.net/#Heapy)
* [pysizer](http://pysizer.8325.org/) (discontinued)
* [Python Memory Validator](http://www.softwareverify.com/python/memory/index.html) (commercial)
* [Pympler](http://pypi.python.org/pypi/Pympler)
Disclaimer: I have a stake in the latter.
The individual project's documentation should give you an idea of how to use these tools to analyze memory behavior of Python applications.
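(Worth noting, though it postdates this answer: on modern Python, 3.4 and later, the standard library also ships `tracemalloc`, which can attribute allocations to the source lines that made them — a quick sketch, independent of Django:)

```python
import tracemalloc

tracemalloc.start()

# Allocate something noticeable so it shows up in the statistics
data = ['x' * 100 for n in range(10000)]

snapshot = tracemalloc.take_snapshot()

# The top entries point at the source lines doing the most allocating
for stat in snapshot.statistics('lineno')[:3]:
    print(stat)

tracemalloc.stop()
```

Running this periodically in a long-lived process (and diffing snapshots with `snapshot.compare_to`) is one way to find out which code paths keep growing.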
The following is a nice "war story" that also gives some helpful pointers:
* [Reducing the footprint of python applications](http://wingolog.org/archives/2007/11/27/reducing-the-footprint-of-python-applications) |
Client Server programming in python? | 487,229 | 8 | 2009-01-28T10:55:51Z | 487,281 | 18 | 2009-01-28T11:19:49Z | [
"python",
"multithreading",
"client",
"sockets"
Here is source code for a multithreaded server and client in python.
In the code, the client and server close the connection after the job is finished.
I want to keep the connections alive and send more data over the same connections to **avoid the overhead of closing and opening sockets every time**.
Following code is from : <http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/1/>
```
import pickle
import socket
import threading

# We'll pickle a list of numbers:
someList = [ 1, 2, 7, 9, 0 ]
pickledList = pickle.dumps ( someList )

# Our thread class:
class ClientThread ( threading.Thread ):

    # Override Thread's __init__ method to accept the parameters needed:
    def __init__ ( self, channel, details ):
        self.channel = channel
        self.details = details
        threading.Thread.__init__ ( self )

    def run ( self ):
        print 'Received connection:', self.details [ 0 ]
        self.channel.send ( pickledList )
        for x in xrange ( 10 ):
            print self.channel.recv ( 1024 )
        self.channel.close()
        print 'Closed connection:', self.details [ 0 ]

# Set up the server:
server = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
server.bind ( ( '', 2727 ) )
server.listen ( 5 )

# Have the server serve "forever":
while True:
    channel, details = server.accept()
    ClientThread ( channel, details ).start()
```
---
```
import pickle
import socket
import threading

# Here's our thread:
class ConnectionThread ( threading.Thread ):

    def run ( self ):

        # Connect to the server:
        client = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
        client.connect ( ( 'localhost', 2727 ) )

        # Retrieve and unpickle the list object:
        print pickle.loads ( client.recv ( 1024 ) )

        # Send some messages:
        for x in xrange ( 10 ):
            client.send ( 'Hey. ' + str ( x ) + '\n' )

        # Close the connection
        client.close()

# Let's spawn a few threads:
for x in xrange ( 5 ):
    ConnectionThread().start()
``` | Spawning a new thread for every connection is a **really bad** design choice.
What happens if you get hit by a lot of connections?
In fact, using threads to wait for network IO is not worth it. Your program gets really complex and you get absolutely **no benefit** since waiting for network in threads won't make you **wait faster**. You only lose by using threads in this case.
The following text is from python documentation:
> There are only two ways to have a
> program on a single processor do "more
> than one thing at a time."
> Multi-threaded programming is the
> simplest and most popular way to do
> it, but there is another very
> different technique, that lets you
> have nearly all the advantages of
> multi-threading, without actually
> using multiple threads. It's really
> only practical if your program is
> largely I/O bound. If your program is
> processor bound, then pre-emptive
> scheduled threads are probably what
> you really need. Network servers are
> rarely processor bound, however.
And in the processor-bound case, you could always leave another process/thread to do the processor part. Continuing:
> If your operating system supports the
> select system call in its I/O library
> (and nearly all do), then you can use
> it to juggle multiple communication
> channels at once; doing other work
> while your I/O is taking place in the
> "background." Although this strategy
> can seem strange and complex,
> especially at first, it is in many
> ways easier to understand and control
> than multi-threaded programming.
So instead of using threads, use non-blocking input/output: collect the sockets in a list and use an event loop with [select.select](http://docs.python.org/library/select.html#select.select) to know which socket has data to read. Do that in a single thread.
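A minimal single-threaded sketch of that idea (illustrative only — the port and buffer size come from the question's code, the rest is assumed; `serve()` runs forever, so call it from your main program):

```python
import select
import socket

def serve():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('', 2727))
    server.listen(5)

    sockets = [server]  # every socket we are watching
    while True:
        # Block until at least one socket is readable (data or a new connection)
        readable, _, _ = select.select(sockets, [], [])
        for sock in readable:
            if sock is server:
                channel, details = sock.accept()
                sockets.append(channel)   # start watching the new client
            else:
                data = sock.recv(1024)
                if data:
                    sock.send(data)       # echo it back over the live connection
                else:
                    sockets.remove(sock)  # client closed the connection
                    sock.close()
```

Connections stay open until the client closes them, so you can send as much data as you like over each one — exactly what the question asked for — without a thread per client.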
You could choose a python asynchronous networking framework like [twisted](http://twistedmatrix.com/) to do that for you. That will save you a lot of headaches. Twisted's code has been improved for years, and covers some corner cases you'll take time to master.
**EDIT**: Any existing async IO library (like Twisted) is just Python code. You could have written it yourself, but it has already been written for you. I don't see why you would skip those libraries and write your own **worse** code instead, especially since you are a beginner. Networking IO is hard to get right.
Is there a standard way to list names of Python modules in a package? | 487,971 | 57 | 2009-01-28T15:11:32Z | 489,489 | 11 | 2009-01-28T21:35:25Z | [
"python",
"module",
"package"
] | Is there a straightforward way to list the names of all modules in a package, without using `__all__`?
For example, given this package:
```
/testpkg
/testpkg/__init__.py
/testpkg/modulea.py
/testpkg/moduleb.py
```
I'm wondering if there is a standard or built-in way to do something like this:
```
>>> package_contents("testpkg")
['modulea', 'moduleb']
```
The manual approach would be to iterate through the module search paths in order to find the package's directory. One could then list all the files in that directory, filter out the uniquely-named py/pyc/pyo files, strip the extensions, and return that list. But this seems like a fair amount of work for something the module import mechanism is already doing internally. Is that functionality exposed anywhere? | ```
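For reference, the manual approach described above can be sketched in a few lines (illustrative only; the answers give more robust alternatives):

```python
import os
import sys

def package_contents(package_name):
    # Walk the module search path looking for the package's directory
    for path in sys.path:
        pkg_dir = os.path.join(path, package_name)
        if os.path.isdir(pkg_dir):
            break
    else:
        raise ImportError('Package not found: %r' % package_name)
    # Collect unique module names from the py/pyc/pyo files inside it
    modules = set()
    for filename in os.listdir(pkg_dir):
        name, ext = os.path.splitext(filename)
        if ext in ('.py', '.pyc', '.pyo') and name != '__init__':
            modules.add(name)
    return sorted(modules)
```

Note this ignores subpackages and zip imports, which is part of why a built-in mechanism would be preferable.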
import module
help(module)
``` |
Is there a standard way to list names of Python modules in a package? | 487,971 | 57 | 2009-01-28T15:11:32Z | 489,649 | 14 | 2009-01-28T22:14:44Z | [
"python",
"module",
"package"
] | Is there a straightforward way to list the names of all modules in a package, without using `__all__`?
For example, given this package:
```
/testpkg
/testpkg/__init__.py
/testpkg/modulea.py
/testpkg/moduleb.py
```
I'm wondering if there is a standard or built-in way to do something like this:
```
>>> package_contents("testpkg")
['modulea', 'moduleb']
```
The manual approach would be to iterate through the module search paths in order to find the package's directory. One could then list all the files in that directory, filter out the uniquely-named py/pyc/pyo files, strip the extensions, and return that list. But this seems like a fair amount of work for something the module import mechanism is already doing internally. Is that functionality exposed anywhere? | Maybe this will do what you're looking for?
```
import imp
import os

MODULE_EXTENSIONS = ('.py', '.pyc', '.pyo')

def package_contents(package_name):
    file, pathname, description = imp.find_module(package_name)
    if file:
        raise ImportError('Not a package: %r' % package_name)
    # Use a set because some may be both source and compiled.
    return set([os.path.splitext(module)[0]
                for module in os.listdir(pathname)
                if module.endswith(MODULE_EXTENSIONS)])
``` |
Is there a standard way to list names of Python modules in a package? | 487,971 | 57 | 2009-01-28T15:11:32Z | 1,310,912 | 124 | 2009-08-21T09:21:02Z | [
"python",
"module",
"package"
] | Is there a straightforward way to list the names of all modules in a package, without using `__all__`?
For example, given this package:
```
/testpkg
/testpkg/__init__.py
/testpkg/modulea.py
/testpkg/moduleb.py
```
I'm wondering if there is a standard or built-in way to do something like this:
```
>>> package_contents("testpkg")
['modulea', 'moduleb']
```
The manual approach would be to iterate through the module search paths in order to find the package's directory. One could then list all the files in that directory, filter out the uniquely-named py/pyc/pyo files, strip the extensions, and return that list. But this seems like a fair amount of work for something the module import mechanism is already doing internally. Is that functionality exposed anywhere? | Using [python2.3 and above](http://docs.python.org/library/pkgutil.html), you could also use the `pkgutil` module:
```
>>> import pkgutil
>>> [name for _, name, _ in pkgutil.iter_modules(['testpkg'])]
['modulea', 'moduleb']
```
**EDIT:** Note that the parameter is not a list of modules, but a list of paths, so you might want to do something like this:
```
>>> import os.path, pkgutil
>>> import testpkg
>>> pkgpath = os.path.dirname(testpkg.__file__)
>>> print [name for _, name, _ in pkgutil.iter_modules([pkgpath])]
``` |
Django: ModelMultipleChoiceField doesn't select initial choices | 488,036 | 18 | 2009-01-28T15:30:02Z | 488,113 | 8 | 2009-01-28T15:45:06Z | [
"python",
"django",
"django-models",
"django-forms"
] | ModelMultipleChoiceField doesn't select initial choices and I can't make the following fix (link below) work in my example:
<http://code.djangoproject.com/ticket/5247#comment:6>
My models and form:
```
class Company(models.Model):
    company_name = models.CharField(max_length=200)

class Contact(models.Model):
    company = models.ForeignKey(Company)
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)

class Action(models.Model):
    company = models.ForeignKey(Company, blank=True, null=True)
    from_company = models.ManyToManyField(Contact, verbose_name='Participant(s) from "Company"', blank=True, null=True)

class Action_Form(ModelForm):
    from_company = forms.ModelMultipleChoiceField(queryset=Contact.objects.none(), widget=forms.CheckboxSelectMultiple())

    class Meta:
        model = Action
```
What I do and the results:
```
>>> contacts_from_company = Contact.objects.filter(company__exact=1) # "1" for test, otherwise a variable
>>> form = Action_Form(initial={'from_company': [o.pk for o in contacts_from_company]}) # as suggested in the fix
>>> print form['from_company']
<ul>
</ul>
>>> print contacts_from_company
[<Contact: test person>, <Contact: another person>]
>>> form2 = Action_Form(initial={'from_company': contacts_from_company})
>>> print form2['from_company']
<ul>
</ul>
>>> form3 = Action_Form(initial={'from_company': Contact.objects.all()})
>>> print form3['from_company']
<ul>
</ul>
```
The way I was hoping it would work:
1. My view gets "company" from request.GET
2. It then filters all "contacts" for that "company"
3. Finally, it creates a form and passes those "contacts" as "initial={...}"
**Two questions:**
**1. [not answered yet]** How can I make ModelMultipleChoiceField take those "initial" values?
**2. [answered]** As an alternative, can I pass a variable to Action\_Form(ModelForm) so that in my ModelForm I could have:
```
from_company = forms.ModelMultipleChoiceField(queryset=Contact.objects.filter(company__exact=some_id) # where some_id comes from a view
``` | You will need to add an `__init__` method to `Action_Form` to set your initial values, remembering to call `__init__` on the base `ModelForm` class via **super**.
```
class Action_Form(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(Action_Form, self).__init__(*args, **kwargs)
        self.fields['from_company'].queryset = Contact.objects.filter(...
```
If you plan to pass your filter params as keyword args to `Action_Form`, you'll need to remove them prior invoking super:
```
myfilter = kwargs['myfilter']
del kwargs['myfilter']
```
or, probably better:
```
myfilter = kwargs.pop('myfilter')
```
For more information, here's another link referring to [Dynamic ModelForms in Django](http://www.rossp.org/blog/2008/dec/15/modelforms/). |
Django: ModelMultipleChoiceField doesn't select initial choices | 488,036 | 18 | 2009-01-28T15:30:02Z | 1,530,632 | 18 | 2009-10-07T10:01:37Z | [
"python",
"django",
"django-models",
"django-forms"
] | ModelMultipleChoiceField doesn't select initial choices and I can't make the following fix (link below) work in my example:
<http://code.djangoproject.com/ticket/5247#comment:6>
My models and form:
```
class Company(models.Model):
    company_name = models.CharField(max_length=200)

class Contact(models.Model):
    company = models.ForeignKey(Company)
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)

class Action(models.Model):
    company = models.ForeignKey(Company, blank=True, null=True)
    from_company = models.ManyToManyField(Contact, verbose_name='Participant(s) from "Company"', blank=True, null=True)

class Action_Form(ModelForm):
    from_company = forms.ModelMultipleChoiceField(queryset=Contact.objects.none(), widget=forms.CheckboxSelectMultiple())

    class Meta:
        model = Action
```
What I do and the results:
```
>>> contacts_from_company = Contact.objects.filter(company__exact=1) # "1" for test, otherwise a variable
>>> form = Action_Form(initial={'from_company': [o.pk for o in contacts_from_company]}) # as suggested in the fix
>>> print form['from_company']
<ul>
</ul>
>>> print contacts_from_company
[<Contact: test person>, <Contact: another person>]
>>> form2 = Action_Form(initial={'from_company': contacts_from_company})
>>> print form2['from_company']
<ul>
</ul>
>>> form3 = Action_Form(initial={'from_company': Contact.objects.all()})
>>> print form3['from_company']
<ul>
</ul>
```
The way I was hoping it would work:
1. My view gets "company" from request.GET
2. It then filters all "contacts" for that "company"
3. Finally, it creates a form and passes those "contacts" as "initial={...}"
**Two questions:**
**1. [not answered yet]** How can I make ModelMultipleChoiceField take those "initial" values?
**2. [answered]** As an alternative, can I pass a variable to Action\_Form(ModelForm) so that in my ModelForm I could have:
```
from_company = forms.ModelMultipleChoiceField(queryset=Contact.objects.filter(company__exact=some_id) # where some_id comes from a view
I'm replying to 1):
> 1. How can I make ModelMultipleChoiceField take those "initial" values?
This could be done in your `Action_Form` `__init__` method using ModelMultipleChoiceField `initial` attribute.
As it says in the Django source code (*db/models/fields/related.py*)
in `def formfield(self, **kwargs)`:
```
# If initial is passed in, it's a list of related objects, but the
# MultipleChoiceField takes a list of IDs.
```
So you need to give it a list of IDs:
```
class Action_Form(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(Action_Form, self).__init__(*args, **kwargs)
        self.fields['from_company'].initial = [c.pk for c in Contact.objects.filter()]
``` |
Django: ModelMultipleChoiceField doesn't select initial choices | 488,036 | 18 | 2009-01-28T15:30:02Z | 3,915,048 | 8 | 2010-10-12T13:13:25Z | [
"python",
"django",
"django-models",
"django-forms"
] | ModelMultipleChoiceField doesn't select initial choices and I can't make the following fix (link below) work in my example:
<http://code.djangoproject.com/ticket/5247#comment:6>
My models and form:
```
class Company(models.Model):
    company_name = models.CharField(max_length=200)

class Contact(models.Model):
    company = models.ForeignKey(Company)
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)

class Action(models.Model):
    company = models.ForeignKey(Company, blank=True, null=True)
    from_company = models.ManyToManyField(Contact, verbose_name='Participant(s) from "Company"', blank=True, null=True)

class Action_Form(ModelForm):
    from_company = forms.ModelMultipleChoiceField(queryset=Contact.objects.none(), widget=forms.CheckboxSelectMultiple())

    class Meta:
        model = Action
```
What I do and the results:
```
>>> contacts_from_company = Contact.objects.filter(company__exact=1) # "1" for test, otherwise a variable
>>> form = Action_Form(initial={'from_company': [o.pk for o in contacts_from_company]}) # as suggested in the fix
>>> print form['from_company']
<ul>
</ul>
>>> print contacts_from_company
[<Contact: test person>, <Contact: another person>]
>>> form2 = Action_Form(initial={'from_company': contacts_from_company})
>>> print form2['from_company']
<ul>
</ul>
>>> form3 = Action_Form(initial={'from_company': Contact.objects.all()})
>>> print form3['from_company']
<ul>
</ul>
```
The way I was hoping it would work:
1. My view gets "company" from request.GET
2. It then filters all "contacts" for that "company"
3. Finally, it creates a form and passes those "contacts" as "initial={...}"
**Two questions:**
**1. [not answered yet]** How can I make ModelMultipleChoiceField take those "initial" values?
**2. [answered]** As an alternative, can I pass a variable to Action\_Form(ModelForm) so that in my ModelForm I could have:
```
from_company = forms.ModelMultipleChoiceField(queryset=Contact.objects.filter(company__exact=some_id) # where some_id comes from a view
If the previous answer wasn't straightforward enough, I'll try to answer 1) again:
> 1. How can I make ModelMultipleChoiceField take those "initial" values?
You can leave `Action_Form` as it was in the original question, and just use this to render exactly what you want:
```
>>> form4 = Action_Form(initial={'from_company': Contact.objects.all().values_list('id',flat=True)})
>>> print form4['from_company']
``` |
How do I make environment variable changes stick in Python? | 488,366 | 10 | 2009-01-28T16:36:47Z | 488,713 | 7 | 2009-01-28T18:16:58Z | [
"python",
"environment-variables"
] | From what I've read, any changes to the environment variables in a Python instance are only available within that instance, and disappear once the instance is closed. Is there any way to make them stick by committing them to the system?
The reason I need to do this is because at the studio where I work, tools like Maya rely heavily on environment variables to configure paths across multiple platforms.
My test code is
```
import os
os.environ['FAKE'] = 'C:\\'
```
Opening another instance of Python and requesting `os.environ['FAKE']` yields a `KeyError`.
**NOTE:** Portability will be an issue, but the small API I'm writing will be able to check OS version and trigger different commands if necessary.
That said, I've gone the route of using the Windows registry technique and will simply write alternative methods that will call shell scripts on other platforms as they become requirements. | > make them stick by committing them to
> the system?
I think you are a bit confused here. There is no 'system' environment. Each process has its own environment as part of its memory. A process can only change its own environment. A process can set the initial environment for processes it creates.
If you really do think you need to set environment variables for the system, you will need to look at changing them in the location they get initially loaded from, like the registry on Windows or your shell configuration file on Linux.
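That last point is easy to demonstrate: a change to `os.environ` is invisible to already-running processes, but it *is* inherited by any child process spawned afterwards (sketch assumes Python 2.7+ for `subprocess.check_output`):

```python
import os
import subprocess
import sys

os.environ['FAKE'] = 'C:\\'

# A child interpreter spawned from here inherits the modified environment
code = "import os; print(os.environ.get('FAKE', 'missing'))"
output = subprocess.check_output([sys.executable, '-c', code])
print(output.strip())
```

So launching Maya (or any tool) from a Python wrapper that first sets the variables would work without touching the registry at all.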