| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 31,930,795 | 38 | 2015-08-10T23:28:09Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | > # What are metaclasses? What do you use them for?
A class is to an instance as a metaclass is to a class.
Put another way, a class is an instance of a metaclass.
Put a third way, a metaclass is a class's class.
Still hopelessly confused? So was I, until I learned the following and saw how one can actually use metaclasses:
# You use a metaclass every time you create a class:
When you create a class definition, for example, like this,
```
class Foo(object): 'demo'
```
it's functionally the same as calling `type` with the appropriate arguments and assigning the result to a variable of that name:
```
name = 'Foo'
bases = (object,)
namespace = {'__doc__': 'demo'}
Foo = type(name, bases, namespace)
```
Note, some things automatically get added to the `__dict__`, i.e., the namespace:
```
>>> Foo.__dict__
dict_proxy({'__dict__': <attribute '__dict__' of 'Foo' objects>, '__module__': '__main__', '__weakref__': <attribute '__weakref__' of 'Foo' objects>, '__doc__': 'demo'})
```
The *metaclass* of the object we created, in both cases, is `type`.
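This relationship can be checked directly; a small sketch restating the example above:

```python
class Foo(object):
    'demo'

# A class is an instance of its metaclass, just as an
# instance is an instance of its class.
assert isinstance(Foo(), Foo)
assert isinstance(Foo, type)
assert type(Foo) is type
print(type(Foo))  # -> <class 'type'>
```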
# We can extend `type` just like any other class definition:
Here's the default `__repr__` of classes:
```
>>> Foo
<class '__main__.Foo'>
```
One of the most valuable things we can do when writing a Python object is to provide it with a good `__repr__`. When we call `help(repr)` we learn that a good test for a `__repr__` is `obj == eval(repr(obj))`, which also requires implementing equality. The following simple implementation of `__repr__` and `__eq__` for instances of our type class demonstrates this and may improve on the default `__repr__` of classes:
```
class Type(type):
def __repr__(cls):
"""
>>> Baz
Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None})
>>> eval(repr(Baz))
Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None})
"""
metaname = type(cls).__name__
name = cls.__name__
parents = ', '.join(b.__name__ for b in cls.__bases__)
if parents:
parents += ','
namespace = ', '.join(': '.join(
(repr(k), repr(v) if not isinstance(v, type) else v.__name__))
for k, v in cls.__dict__.items())
return '{0}(\'{1}\', ({2}), {{{3}}})'.format(metaname, name, parents, namespace)
def __eq__(cls, other):
"""
>>> Baz == eval(repr(Baz))
True
"""
return (cls.__name__, cls.__bases__, cls.__dict__) == (
other.__name__, other.__bases__, other.__dict__)
```
So now when we create an object with this metaclass, the `__repr__` echoed on the command line provides a much less ugly sight than the default:
```
>>> class Bar(object): pass
>>> Baz = Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None})
>>> Baz
Type('Baz', (Foo, Bar,), {'__module__': '__main__', '__doc__': None})
```
With a `__repr__` defined for the class instance, we have a stronger ability to debug our code.
# An expected usage: `__prepare__` a namespace
If, for example, we want to know in what order a class's methods are created, we could provide an ordered dict as the namespace of the class. We would do this with `__prepare__`, which [returns the namespace dict for the class if it is implemented in Python 3](https://docs.python.org/3/reference/datamodel.html#preparing-the-class-namespace):
```
from collections import OrderedDict
class OrderedType(Type):
@classmethod
def __prepare__(metacls, name, bases, **kwargs):
return OrderedDict()
def __new__(cls, name, bases, namespace, **kwargs):
result = Type.__new__(cls, name, bases, dict(namespace))
result.members = tuple(namespace)
return result
```
And usage:
```
class OrderedMethodsObject(object, metaclass=OrderedType):
def method1(self): pass
def method2(self): pass
def method3(self): pass
def method4(self): pass
```
And now we have a record of the order in which these methods (and other class attributes) were created:
```
>>> OrderedMethodsObject.members
('__module__', '__qualname__', 'method1', 'method2', 'method3', 'method4')
```
Note, this example was adapted from the [docs](https://docs.python.org/3/reference/datamodel.html#metaclass-example).
So what we did was instantiate a metaclass by creating a class. We can also treat the metaclass as we would any other class. It has a method resolution order:
```
>>> import inspect
>>> inspect.getmro(OrderedType)
(<class '__main__.OrderedType'>, <class '__main__.Type'>, <class 'type'>, <class 'object'>)
```
And it has approximately the correct `repr` (which we can no longer eval unless we can find a way to represent our functions):
```
>>> OrderedMethodsObject
OrderedType('OrderedMethodsObject', (object,), {'method1': <function OrderedMethodsObject.method1 at 0x0000000002DB01E0>, 'members': ('__module__', '__qualname__', 'method1', 'method2', 'method3', 'method4'), 'method3': <function OrderedMethodsObject.method3 at 0x0000000002DB02F0>, 'method2': <function OrderedMethodsObject.method2 at 0x0000000002DB0268>, '__module__': '__main__', '__weakref__': <attribute '__weakref__' of 'OrderedMethodsObject' objects>, '__doc__': None, '__dict__': <attribute '__dict__' of 'OrderedMethodsObject' objects>, 'method4': <function OrderedMethodsObject.method4 at 0x0000000002DB0378>})
``` |
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 35,732,111 | 23 | 2016-03-01T19:48:34Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | **Python 3 update**
There are (at this point) two key methods in a metaclass:
* `__prepare__`, and
* `__new__`
`__prepare__` lets you supply a custom mapping (such as an `OrderedDict`) to be used as the namespace while the class is being created. You must return an instance of whatever namespace you choose. If you don't implement `__prepare__`, a normal `dict` is used.
`__new__` is responsible for the actual creation/modification of the final class.
A bare-bones, do-nothing-extra metaclass would look like:
```
class Meta(type):
    @classmethod   # __prepare__ is looked up on the metaclass itself, so it must be a classmethod
    def __prepare__(metacls, name, bases):
        return dict()
    def __new__(metacls, name, bases, clsdict):
        return super().__new__(metacls, name, bases, clsdict)
```
A simple example:
Say you want some simple validation code to run on your attributes -- like it must always be an `int` or a `str`. Without a metaclass, your class would look something like:
```
class Person:
weight = ValidateType('weight', int)
age = ValidateType('age', int)
name = ValidateType('name', str)
```
As you can see, you have to repeat the name of the attribute twice. This invites typos, along with the irritating bugs they cause.
A simple metaclass can address that problem:
```
class Person(metaclass=Validator):
weight = ValidateType(int)
age = ValidateType(int)
name = ValidateType(str)
```
This is what the metaclass would look like (not using `__prepare__` since it is not needed):
```
class Validator(type):
def __new__(metacls, cls, bases, clsdict):
# search clsdict looking for ValidateType descriptors
for name, attr in clsdict.items():
if isinstance(attr, ValidateType):
attr.name = name
attr.attr = '_' + name
# create final class and return it
return super().__new__(metacls, cls, bases, clsdict)
```
A sample run of:
```
p = Person()
p.weight = 9
print(p.weight)
p.weight = '9'
```
produces:
```
9
Traceback (most recent call last):
File "simple_meta.py", line 36, in <module>
p.weight = '9'
File "simple_meta.py", line 24, in __set__
(self.name, self.type, value))
TypeError: weight must be of type(s) <class 'int'> (got '9')
```
---
**Note**: This example is simple enough that it could also have been accomplished with a class decorator, but presumably an actual metaclass would be doing much more.
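For comparison, a hedged sketch of what that class-decorator version might look like — the `validator` decorator is illustrative, and `ValidateType` here is a trimmed-down copy of the descriptor listed below:

```python
class ValidateType:
    # trimmed-down version of the descriptor listed below
    def __init__(self, type):
        self.name = None   # filled in by the decorator
        self.attr = None
        self.type = type
    def __get__(self, inst, cls):
        if inst is None:
            return self
        return inst.__dict__[self.attr]
    def __set__(self, inst, value):
        if not isinstance(value, self.type):
            raise TypeError('%s must be of type(s) %s (got %r)' %
                            (self.name, self.type, value))
        inst.__dict__[self.attr] = value

def validator(cls):
    # the same name-injection the metaclass performs, done after class creation
    for name, attr in vars(cls).items():
        if isinstance(attr, ValidateType):
            attr.name = name
            attr.attr = '_' + name
    return cls

@validator
class Person:
    weight = ValidateType(int)
    name = ValidateType(str)

p = Person()
p.weight = 9
print(p.weight)   # -> 9
try:
    p.weight = '9'
except TypeError as e:
    print(e)      # -> weight must be of type(s) <class 'int'> (got '9')
```

The trade-off is that the decorator must be applied to each class, whereas a metaclass is inherited by subclasses automatically.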
The 'ValidateType' class for reference:
```
class ValidateType:
def __init__(self, type):
self.name = None # will be set by metaclass
self.attr = None # will be set by metaclass
self.type = type
def __get__(self, inst, cls):
if inst is None:
return self
else:
return inst.__dict__[self.attr]
def __set__(self, inst, value):
if not isinstance(value, self.type):
raise TypeError('%s must be of type(s) %s (got %r)' %
(self.name, self.type, value))
else:
inst.__dict__[self.attr] = value
``` |
What is the standard way to add N seconds to datetime.time in Python? | 100,210 | 188 | 2008-09-19T07:19:36Z | 100,345 | 281 | 2008-09-19T07:54:39Z | [
"python",
"datetime",
"time",
"math"
] | Given a `datetime.time` value in Python, is there a standard way to add an integer number of seconds to it, so that `11:34:59` + 3 = `11:35:02`, for example?
These obvious ideas don't work:
```
>>> datetime.time(11, 34, 59) + 3
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'int'
>>> datetime.time(11, 34, 59) + datetime.timedelta(0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.timedelta'
>>> datetime.time(11, 34, 59) + datetime.time(0, 0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.time'
```
In the end I have written functions like this:
```
def add_secs_to_time(timeval, secs_to_add):
secs = timeval.hour * 3600 + timeval.minute * 60 + timeval.second
secs += secs_to_add
return datetime.time(secs // 3600, (secs % 3600) // 60, secs % 60)
```
I can't help thinking that I'm missing an easier way to do this though.
### Related
* [python time + timedelta equivalent](http://stackoverflow.com/questions/656297/python-time-timedelta-equivalent) | You can use full `datetime` variables with `timedelta`, and by providing a dummy date then using `time` to just get the time value.
For example:
```
import datetime
a = datetime.datetime(100,1,1,11,34,59)
b = a + datetime.timedelta(0,3) # days, seconds, then other fields.
print a.time()
print b.time()
```
results in the two values, three seconds apart:
```
11:34:59
11:35:02
```
You could also opt for the more readable
```
b = a + datetime.timedelta(seconds=3)
```
if you're so inclined.
---
If you're after a function that can do this, you can look into using `addSecs` below:
```
import datetime
def addSecs(tm, secs):
fulldate = datetime.datetime(100, 1, 1, tm.hour, tm.minute, tm.second)
fulldate = fulldate + datetime.timedelta(seconds=secs)
return fulldate.time()
a = datetime.datetime.now().time()
b = addSecs(a, 300)
print a
print b
```
This outputs:
```
09:11:55.775695
09:16:55
``` |
What is the standard way to add N seconds to datetime.time in Python? | 100,210 | 188 | 2008-09-19T07:19:36Z | 100,404 | 9 | 2008-09-19T08:08:10Z | [
"python",
"datetime",
"time",
"math"
] | Given a `datetime.time` value in Python, is there a standard way to add an integer number of seconds to it, so that `11:34:59` + 3 = `11:35:02`, for example?
These obvious ideas don't work:
```
>>> datetime.time(11, 34, 59) + 3
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'int'
>>> datetime.time(11, 34, 59) + datetime.timedelta(0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.timedelta'
>>> datetime.time(11, 34, 59) + datetime.time(0, 0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.time'
```
In the end I have written functions like this:
```
def add_secs_to_time(timeval, secs_to_add):
secs = timeval.hour * 3600 + timeval.minute * 60 + timeval.second
secs += secs_to_add
return datetime.time(secs // 3600, (secs % 3600) // 60, secs % 60)
```
I can't help thinking that I'm missing an easier way to do this though.
### Related
* [python time + timedelta equivalent](http://stackoverflow.com/questions/656297/python-time-timedelta-equivalent) | One little thing, might add clarity to override the default value for seconds
```
>>> b = a + datetime.timedelta(seconds=3000)
>>> b
datetime.datetime(1, 1, 1, 12, 24, 59)
``` |
What is the standard way to add N seconds to datetime.time in Python? | 100,210 | 188 | 2008-09-19T07:19:36Z | 100,776 | 7 | 2008-09-19T09:40:06Z | [
"python",
"datetime",
"time",
"math"
] | Given a `datetime.time` value in Python, is there a standard way to add an integer number of seconds to it, so that `11:34:59` + 3 = `11:35:02`, for example?
These obvious ideas don't work:
```
>>> datetime.time(11, 34, 59) + 3
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'int'
>>> datetime.time(11, 34, 59) + datetime.timedelta(0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.timedelta'
>>> datetime.time(11, 34, 59) + datetime.time(0, 0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.time'
```
In the end I have written functions like this:
```
def add_secs_to_time(timeval, secs_to_add):
secs = timeval.hour * 3600 + timeval.minute * 60 + timeval.second
secs += secs_to_add
return datetime.time(secs // 3600, (secs % 3600) // 60, secs % 60)
```
I can't help thinking that I'm missing an easier way to do this though.
### Related
* [python time + timedelta equivalent](http://stackoverflow.com/questions/656297/python-time-timedelta-equivalent) | Thanks to @[Pax Diablo](#100345), @bvmou and @Arachnid for the suggestion of using full datetimes throughout. If I have to accept datetime.time objects from an external source, then this seems to be an alternative `add_secs_to_time()` function:
```
def add_secs_to_time(timeval, secs_to_add):
dummy_date = datetime.date(1, 1, 1)
full_datetime = datetime.datetime.combine(dummy_date, timeval)
added_datetime = full_datetime + datetime.timedelta(seconds=secs_to_add)
return added_datetime.time()
```
This verbose code can be compressed to this one-liner:
```
(datetime.datetime.combine(datetime.date(1, 1, 1), timeval) + datetime.timedelta(seconds=secs_to_add)).time()
```
but I think I'd want to wrap that up in a function for code clarity anyway. |
What is the standard way to add N seconds to datetime.time in Python? | 100,210 | 188 | 2008-09-19T07:19:36Z | 101,947 | 34 | 2008-09-19T13:47:29Z | [
"python",
"datetime",
"time",
"math"
] | Given a `datetime.time` value in Python, is there a standard way to add an integer number of seconds to it, so that `11:34:59` + 3 = `11:35:02`, for example?
These obvious ideas don't work:
```
>>> datetime.time(11, 34, 59) + 3
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'int'
>>> datetime.time(11, 34, 59) + datetime.timedelta(0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.timedelta'
>>> datetime.time(11, 34, 59) + datetime.time(0, 0, 3)
TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.time'
```
In the end I have written functions like this:
```
def add_secs_to_time(timeval, secs_to_add):
secs = timeval.hour * 3600 + timeval.minute * 60 + timeval.second
secs += secs_to_add
return datetime.time(secs // 3600, (secs % 3600) // 60, secs % 60)
```
I can't help thinking that I'm missing an easier way to do this though.
### Related
* [python time + timedelta equivalent](http://stackoverflow.com/questions/656297/python-time-timedelta-equivalent) | As others here have stated, you can just use full datetime objects throughout:
```
sometime = get_some_time() # the time to which you want to add 3 seconds
later = (datetime.combine(date.today(), sometime) + timedelta(seconds=3)).time()
```
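A self-contained sketch of that approach, showing that overflow past midnight simply rolls into the (discarded) date:

```python
import datetime

def add_to_time(t, delta):
    # Anchor the time on a throwaway date, add, then drop the date again.
    # Overflow past midnight rolls into the date, which we discard.
    anchored = datetime.datetime.combine(datetime.date.today(), t)
    return (anchored + delta).time()

print(add_to_time(datetime.time(11, 34, 59), datetime.timedelta(seconds=3)))  # -> 11:35:02
print(add_to_time(datetime.time(23, 0), datetime.timedelta(hours=2)))         # -> 01:00:00
```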
However, I think it's worth explaining why full datetime objects are required. Consider what would happen if I added 2 hours to 11pm. What's the correct behavior? An exception, because you can't have a time larger than 11:59pm? Should it wrap back around?
Different programmers will expect different things, so whichever result they picked would surprise a lot of people. Worse yet, programmers would write code that worked just fine when they tested it initially, and then have it break later by doing something unexpected. This is very bad, which is why you're not allowed to add timedelta objects to time objects. |
How can I analyze Python code to identify problematic areas? | 100,298 | 91 | 2008-09-19T07:40:22Z | 100,394 | 17 | 2008-09-19T08:05:48Z | [
"python",
"static-analysis",
"cyclomatic-complexity"
] | I have a large source repository split across multiple projects. I would like to produce a report about the health of the source code, identifying problem areas that need to be addressed.
Specifically, I'd like to call out routines with a high cyclomatic complexity, identify repetition, and perhaps run some lint-like static analysis to spot suspicious (and thus likely erroneous) constructs.
How might I go about constructing such a report? | For static analysis there is [pylint](http://www.logilab.org/857) and [pychecker](http://pychecker.sourceforge.net/). Personally I use pylint as it seems to be more comprehensive than pychecker.
For cyclomatic complexity you can try [this perl program](http://www.journyx.com/curt/complexity.html), or this [article](http://www.traceback.org/2008/03/31/measuring-cyclomatic-complexity-of-python-code/) which introduces a python program to do the same |
How can I analyze Python code to identify problematic areas? | 100,298 | 91 | 2008-09-19T07:40:22Z | 105,473 | 29 | 2008-09-19T20:44:22Z | [
"python",
"static-analysis",
"cyclomatic-complexity"
] | I have a large source repository split across multiple projects. I would like to produce a report about the health of the source code, identifying problem areas that need to be addressed.
Specifically, I'd like to call out routines with a high cyclomatic complexity, identify repetition, and perhaps run some lint-like static analysis to spot suspicious (and thus likely erroneous) constructs.
How might I go about constructing such a report? | For measuring cyclomatic complexity, there's a nice tool available at [traceback.org](http://www.traceback.org/2008/03/31/measuring-cyclomatic-complexity-of-python-code/). The page also gives a good overview of how to interpret the results.
+1 for [pylint](http://www.logilab.org/project/pylint). It is great at verifying adherence to coding standards (be it [PEP8](http://www.python.org/dev/peps/pep-0008/) or your own organization's variant), which can in the end help to reduce cyclomatic complexity. |
How can I analyze Python code to identify problematic areas? | 100,298 | 91 | 2008-09-19T07:40:22Z | 2,799,127 | 11 | 2010-05-09T20:50:34Z | [
"python",
"static-analysis",
"cyclomatic-complexity"
] | I have a large source repository split across multiple projects. I would like to produce a report about the health of the source code, identifying problem areas that need to be addressed.
Specifically, I'd like to call out routines with a high cyclomatic complexity, identify repetition, and perhaps run some lint-like static analysis to spot suspicious (and thus likely erroneous) constructs.
How might I go about constructing such a report? | Pycana works like charm when you need to understand a new project!
> [PyCAna](http://sourceforge.net/projects/pycana/) (Python Code Analyzer) is
> a fancy name for a simple code
> analyzer for python that creates a
> class diagram after executing your
> code.
See how it works:
<http://pycana.sourceforge.net/>
output:
 |
How can I analyze Python code to identify problematic areas? | 100,298 | 91 | 2008-09-19T07:40:22Z | 14,793,812 | 24 | 2013-02-10T01:47:31Z | [
"python",
"static-analysis",
"cyclomatic-complexity"
] | I have a large source repository split across multiple projects. I would like to produce a report about the health of the source code, identifying problem areas that need to be addressed.
Specifically, I'd like to call out routines with a high cyclomatic complexity, identify repetition, and perhaps run some lint-like static analysis to spot suspicious (and thus likely erroneous) constructs.
How might I go about constructing such a report? | For cyclomatic complexity you can use `radon`: <https://github.com/rubik/radon>
(Use `pip` to install it: `pip install radon`)
Additionally it also has these features:
* raw metrics (these include SLOC, comment lines, blank lines, &c.)
* Halstead metrics (all of them)
* Maintainability Index (the one used in Visual Studio) |
Python on Windows - how to wait for multiple child processes? | 100,624 | 20 | 2008-09-19T09:02:43Z | 100,886 | 12 | 2008-09-19T10:09:47Z | [
"python",
"windows",
"asynchronous"
] | How to wait for multiple child processes in Python on Windows, without active wait (polling)? Something like this *almost* works for me:
```
proc1 = subprocess.Popen(['python','mytest.py'])
proc2 = subprocess.Popen(['python','mytest.py'])
proc1.wait()
print "1 finished"
proc2.wait()
print "2 finished"
```
The problem is that when `proc2` finishes before `proc1`, the parent process will still wait for `proc1`. On Unix one would use `waitpid(0)` in a loop to get the child processes' return codes as they finish - how to achieve something like this in Python on Windows? | It might seem overkill, but, here it goes:
```
import Queue, thread, subprocess

results = Queue.Queue()

def process_waiter(popen, description, que):
    try:
        popen.wait()
    finally:
        que.put((description, popen.returncode))

process_count = 0

proc1 = subprocess.Popen(['python', 'mytest.py'])
thread.start_new_thread(process_waiter,
    (proc1, "1 finished", results))
process_count += 1

proc2 = subprocess.Popen(['python', 'mytest.py'])
thread.start_new_thread(process_waiter,
    (proc2, "2 finished", results))
process_count += 1

# etc

while process_count > 0:
    description, rc = results.get()
    print "job", description, "ended with rc =", rc
    process_count -= 1
``` |
Why is "if not someobj:" better than "if someobj == None:" in Python? | 100,732 | 82 | 2008-09-19T09:29:19Z | 100,762 | 139 | 2008-09-19T09:38:10Z | [
"python"
] | I've seen several examples of code like this:
```
if not someobj:
#do something
```
But I'm wondering why not doing:
```
if someobj == None:
#do something
```
Is there any difference? Does one have an advantage over the other? | In the first test, Python try to convert the object to a `bool` value if it is not already one. Roughly, **we are asking the object : are you meaningful or not ?** This is done using the following algorithm :
1. If the object has a `__nonzero__` special method (as do numeric built-ins, `int` and `float`), it calls this method. It must either return a `bool` value which is then directly used, or an `int` value that is considered `False` if equal to zero.
2. Otherwise, if the object has a `__len__` special method (as do container built-ins, `list`, `dict`, `set`, `tuple`, ...), it calls this method, considering a container `False` if it is empty (length is zero).
3. Otherwise, the object is considered `True` unless it is `None` in which case, it is considered `False`.
In the second test, the object is compared for equality to `None`. Here, **we are asking the object, "Are you equal to this other value?"** This is done using the following algorithm:
1. If the object has a `__eq__` method, it is called, and the return value is then converted to a `bool` value and used to determine the outcome of the `if`.
2. Otherwise, if the object has a `__cmp__` method, it is called. This function must return an `int` indicating the order of the two objects (`-1` if `self < other`, `0` if `self == other`, `+1` if `self > other`).
3. Otherwise, the objects are compared for identity (i.e. they are references to the same object, as can be tested by the `is` operator).
There is another test possible using the `is` operator. **We would be asking the object, "Are you this particular object?"**
Generally, I would recommend using the first test with non-numerical values, using the test for equality when you want to compare objects of the same nature (two strings, two numbers, ...), and checking for identity only when using sentinel values (`None` meaning not initialized for a member field, for example, or when using the `getattr` or the `__getitem__` methods).
To summarize, we have:
```
>>> class A(object):
... def __repr__(self):
... return 'A()'
... def __nonzero__(self):
... return False
>>> class B(object):
... def __repr__(self):
... return 'B()'
... def __len__(self):
... return 0
>>> class C(object):
... def __repr__(self):
... return 'C()'
... def __cmp__(self, other):
... return 0
>>> class D(object):
... def __repr__(self):
... return 'D()'
... def __eq__(self, other):
... return True
>>> for obj in ['', (), [], {}, 0, 0., A(), B(), C(), D(), None]:
... print '%4s: bool(obj) -> %5s, obj == None -> %5s, obj is None -> %5s' % \
... (repr(obj), bool(obj), obj == None, obj is None)
'': bool(obj) -> False, obj == None -> False, obj is None -> False
(): bool(obj) -> False, obj == None -> False, obj is None -> False
[]: bool(obj) -> False, obj == None -> False, obj is None -> False
{}: bool(obj) -> False, obj == None -> False, obj is None -> False
0: bool(obj) -> False, obj == None -> False, obj is None -> False
0.0: bool(obj) -> False, obj == None -> False, obj is None -> False
A(): bool(obj) -> False, obj == None -> False, obj is None -> False
B(): bool(obj) -> False, obj == None -> False, obj is None -> False
C(): bool(obj) -> True, obj == None -> True, obj is None -> False
D(): bool(obj) -> True, obj == None -> True, obj is None -> False
None: bool(obj) -> False, obj == None -> True, obj is None -> True
``` |
Why is "if not someobj:" better than "if someobj == None:" in Python? | 100,732 | 82 | 2008-09-19T09:29:19Z | 100,764 | 30 | 2008-09-19T09:38:26Z | [
"python"
] | I've seen several examples of code like this:
```
if not someobj:
#do something
```
But I'm wondering why not doing:
```
if someobj == None:
#do something
```
Is there any difference? Does one have an advantage over the other? | Because `None` is not the only thing that is considered false.
```
if not False:
print "False is false."
if not 0:
print "0 is false."
if not []:
print "An empty list is false."
if not ():
print "An empty tuple is false."
if not {}:
print "An empty dict is false."
if not "":
print "An empty string is false."
```
`False`, `0`, `()`, `[]`, `{}` and `""` are all different from `None`, so your two code snippets are *not* equivalent.
Moreover, consider the following:
```
>>> False == 0
True
>>> False == ()
False
```
`if object:` is *not* an equality check. `0`, `()`, `[]`, `None`, `{}`, etc. *are* all different from each other, but they all *evaluate* to False.
This is the "magic" behind short circuiting expressions like:
```
foo = bar and spam or eggs
```
which is shorthand for:
```
if bar:
foo = spam
else:
foo = eggs
```
although you really should write:
```
foo = spam if bar else eggs
``` |
Why is "if not someobj:" better than "if someobj == None:" in Python? | 100,732 | 82 | 2008-09-19T09:29:19Z | 100,903 | 39 | 2008-09-19T10:12:34Z | [
"python"
] | I've seen several examples of code like this:
```
if not someobj:
#do something
```
But I'm wondering why not doing:
```
if someobj == None:
#do something
```
Is there any difference? Does one have an advantage over the other? | These are actually both poor practices. Once upon a time, it was considered OK to casually treat None and False as similar. However, since Python 2.2 this is not the best policy.
First, when you do an `if x` or `if not x` kind of test, Python has to implicitly convert `x` to boolean. The rules for the `bool` function describe a raft of things which are False; everything else is True. If the value of x wasn't properly boolean to begin with, this implicit conversion isn't really the clearest way to say things.
Before Python 2.2, there was no bool function, so it was even less clear.
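A short sketch of how the two tests come apart on falsy-but-meaningful values (the `wait` function is a hypothetical example, not from the question):

```python
# Values that are falsy without being None: `if x` and `x is None` disagree.
for value in (None, 0, 0.0, '', [], {}):
    print('%6r  truthy=%-5s  is None=%s' % (value, bool(value), value is None))

# Typical bug: an explicit timeout of 0 is falsy, so `if timeout:` would
# wrongly treat it as "not given"; the identity test does not.
def wait(timeout=None):
    if timeout is not None:
        return 'wait %s seconds' % timeout
    return 'wait forever'

print(wait(0))  # -> wait 0 seconds
```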
Second, you shouldn't really test with `== None`. You should use `is None` and `is not None`.
See PEP 8, [Style Guide for Python Code](http://www.python.org/dev/peps/pep-0008/).
> ```
> - Comparisons to singletons like None should always be done with
> 'is' or 'is not', never the equality operators.
>
> Also, beware of writing "if x" when you really mean "if x is not None"
> -- e.g. when testing whether a variable or argument that defaults to
> None was set to some other value. The other value might have a type
> (such as a container) that could be false in a boolean context!
> ```
How many singletons are there? Five: `None`, `True`, `False`, `NotImplemented` and `Ellipsis`. Since you're really unlikely to use `NotImplemented` or `Ellipsis`, and you would never say `if x is True` (because simply `if x` is a lot clearer), you'll only ever test `None`. |
Building Python C extension modules for Windows | 101,061 | 10 | 2008-09-19T10:53:12Z | 101,087 | 13 | 2008-09-19T10:57:36Z | [
"python",
"windows"
] | I have a C extension module and it would be nice to distribute built binaries. Setuptools makes it easy to build extensions modules on OS X and GNU/Linux, since those OSs come with GCC, but I don't know how to do it in Windows.
Would I need to buy a copy of Visual Studio, or does Visual Studio Express work? Can I just use Cygwin or MinGW? | You can use both MinGW and VC++ Express (free, no need to buy it).
See:
1. <http://eli.thegreenplace.net/2008/06/28/compiling-python-extensions-with-distutils-and-mingw/>
2. <http://eli.thegreenplace.net/2008/06/27/creating-python-extension-modules-in-c/> |
How do I read text from the (windows) clipboard from python? | 101,128 | 44 | 2008-09-19T11:09:27Z | 101,167 | 47 | 2008-09-19T11:20:29Z | [
"python",
"windows"
] | How do I read text from the (windows) clipboard from python? | You can use the module called [win32clipboard](http://docs.activestate.com/activepython/2.5/pywin32/win32clipboard.html), which is part of [pywin32](http://sourceforge.net/projects/pywin32/).
Here is an example that first sets the clipboard data then gets it:
```
import win32clipboard
# set clipboard data
win32clipboard.OpenClipboard()
win32clipboard.EmptyClipboard()
win32clipboard.SetClipboardText('testing 123')
win32clipboard.CloseClipboard()
# get clipboard data
win32clipboard.OpenClipboard()
data = win32clipboard.GetClipboardData()
win32clipboard.CloseClipboard()
print data
```
An important reminder from the documentation:
> When the window has finished examining or changing the clipboard,
> close the clipboard by calling CloseClipboard. This enables other
> windows to access the clipboard. Do not place an object on the
> clipboard after calling CloseClipboard. |
How do I read text from the (windows) clipboard from python? | 101,128 | 44 | 2008-09-19T11:09:27Z | 8,039,424 | 18 | 2011-11-07T16:27:31Z | [
"python",
"windows"
] | How do I read text from the (windows) clipboard from python? | I've seen many suggestions to use the win32 module, but Tkinter provides the shortest and easiest method I've seen, as in this post: [How do I copy a string to the clipboard on Windows using Python?](http://stackoverflow.com/questions/579687/how-do-i-copy-a-string-to-the-clipboard-on-windows-using-python/4203897#4203897)
Plus, Tkinter is in the Python standard library.
How do I read text from the (windows) clipboard from python? | 101,128 | 44 | 2008-09-19T11:09:27Z | 11,096,779 | 10 | 2012-06-19T08:00:02Z | [
"python",
"windows"
] | How do I read text from the (windows) clipboard from python? | The most upvoted answer above is weird in a way that it simply clears the Clipboard and then gets the content (which is then empty). One could clear the clipboard to be sure that some clipboard content type like "formated text" does not "cover" your plain text content you want to save in the clipboard.
The following piece of code replaces all newlines in the clipboard by spaces, then removes all double spaces and finally saves the content back to the clipboard:
```
import win32clipboard
win32clipboard.OpenClipboard()
c = win32clipboard.GetClipboardData()
win32clipboard.EmptyClipboard()
c = c.replace('\n', ' ')
c = c.replace('\r', ' ')
while c.find('  ') != -1:
    c = c.replace('  ', ' ')
win32clipboard.SetClipboardText(c)
win32clipboard.CloseClipboard()
``` |
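The string transformation above (newlines to spaces, then collapsing runs of spaces) can be sketched without `win32clipboard` at all, using plain string operations:

```python
# Pure-string version of the transformation: newlines and carriage
# returns become spaces, then runs of spaces collapse to single spaces.
c = 'first line\r\nsecond   line'
c = c.replace('\n', ' ')
c = c.replace('\r', ' ')
while c.find('  ') != -1:
    c = c.replace('  ', ' ')
print(c)  # -> first line second line
```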
How do I read text from the (windows) clipboard from python? | 101,128 | 44 | 2008-09-19T11:09:27Z | 23,844,754 | 10 | 2014-05-24T11:58:36Z | [
"python",
"windows"
] | How do I read text from the (windows) clipboard from python? | You can easily get this done through the built-in module [Tkinter](https://docs.python.org/2/library/tkinter.html), which is basically a GUI library. This code creates a blank widget to get the clipboard content from the OS.
```
#from tkinter import Tk # Python 3
from Tkinter import Tk
Tk().clipboard_get()
``` |
Is there a zip-like method in .Net? | 101,174 | 12 | 2008-09-19T11:21:57Z | 101,222 | 25 | 2008-09-19T11:34:43Z | [
".net",
"python",
"list",
"iteration"
] | In Python there is a really neat function called `zip` which can be used to iterate through two lists at the same time:
```
list1 = [1, 2, 3]
list2 = ["a", "b", "c"]
for v1, v2 in zip(list1, list2):
    print v1, v2
```
The above code should produce the following:
```
1 a
2 b
3 c
```
I wonder if there is a method like it available in .Net? I'm thinking about writing it myself, but there is no point if it's already available. | Update: It is built into .NET 4 as the [System.Linq.Enumerable.Zip Method](https://msdn.microsoft.com/en-us/library/vstudio/dd267698(v=vs.110).aspx)
Here is a C# 3 version:
```
IEnumerable<TResult> Zip<TResult,T1,T2>
(IEnumerable<T1> a,
IEnumerable<T2> b,
Func<T1,T2,TResult> combine)
{
using (var f = a.GetEnumerator())
using (var s = b.GetEnumerator())
{
while (f.MoveNext() && s.MoveNext())
yield return combine(f.Current, s.Current);
}
}
```
Dropped the C# 2 version as it was showing its age. |
Is there a zip-like method in .Net? | 101,174 | 12 | 2008-09-19T11:21:57Z | 101,227 | 8 | 2008-09-19T11:35:28Z | [
".net",
"python",
"list",
"iteration"
] | In Python there is a really neat function called `zip` which can be used to iterate through two lists at the same time:
```
list1 = [1, 2, 3]
list2 = ["a", "b", "c"]
for v1, v2 in zip(list1, list2):
    print v1, v2
```
The above code should produce the following:
```
1 a
2 b
3 c
```
I wonder if there is a method like it available in .Net? I'm thinking about writing it myself, but there is no point if it's already available. | As far as I know there is not. I wrote one for myself (as well as a few other useful extensions) and put them in a project called [NExtension](http://www.codeplex.com/nextension) on Codeplex.
Apparently the Parallel extensions for .NET have a Zip function.
Here's a simplified version from NExtension (but please check it out for more useful extension methods):
```
public static IEnumerable<TResult> Zip<T1, T2, TResult>(this IEnumerable<T1> source1, IEnumerable<T2> source2, Func<T1, T2, TResult> combine)
{
using (IEnumerator<T1> data1 = source1.GetEnumerator())
using (IEnumerator<T2> data2 = source2.GetEnumerator())
while (data1.MoveNext() && data2.MoveNext())
{
yield return combine(data1.Current, data2.Current);
}
}
```
Usage:
```
int[] list1 = new int[] {1, 2, 3};
string[] list2 = new string[] {"a", "b", "c"};
foreach (var result in list1.Zip(list2, (i, s) => i.ToString() + " " + s))
Console.WriteLine(result);
``` |
How do you access an authenticated Google App Engine service from a (non-web) python client? | 101,742 | 49 | 2008-09-19T13:19:09Z | 102,509 | 38 | 2008-09-19T14:55:24Z | [
"python",
"web-services",
"google-app-engine",
"authentication"
] | I have a Google App Engine app - <http://mylovelyapp.appspot.com/>
It has a page - mylovelypage
For the moment, the page just does `self.response.out.write('OK')`
If I run the following Python at my computer:
```
import urllib2
f = urllib2.urlopen("http://mylovelyapp.appspot.com/mylovelypage")
s = f.read()
print s
f.close()
```
it prints "OK"
the problem is if I add `login:required` to this page in the app's yaml
then this prints out the HTML of the Google Accounts login page
I've tried "normal" authentication approaches. e.g.
```
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(None,
uri='http://mylovelyapp.appspot.com/mylovelypage',
user='[email protected]',
passwd='billybobspasswd')
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
```
But it makes no difference - I still get the login page's HTML back.
I've tried [Google's ClientLogin auth API](http://code.google.com/apis/accounts/docs/AuthForInstalledApps.html), but I can't get it to work.
```
h = httplib2.Http()
auth_uri = 'https://www.google.com/accounts/ClientLogin'
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
myrequest = "Email=%s&Passwd=%s&service=ah&source=DALELANE-0.0" % ("[email protected]", "billybobspassword")
response, content = h.request(auth_uri, 'POST', body=myrequest, headers=headers)
if response['status'] == '200':
authtok = re.search('Auth=(\S*)', content).group(1)
headers = {}
headers['Authorization'] = 'GoogleLogin auth=%s' % authtok.strip()
headers['Content-Length'] = '0'
response, content = h.request("http://mylovelyapp.appspot.com/mylovelypage",
'POST',
body="",
headers=headers)
while response['status'] == "302":
response, content = h.request(response['location'], 'POST', body="", headers=headers)
print content
```
I do seem to be able to get some token correctly, but attempts to use it in the header when I call 'mylovelypage' still just return me the login page's HTML. :-(
Can anyone help, please?
Could I use the [GData client library](http://code.google.com/p/gdata-python-client/) to do this sort of thing? From
what I've read, I think it should be able to access App Engine apps,
but I haven't been any more successful at getting the authentication working for App Engine stuff there either
Any pointers to samples, articles, or even just keywords I should be
searching for to get me started, would be very much appreciated.
Thanks! | appcfg.py, the tool that uploads data to App Engine, has to do exactly this to authenticate itself with the App Engine server. The relevant functionality is abstracted into appengine\_rpc.py. In a nutshell, the solution is:
1. Use the [Google ClientLogin API](http://code.google.com/apis/accounts/docs/AuthForInstalledApps.html) to obtain an authentication token. appengine\_rpc.py does this in [\_GetAuthToken](http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/appengine%5Frpc.py#180)
2. Send the auth token to a special URL on your App Engine app. That page then returns a cookie and a 302 redirect. Ignore the redirect and store the cookie. appcfg.py does this in [\_GetAuthCookie](http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/appengine%5Frpc.py#228)
3. Use the returned cookie in all future requests.
You may also want to look at [\_Authenticate](http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/appengine%5Frpc.py#253), to see how appcfg handles the various return codes from ClientLogin, and [\_GetOpener](http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/appengine%5Frpc.py#397), to see how appcfg creates a urllib2 OpenerDirector that doesn't follow HTTP redirects. Or you could, in fact, just use the AbstractRpcServer and HttpRpcServer classes wholesale, since they do pretty much everything you need. |
How do you access an authenticated Google App Engine service from a (non-web) python client? | 101,742 | 49 | 2008-09-19T13:19:09Z | 103,410 | 34 | 2008-09-19T16:22:06Z | [
"python",
"web-services",
"google-app-engine",
"authentication"
] | I have a Google App Engine app - <http://mylovelyapp.appspot.com/>
It has a page - mylovelypage
For the moment, the page just does `self.response.out.write('OK')`
If I run the following Python at my computer:
```
import urllib2
f = urllib2.urlopen("http://mylovelyapp.appspot.com/mylovelypage")
s = f.read()
print s
f.close()
```
it prints "OK"
the problem is if I add `login:required` to this page in the app's yaml
then this prints out the HTML of the Google Accounts login page
I've tried "normal" authentication approaches. e.g.
```
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(None,
uri='http://mylovelyapp.appspot.com/mylovelypage',
user='[email protected]',
passwd='billybobspasswd')
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
```
But it makes no difference - I still get the login page's HTML back.
I've tried [Google's ClientLogin auth API](http://code.google.com/apis/accounts/docs/AuthForInstalledApps.html), but I can't get it to work.
```
h = httplib2.Http()
auth_uri = 'https://www.google.com/accounts/ClientLogin'
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
myrequest = "Email=%s&Passwd=%s&service=ah&source=DALELANE-0.0" % ("[email protected]", "billybobspassword")
response, content = h.request(auth_uri, 'POST', body=myrequest, headers=headers)
if response['status'] == '200':
authtok = re.search('Auth=(\S*)', content).group(1)
headers = {}
headers['Authorization'] = 'GoogleLogin auth=%s' % authtok.strip()
headers['Content-Length'] = '0'
response, content = h.request("http://mylovelyapp.appspot.com/mylovelypage",
'POST',
body="",
headers=headers)
while response['status'] == "302":
response, content = h.request(response['location'], 'POST', body="", headers=headers)
print content
```
I do seem to be able to get some token correctly, but attempts to use it in the header when I call 'mylovelypage' still just return me the login page's HTML. :-(
Can anyone help, please?
Could I use the [GData client library](http://code.google.com/p/gdata-python-client/) to do this sort of thing? From
what I've read, I think it should be able to access App Engine apps,
but I haven't been any more successful at getting the authentication working for App Engine stuff there either
Any pointers to samples, articles, or even just keywords I should be
searching for to get me started, would be very much appreciated.
Thanks! | thanks to Arachnid for the answer - it worked as suggested
here is a simplified copy of the code, in case it is helpful to the next person to try!
```
import os
import urllib
import urllib2
import cookielib
users_email_address = "[email protected]"
users_password = "billybobspassword"
target_authenticated_google_app_engine_uri = 'http://mylovelyapp.appspot.com/mylovelypage'
my_app_name = "yay-1.0"
# we use a cookie to authenticate with Google App Engine
# by registering a cookie handler here, this will automatically store the
# cookie returned when we use urllib2 to open http://currentcost.appspot.com/_ah/login
cookiejar = cookielib.LWPCookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
urllib2.install_opener(opener)
#
# get an AuthToken from Google accounts
#
auth_uri = 'https://www.google.com/accounts/ClientLogin'
authreq_data = urllib.urlencode({ "Email": users_email_address,
"Passwd": users_password,
"service": "ah",
"source": my_app_name,
"accountType": "HOSTED_OR_GOOGLE" })
auth_req = urllib2.Request(auth_uri, data=authreq_data)
auth_resp = urllib2.urlopen(auth_req)
auth_resp_body = auth_resp.read()
# auth response includes several fields - we're interested in
# the bit after Auth=
auth_resp_dict = dict(x.split("=")
for x in auth_resp_body.split("\n") if x)
authtoken = auth_resp_dict["Auth"]
#
# get a cookie
#
# the call to request a cookie will also automatically redirect us to the page
# that we want to go to
# the cookie jar will automatically provide the cookie when we reach the
# redirected location
# this is where I actually want to go to
serv_uri = target_authenticated_google_app_engine_uri
serv_args = {}
serv_args['continue'] = serv_uri
serv_args['auth'] = authtoken
full_serv_uri = "http://mylovelyapp.appspot.com/_ah/login?%s" % (urllib.urlencode(serv_args))
serv_req = urllib2.Request(full_serv_uri)
serv_resp = urllib2.urlopen(serv_req)
serv_resp_body = serv_resp.read()
# serv_resp_body should contain the contents of the
# target_authenticated_google_app_engine_uri page - as we will have been
# redirected to that page automatically
#
# to prove this, I'm just gonna print it out
print serv_resp_body
``` |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 101,787 | 20 | 2008-09-19T13:26:53Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | Not at the moment, and you would be lucky to get Jython to work soon. If you're planning to start your development now, you would be better off just sticking to Java.
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 256,069 | 40 | 2008-11-01T20:29:44Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | As a [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29) lover and Android programmer, I am sad to say this is not really a good way to go. There are two problems.
One problem is that there is a lot more than just a programming language to the Android development tools. A lot of the Android graphics involve XML files to configure the display, similar to HTML. The built-in Java objects are really integrated with this XML layout, and it's a lot easier than writing your own code to go from logic to bitmap.
The other problem is that the G1 (and probably other Android devices for the near future) are really not that fast. 200 MHz processors, and RAM is very limited. Even in Java you have to do a decent amount of rewriting-to-avoid-more-object-creation if you want to make your app perfectly smooth. Python is going to be too slow for a while still on mobile devices. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 383,473 | 37 | 2008-12-20T16:56:57Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | I just posted some [directions for cross compiling Python 2.4.5 for Android](http://www.damonkohler.com/2008/12/python-on-android.html). It takes some patching, and not all modules are supported, but the basics are there. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 973,765 | 143 | 2009-06-10T05:13:13Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | [YES!](http://google-opensource.blogspot.com/2009/06/introducing-android-scripting.html)
An example [via Matt Cutts](http://www.mattcutts.com/blog/android-barcode-scanner/) -- "here's a barcode scanner written in six lines of Python code":
```
import android
droid = android.Android()
code = droid.scanBarcode()
isbn = int(code['result']['SCAN_RESULT'])
url = "http://books.google.com?q=%d" % isbn
droid.startActivity('android.intent.action.VIEW', url)
``` |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 973,786 | 248 | 2009-06-10T05:24:29Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | There is also the new [Android Scripting Environment](http://www.talkandroid.com/1225-android-scripting-environment/) (ASE) project. It looks awesome, and it has some integration with native Android components. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 4,381,935 | 16 | 2010-12-07T21:46:18Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | Check out the blog post <http://www.saffirecorp.com/?p=113> that explains how to install and run [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29) and a simple webserver written in Python on Android. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 4,828,127 | 54 | 2011-01-28T12:18:47Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | *"The [Pygame Subset for Android](http://www.renpy.org/pygame/) is a port of a subset of Pygame functionality to the Android platform. The goal of the project is to allow the creation of Android-specific games, and to ease the porting of games from PC-like platforms to Android."*
The examples include a complete game packaged in an APK, which is pretty interesting. |
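The usual porting pattern with this subset is to import the `android` helper module only when it exists, so the same script runs on the desktop and on the device. A minimal sketch (the `android.init`/`android.map_key` calls are taken from the project's documented pattern; treat the details as an assumption):

```python
import pygame

try:
    import android          # only present when running under the Android subset
except ImportError:
    android = None

def main():
    pygame.init()
    if android:
        android.init()
        # Map the hardware Back button to ESC so the loop can catch it.
        android.map_key(android.KEYCODE_BACK, pygame.K_ESCAPE)
    screen = pygame.display.set_mode((480, 800))
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT or (
                    event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE):
                running = False
        screen.fill((0, 0, 0))
        pygame.display.flip()

if __name__ == '__main__':
    main()
```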
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 5,475,949 | 7 | 2011-03-29T16:42:06Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | There's also python-on-a-chip possibly running mosync: [google group](http://groups.google.com/group/python-on-a-chip/browse_thread/thread/df1c837bae2200f2/02992219b9c0003e?lnk=gst&q=mosync#02992219b9c0003e) |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 6,136,305 | 54 | 2011-05-26T09:21:31Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | There's also [SL4A](https://github.com/damonkohler/sl4a) written by a Google employee. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 7,741,114 | 50 | 2011-10-12T13:49:09Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | I've posted instructions and a patch for cross compiling Python 2.7.2 for Android, you can get it at my blog here: <http://mdqinc.com/blog/2011/09/cross-compiling-python-for-android/>
EDIT: I've open sourced [Ignifuga](http://ignifuga.org), my 2D Game Engine, it's Python/SDL based and it cross compiles for Android. Even if you don't use it for games, you might get useful ideas from the code and the builder utility (named Schafer, after Tim...you know who). |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 8,189,603 | 539 | 2011-11-18T21:49:45Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | One way is to use [Kivy](http://kivy.org/):
> Open source Python library for rapid development of applications
> that make use of innovative user interfaces, such as multi-touch apps.
> Kivy runs on Linux, Windows, OS X, Android and iOS. You can run the same [python] code on all supported platforms.
[Kivy Showcase app](https://play.google.com/store/apps/details?id=org.kivy.showcase) |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 8,759,409 | 18 | 2012-01-06T14:34:25Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | Using SL4A (which has already been mentioned by itself in other answers) you can [run](http://groups.google.com/group/web2py/browse_thread/thread/f227e93fe802a902) a full-blown [web2py](http://web2py.com/) instance (other [python web frameworks](http://wiki.python.org/moin/WebFrameworks) are likely candidates as well). SL4A doesn't allow you to do native UI components (buttons, scroll bars, and the like), but it does support [WebViews](http://code.google.com/p/android-scripting/wiki/UsingWebView). A WebView is basically nothing more than a striped down web browser pointed at a fixed address. I believe the native Gmail app uses a WebView instead of going the regular widget route.
This route would have some interesting features:
* In the case of most python web frameworks, you could actually develop and test without using an android device or android emulator.
* Whatever Python code you end up writing for the phone could also be put on a public webserver with very little (if any) modification.
* You could take advantage of all of the crazy web stuff out there: jQuery, HTML5, CSS3, etc.
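As a sketch of this approach (only runnable on a device; `webViewShow` is part of SL4A's UI facade, and the HTML path here is a hypothetical example), a Python script can hand its UI off to a bundled HTML file:

```python
import android

droid = android.Android()
# Show an HTML5 UI in a WebView; the page can talk back to this
# script through SL4A's JavaScript API.
droid.webViewShow('file:///sdcard/sl4a/scripts/myapp/index.html')
```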
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 8,784,038 | 12 | 2012-01-09T04:46:53Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | From the [Python for android](https://github.com/kivy/python-for-android) site:
> Python for android is a project to create your own Python distribution including the modules you want, and create an apk including python, libs, and your application. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 9,773,282 | 34 | 2012-03-19T15:45:27Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | ### SL4A
[Scripting Layer for Android](https://github.com/damonkohler/sl4a) does what you want. You can easily install it directly onto your device from their site, and do not need root.
It supports a range of languages; Python is the most mature. By default, it uses Python 2.6, but there is a [3.2 port](https://code.google.com/p/python-for-android/wiki/Python3) you can use instead. I have used that port for all kinds of things on a Galaxy S2 and it worked fine.
### API
SL4A provides a port of their `android` library for each supported language. The library provides an interface to the underlying Android API through a single `Android` object.
```
import android
droid = android.Android()
# example using the text to speech facade
droid.ttsSpeak('hello world')
```
Each language has pretty much the same API. You can even use the JavaScript API inside webviews.
```
var droid = new Android();
droid.ttsSpeak('hello from js');
```
### User Interfaces
For user interfaces, you have three options:
* You can easily use the generic, native dialogues and menus through the
API. This is good for confirmation dialogues and other basic user inputs.
* You can also open a webview from inside a Python script, then use HTML5
for the user interface. When you use webviews from Python, you can pass
messages back and forth, between the webview and the Python process that
spawned it. The UI will not be native, but it is still a good option to
have.
* There is *some* support for native Android user interfaces, but I am not
sure how well it works; I just haven't ever used it.
You can mix options, so you can have a webview for the main interface, and still use native dialogues.
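For the first option, a confirmation dialogue through the generic API looks roughly like this (a sketch based on SL4A's dialog facade; only runnable on a device, and the button texts are just examples):

```python
import android

droid = android.Android()
droid.dialogCreateAlert('Delete file?', 'This cannot be undone.')
droid.dialogSetPositiveButtonText('Delete')
droid.dialogSetNegativeButtonText('Cancel')
droid.dialogShow()
# Block until the user answers, then inspect which button was pressed.
response = droid.dialogGetResponse().result
if response.get('which') == 'positive':
    droid.ttsSpeak('deleted')
```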
### QPython
There is a third party project named [QPython](http://qpython.com). It builds on SL4A, and throws in some other useful stuff.
QPython gives you a nicer UI to manage your installation, and includes a small touchscreen code editor, a Python shell, and a PIP shell for package management. They also have a Python 3 port. Both versions are available from the Play Store, free of charge. QPython also bundles libraries from a bunch of Python-on-Android projects, including Kivy, so it is not just SL4A.
Note that QPython still develops its fork of SL4A (though not very actively, to be honest). The main SL4A project itself is pretty much dead.
---
* SL4A Project (now on GitHub): <https://github.com/damonkohler/sl4a>
* SL4A Python 3 Port: <https://code.google.com/p/python-for-android/wiki/Python3>
* QPython Project: <http://qpython.com> |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 10,519,481 | 12 | 2012-05-09T15:44:49Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | Yet another attempt: <https://code.google.com/p/android-python27/>
This one embeds the Python interpreter directly in your app's APK. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 15,335,213 | 8 | 2013-03-11T09:36:58Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | You can run your Python code using [sl4a](http://code.google.com/p/android-scripting/). sl4a supports Python, [Perl](http://en.wikipedia.org/wiki/Perl), [JRuby](http://en.wikipedia.org/wiki/JRuby), [Lua](http://en.wikipedia.org/wiki/Lua_%28programming_language%29), BeanShell, JavaScript, [Tcl](http://en.wikipedia.org/wiki/Tcl), and shell script.
You can learn from the sl4a [Python Examples](http://code.google.com/p/android-scripting/wiki/Tutorials). |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 17,073,989 | 14 | 2013-06-12T19:46:35Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | I use the QPython application. It has an editor, a console, and you can run your Python programs with it. The application is free, and the link is <http://qpython.com/>. |
Is there a way to run Python on Android? | 101,754 | 1,360 | 2008-09-19T13:21:12Z | 27,913,916 | 13 | 2015-01-13T02:08:57Z | [
"python",
"android",
"ase",
"android-scripting"
] | We are working on an [S60](http://en.wikipedia.org/wiki/S60_%28software_platform%29) version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since [Jython](http://en.wikipedia.org/wiki/Jython) exists, is there a way to let the snake and the robot work together? | # Kivy
---
I want to post this as an extension to what **@JohnMudd** has already answered (*but please bear with me, as English isn't my first language*)
It has been years since then, and **Kivy** has *evolved* to **v1.9-dev**. The biggest selling point of **Kivy**, in my opinion, is its cross-platform compatibility: you can code and test under your local environment **(Windows/\*nix etc.)**, and you can also build, debug and package your app to run on your **Android/iOS/Mac/Windows** devices.
With **Kivy**'s own **[KV](http://kivy.org/docs/guide/lang.html#kv-language)** language, one can easily code and build the GUI (it's a bit like Android's XML layouts, but rather than TextView etc., **KV** has its own **ui.widgets** serving a similar purpose), which is in my opinion quite easy to adopt.
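To give a flavour of how small a Kivy app can be, here is a minimal sketch (it needs the Kivy package installed and a display to run; the app and widget names are just examples):

```python
from kivy.app import App
from kivy.uix.button import Button

class HelloApp(App):
    def build(self):
        # The widget returned here becomes the root of the UI,
        # on desktop and on Android alike.
        return Button(text='Hello, Kivy')

if __name__ == '__main__':
    HelloApp().run()
```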
Currently **[Buildozer](https://github.com/kivy/buildozer)** and **[python-for-android](http://python-for-android.readthedocs.org/en/latest/prerequisites/)** are the most recommended tools to build/package your apps. Having tried them both, I can firmly say that they make building Android apps with Python a breeze. Users who feel comfortable in their console/terminal/command prompt should have no problems using them, and their guides are well documented, too.
Furthermore, **iOS** is another big selling point of Kivy: you can use the same code base, with only small changes required, to test-run on your **iOS** device via the [kivy-ios](http://kivy.org/docs/guide/packaging-ios.html#create-a-package-for-ios) Homebrew tools, although **Xcode** is required for the build before running on a device (AFAIK the iOS Simulator in Xcode currently doesn't work for the x86-architecture build). There are also some dependency issues which must be manually compiled and fiddled with in Xcode to get a successful build, but they wouldn't be too difficult to resolve, and people in the [Kivy Google Group](https://groups.google.com/forum/#!forum/kivy-users) are really helpful too.
With all being said, users with good Python knowledge should have no problem picking up the basics in weeks (if not days) to build some simple apps.
Also worth mentioning is that you can **bundle (build recipes)** your Python modules with the build so users can really make use of many existing libraries Python bring us, like [Requests](http://docs.python-requests.org/en/latest/) & [PIL](http://www.pythonware.com/products/pil/) etc. through [Kivy's extension support](http://kivy.org/docs/api-kivy.ext.html#module-kivy.ext).
> Sometimes your application requires functionality that is beyond the
> scope of what Kivy can deliver. In those cases it is necessary to
> resort to external software libraries. Given the richness of the
> Python ecosystem, there is already a great number of software
> libraries that you can simply import and use right away.
Last but not least, if you are going to use **Kivy** for more serious/commercial projects, you may find the existing modules not satisfactory for what you expect. There are some workable solutions too: with the "work in progress" **[pyjnius](https://github.com/kivy/pyjnius)** for Android and **[pyobjus](https://github.com/kivy/pyobjus)**, users can now access Java/Objective-C classes through those modules to control some of the native APIs.
My experience with Kivy is that it will find its best fit with seasoned Python programmers and serious programmers who want rapid development or a simple code base to maintain. It runs well on multiple platforms, albeit not really at the level of **native** look and feel.
**I do hope more Python/app programmers find my little information useful and start taking a look at Kivy. It can only get better (with more support and more libraries/modules being ported) if there is great interest from the community.**
*P.S. I have no relationship with Kivy whatsoever; I'm merely a programmer who really likes the idea of bringing Python coding fun to mobile/cross-platform development.* |
Sorting a dict on __iter__ | 102,394 | 4 | 2008-09-19T14:43:24Z | 102,443 | 8 | 2008-09-19T14:49:06Z | [
"python",
"optimization",
"refactoring"
] | I am trying to sort a dict based on its key and return an iterator to the values from within an overridden iter method in a class. Is there a nicer and more efficient way of doing this than creating a new list, inserting into the list as I sort through the keys? | How about something like this:
```
def itersorted(d):
for key in sorted(d):
yield d[key]
``` |
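For example, a quick check of the generator above (restated here so the snippet is self-contained):

```python
def itersorted(d):
    # yield values in key-sorted order, one at a time
    for key in sorted(d):
        yield d[key]

d = {'b': 2, 'c': 3, 'a': 1}
assert list(itersorted(d)) == [1, 2, 3]
```

Because it yields, no intermediate list of values is built; only the sorted key list is materialized.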
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 102,612 | 34 | 2008-09-19T15:07:13Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | See the "Motivation" section in [PEP 255](http://www.python.org/dev/peps/pep-0255/).
A non-obvious use of generators is creating interruptible functions, which lets you do things like update the UI or run several jobs "simultaneously" (interleaved, actually) while not using threads. |
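A minimal sketch of that interleaving idea (the `job` and `round_robin` names are invented for illustration):

```python
def job(name, steps):
    # a "job" that pauses after every step by yielding
    for i in range(steps):
        yield '%s-%d' % (name, i)

def round_robin(*generators):
    # run several jobs "simultaneously" by taking one step from each in turn
    pending = list(generators)
    while pending:
        for g in list(pending):
            try:
                yield next(g)
            except StopIteration:
                pending.remove(g)

out = list(round_robin(job('a', 2), job('b', 3)))
assert out == ['a-0', 'b-0', 'a-1', 'b-1', 'b-2']
```

Each `yield` is a point where the job voluntarily hands control back, which is exactly what threads would otherwise be used for.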
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 102,632 | 183 | 2008-09-19T15:09:25Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | Generators give you lazy evaluation. You use them by iterating over them, either explicitly with 'for' or implicitly by passing them to any function or construct that iterates. You can think of generators as returning multiple items, as if they return a list, but instead of returning them all at once they return them one-by-one, and the generator function is paused until the next item is requested.
Generators are good for calculating large sets of results (in particular calculations involving loops themselves) where you don't know if you are going to need all results, or where you don't want to allocate the memory for all results at the same time. Or for situations where the generator uses *another* generator, or consumes some other resource, and it's more convenient if that happened as late as possible.
Another use for generators (that is really the same) is to replace callbacks with iteration. In some situations you want a function to do a lot of work and occasionally report back to the caller. Traditionally you'd use a callback function for this. You pass this callback to the work-function and it would periodically call this callback. The generator approach is that the work-function (now a generator) knows nothing about the callback, and merely yields whenever it wants to report something. The caller, instead of writing a separate callback and passing that to the work-function, does all the reporting work in a little 'for' loop around the generator.
For example, say you wrote a 'filesystem search' program. You could perform the search in its entirety, collect the results and then display them one at a time. All of the results would have to be collected before you showed the first, and all of the results would be in memory at the same time. Or you could display the results while you find them, which would be more memory efficient and much friendlier towards the user. The latter could be done by passing the result-printing function to the filesystem-search function, or it could be done by just making the search function a generator and iterating over the result.
If you want to see an example of the latter two approaches, see os.path.walk() (the old filesystem-walking function with callback) and os.walk() (the new filesystem-walking generator.) Of course, if you really wanted to collect all results in a list, the generator approach is trivial to convert to the big-list approach:
```
big_list = list(the_generator)
``` |
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 102,634 | 71 | 2008-09-19T15:09:28Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | One of the reasons to use generators is to make the solution clearer for some kinds of problems.
The other is to treat results one at a time, avoiding building huge lists of results that you would process one at a time anyway.
If you have a fibonacci-up-to-n function like this:
```
# function version
def fibon(n):
a = b = 1
result = []
for i in xrange(n):
result.append(a)
a, b = b, a + b
return result
```
You can more easily write the function as this:
```
# generator version
def fibon(n):
a = b = 1
for i in xrange(n):
yield a
a, b = b, a + b
```
The function is clearer. And if you use the function like this:
```
for x in fibon(1000000):
print x,
```
in this example, if using the generator version, the whole 1000000 item list won't be created at all, just one value at a time. That would not be the case when using the list version, where a list would be created first. |
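This laziness is easy to check with `itertools.islice` (using `range` here so the sketch also runs on Python 3): only the first five values are ever computed, despite `n` being a million.

```python
from itertools import islice

def fibon(n):
    a = b = 1
    for i in range(n):
        yield a
        a, b = b, a + b

# only five additions ever happen; the other 999,995 values are never produced
assert list(islice(fibon(1000000), 5)) == [1, 1, 2, 3, 5]
```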
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 102,667 | 12 | 2008-09-19T15:13:16Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | My favorite uses are "filter" and "reduce" operations.
Let's say we're reading a file, and only want the lines which begin with "##".
```
def filter2sharps( aSequence ):
for l in aSequence:
if l.startswith("##"):
yield l
```
We can then use the generator function in a proper loop
```
source= file( ... )
for line in filter2sharps( source.readlines() ):
print line
source.close()
```
The reduce example is similar. Let's say we have a file where we need to locate blocks of `<Location>...</Location>` lines. [Not HTML tags, but lines that happen to look tag-like.]
```
def reduceLocation( aSequence ):
keep= False
block= None
for line in aSequence:
if line.startswith("</Location"):
block.append( line )
yield block
block= None
keep= False
        elif line.startswith("<Location"):
block= [ line ]
keep= True
elif keep:
block.append( line )
else:
pass
if block is not None:
yield block # A partial block, icky
```
Again, we can use this generator in a proper for loop.
```
source = file( ... )
for b in reduceLocation( source.readlines() ):
print b
source.close()
```
The idea is that a generator function allows us to filter or reduce a sequence, producing another sequence one value at a time. |
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 102,674 | 26 | 2008-09-19T15:14:10Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | Buffering. When it is efficient to fetch data in large chunks, but process it in small chunks, then a generator might help:
```
def bufferedFetch():
while True:
buffer = getBigChunkOfData()
# insert some code to break on 'end of data'
for i in buffer:
yield i
```
The above lets you easily separate buffering from processing. The consumer function can now just get the values one by one without worrying about buffering. |
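Here is a runnable sketch of the same pattern, with `make_source` standing in for the expensive bulk fetch (both names are invented for illustration; an empty chunk signals end of data):

```python
def make_source(chunks):
    # stand-in for an expensive bulk data source; serves pre-canned chunks
    chunks = iter(chunks)
    def get_big_chunk_of_data():
        return next(chunks)
    return get_big_chunk_of_data

def buffered_fetch(fetch):
    while True:
        buffer = fetch()
        if not buffer:   # an empty chunk signals end of data
            return
        for i in buffer:
            yield i

fetch = make_source([[1, 2, 3], [4, 5], []])
assert list(buffered_fetch(fetch)) == [1, 2, 3, 4, 5]
```

The consumer iterates over flat values and never sees the chunk boundaries.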
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 102,682 | 17 | 2008-09-19T15:15:03Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | The simple explanation:
Consider a `for` statement
```
for item in iterable:
do_stuff()
```
A lot of the time, all the items in `iterable` don't need to be there from the start, but can be generated on the fly as they're required. This can be a lot more efficient in both
* space (you never need to store all the items simultaneously) and
* time (the iteration may finish before all the items are needed).
Other times, you don't even know all the items ahead of time. For example:
```
for command in user_input():
do_stuff_with(command)
```
You have no way of knowing all the user's commands beforehand, but you can use a nice loop like this if you have a generator handing you commands:
```
def user_input():
while True:
wait_for_command()
cmd = get_command()
yield cmd
```
With generators you can also have iteration over infinite sequences, which is of course not possible when iterating over containers. |
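A minimal sketch of that: an infinite generator, consumed lazily with `itertools.islice` (the `naturals` name is made up here):

```python
from itertools import islice

def naturals():
    # an infinite sequence: no container could ever hold all of it
    n = 0
    while True:
        yield n
        n += 1

assert list(islice(naturals(), 5)) == [0, 1, 2, 3, 4]
```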
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 740,763 | 17 | 2009-04-11T20:55:59Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | I have found that generators are very helpful for cleaning up your code, giving you a unique way to encapsulate and modularize it. In a situation where you need something to constantly spit out values based on its own internal processing, and when that something needs to be called from anywhere in your code (and not just within a loop or a block, for example), generators are *the* feature to use.
An abstract example would be a fibonacci number generator that does not live within a loop and when it is called from anywhere will always return the next number in sequence:
```
def fib():
first=0
second=1
yield first
yield second
while 1:
next=first+second
yield next
first=second
second=next
fibgen1=fib()
fibgen2=fib()
```
Now you have two fibonacci number generator objects which you can call from anywhere in your code and they will always return ever larger fibonacci numbers in sequence as follows:
```
>>> fibgen1.next(); fibgen1.next(); fibgen1.next(); fibgen1.next()
0
1
1
2
>>> fibgen2.next(); fibgen2.next()
0
1
>>> fibgen1.next(); fibgen1.next()
3
5
```
The lovely thing about generators is that they encapsulate state without having to go through the hoops of creating objects. One way of thinking about them is as "functions" which remember their internal state.
I got the fibonacci example from <http://www.neotitans.com/resources/python/python-generators-tutorial.html> and with a little imagination, you can come up with a lot of other situations where generators make for a great alternative to for-loops and other traditional iteration constructs. |
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 14,394,854 | 22 | 2013-01-18T08:17:56Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | I found this explanation, which cleared my doubt. There is a good chance that a person who doesn't know about `generators` also doesn't know about `yield`.
**Return**
The return statement is where all the local variables are destroyed and the resulting value is given back (returned) to the caller. Should the same function be called some time later, the function will get a fresh new set of variables.
**Yield**
But what if the local variables aren't thrown away when we exit a function? This implies that we can `resume the function` where we left off. This is where the concept of `generators` is introduced, and the `yield` statement resumes where the `function` left off.
```
def generate_integers(N):
for i in range(N):
yield i
```
---
```
In [1]: gen = generate_integers(3)
In [2]: gen
<generator object at 0x8117f90>
In [3]: gen.next()
0
In [4]: gen.next()
1
In [5]: gen.next()
2
```
So that's the difference between return and yield statements in Python.
**Yield statement is what makes a function a generator function.**
So Generators are a simple and powerful tool for creating iterators. They are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called, the generator resumes where it left off (it remembers all the data values and which statement was last executed). |
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 23,530,101 | 17 | 2014-05-07T23:20:11Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | ## Real World Example
Let's say you have 100 million domains in a MySQL table, and you would like to update the Alexa rank for each domain.
The first thing you need to do is select your domain names from the database.
Let's say your database name is `domains` and the table name is `domain`.
If you use `SELECT domain FROM domains`, it's going to return 100 million rows, which will consume a lot of memory and may crash your server.
So you decide to run the program in batches. Let's say our batch size is 1000.
In our first batch we will query the first 1000 rows, check the Alexa rank for each domain and update the database row.
In our second batch we will work on the next 1000 rows. In our third batch it will be rows 2001 to 3000, and so on.
Now we need a generator function which generates our batches.
Here is our generator function
```
def ResultGenerator(cursor, batchsize=1000):
while True:
results = cursor.fetchmany(batchsize)
if not results:
break
for result in results:
yield result
```
As you can see, our function keeps `yield`ing results. If you used the keyword `return` instead of `yield`, the whole function would end as soon as it reached the return.
```
return - returns only once
yield - returns multiple times
```
If a function uses the keyword `yield`, then it's a generator.
Now you can iterate like this
```
db = MySQLdb.connect(host="localhost", user="root", passwd="root", db="domains")
cursor = db.cursor()
cursor.execute("SELECT domain FROM domains")
for result in ResultGenerator(cursor):
doSomethingWith(result)
db.close()
``` |
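The same generator can be exercised without a database by passing any object with a `fetchmany` method; `FakeCursor` below is a stand-in invented for illustration:

```python
class FakeCursor(object):
    # stand-in for a DB-API cursor: serves rows from a list via fetchmany
    def __init__(self, rows):
        self.rows = list(rows)

    def fetchmany(self, size):
        batch, self.rows = self.rows[:size], self.rows[size:]
        return batch

def ResultGenerator(cursor, batchsize=1000):
    while True:
        results = cursor.fetchmany(batchsize)
        if not results:
            break
        for result in results:
            yield result

rows = [('example%d.com' % i,) for i in range(5)]
assert list(ResultGenerator(FakeCursor(rows), batchsize=2)) == rows
```

At no point are more than `batchsize` rows held by the generator itself.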
What can you use Python generator functions for? | 102,535 | 157 | 2008-09-19T14:58:49Z | 26,074,771 | 7 | 2014-09-27T12:40:27Z | [
"python",
"generator"
] | I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving. | A practical example where you could make use of a generator is if you have some kind of shape and you want to iterate over its corners, edges or whatever. For my own project (source code [here](https://github.com/Pithikos/python-rectangles)) I had a rectangle:
```
class Rect():
def __init__(self, x, y, width, height):
self.l_top = (x, y)
self.r_top = (x+width, y)
self.r_bot = (x+width, y+height)
self.l_bot = (x, y+height)
def __iter__(self):
yield self.l_top
yield self.r_top
yield self.r_bot
yield self.l_bot
```
Now I can create a rectangle and loop over its corners:
```
myrect=Rect(50, 50, 100, 100)
for corner in myrect:
print(corner)
```
Instead of `__iter__` you could have a method `iter_corners` and call that with `for corner in myrect.iter_corners()`. It's just more elegant to use `__iter__`, since then we can use the class instance name directly in the `for` expression. |
How do I merge a 2D array in Python into one string with List Comprehension? | 103,844 | 14 | 2008-09-19T17:21:53Z | 103,895 | 22 | 2008-09-19T17:28:14Z | [
"python",
"list-comprehension"
] | List Comprehension for me seems to be like the opaque block of granite that regular expressions are for me. I need pointers.
Say, I have a 2D list:
```
li = [[0,1,2],[3,4,5],[6,7,8]]
```
I would like to merge this either into one long list
```
li2 = [0,1,2,3,4,5,6,7,8]
```
or into a string with separators:
```
s = "0,1,2,3,4,5,6,7,8"
```
Really, I'd like to know how to do both. | Like so:
```
[ item for innerlist in outerlist for item in innerlist ]
```
Turning that directly into a string with separators:
```
','.join(str(item) for innerlist in outerlist for item in innerlist)
```
Yes, the order of 'for innerlist in outerlist' and 'for item in innerlist' is correct. Even though the "body" of the loop is at the start of the listcomp, the order of nested loops (and 'if' clauses) is still the same as when you would write the loop out:
```
for innerlist in outerlist:
for item in innerlist:
...
``` |
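Applied to the list from the question:

```python
li = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]

# flatten the 2D list into one long list
li2 = [item for innerlist in li for item in innerlist]
assert li2 == [0, 1, 2, 3, 4, 5, 6, 7, 8]

# or join the items directly into a comma-separated string
s = ','.join(str(item) for innerlist in li for item in innerlist)
assert s == '0,1,2,3,4,5,6,7,8'
```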
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 104,426 | 15 | 2008-09-19T18:41:52Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | This solution implements a generator, to avoid holding all the permutations on memory:
```
def permutations (orig_list):
if not isinstance(orig_list, list):
orig_list = list(orig_list)
yield orig_list
if len(orig_list) == 1:
return
for n in sorted(orig_list):
new_list = orig_list[:]
pos = new_list.index(n)
del(new_list[pos])
new_list.insert(0, n)
for resto in permutations(new_list[1:]):
if new_list[:1] + resto <> orig_list:
yield new_list[:1] + resto
``` |
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 104,436 | 190 | 2008-09-19T18:43:09Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | **Starting with Python 2.6** (and if you're on Python 3) you have a **standard-library** tool for this: [`itertools.permutations`](https://docs.python.org/2/library/itertools.html#itertools.permutations).
---
If you're using an **older Python (<2.6)** for some reason or are just curious to know how it works, here's one nice approach, taken from <http://code.activestate.com/recipes/252178/>:
```
def all_perms(elements):
if len(elements) <=1:
yield elements
else:
for perm in all_perms(elements[1:]):
for i in range(len(elements)):
# nb elements[0:1] works in both string and list contexts
yield perm[:i] + elements[0:1] + perm[i:]
```
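A quick check of `all_perms` (restated here so the snippet is self-contained; it's a generator, so wrap it in `list` or `sorted` to materialize the results):

```python
def all_perms(elements):
    if len(elements) <= 1:
        yield elements
    else:
        for perm in all_perms(elements[1:]):
            for i in range(len(elements)):
                # nb elements[0:1] works in both string and list contexts
                yield perm[:i] + elements[0:1] + perm[i:]

assert sorted(all_perms([1, 2, 3])) == [
    [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
assert list(all_perms('ab')) == ['ab', 'ba']
```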
A couple of alternative approaches are listed in the documentation of `itertools.permutations`. Here's one:
```
def permutations(iterable, r=None):
# permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC
# permutations(range(3)) --> 012 021 102 120 201 210
pool = tuple(iterable)
n = len(pool)
r = n if r is None else r
if r > n:
return
indices = range(n)
cycles = range(n, n-r, -1)
yield tuple(pool[i] for i in indices[:r])
while n:
for i in reversed(range(r)):
cycles[i] -= 1
if cycles[i] == 0:
indices[i:] = indices[i+1:] + indices[i:i+1]
cycles[i] = n - i
else:
j = cycles[i]
indices[i], indices[-j] = indices[-j], indices[i]
yield tuple(pool[i] for i in indices[:r])
break
else:
return
```
And another, based on `itertools.product`:
```
def permutations(iterable, r=None):
pool = tuple(iterable)
n = len(pool)
r = n if r is None else r
for indices in product(range(n), repeat=r):
if len(set(indices)) == r:
yield tuple(pool[i] for i in indices)
``` |
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 104,471 | 255 | 2008-09-19T18:48:48Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | And in [Python 2.6](http://docs.python.org/dev/whatsnew/2.6.html) onwards:
```
import itertools
itertools.permutations([1,2,3])
```
(returned as a generator. Use `list(permutations(l))` to return as a list.) |
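For example:

```python
import itertools

perms = list(itertools.permutations([1, 2, 3]))
# permutations are emitted in lexicographic order of the input positions
assert perms == [(1, 2, 3), (1, 3, 2), (2, 1, 3),
                 (2, 3, 1), (3, 1, 2), (3, 2, 1)]
```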
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 108,651 | 10 | 2008-09-20T16:32:47Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | The following code is an in-place permutation of a given list, implemented as a generator. Since it only returns references to the list, the list should not be modified outside the generator.
The solution is non-recursive, so it uses little memory. It also works well with multiple copies of elements in the input list.
```
def permute_in_place(a):
a.sort()
yield list(a)
if len(a) <= 1:
return
first = 0
last = len(a)
while 1:
i = last - 1
while 1:
i = i - 1
if a[i] < a[i+1]:
j = last - 1
while not (a[i] < a[j]):
j = j - 1
a[i], a[j] = a[j], a[i] # swap the values
r = a[i+1:last]
r.reverse()
a[i+1:last] = r
yield list(a)
break
if i == first:
a.reverse()
return
if __name__ == '__main__':
for n in range(5):
for a in permute_in_place(range(1, n+1)):
print a
print
for a in permute_in_place([0, 0, 1, 1, 1]):
print a
print
``` |
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 170,248 | 177 | 2008-10-04T12:18:56Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | *The following code works with Python 2.6 and above ONLY*
First, import `itertools`:
```
import itertools
```
### Permutation (order matters):
```
print list(itertools.permutations([1,2,3,4], 2))
[(1, 2), (1, 3), (1, 4),
(2, 1), (2, 3), (2, 4),
(3, 1), (3, 2), (3, 4),
(4, 1), (4, 2), (4, 3)]
```
### Combination (order does NOT matter):
```
print list(itertools.combinations('123', 2))
[('1', '2'), ('1', '3'), ('2', '3')]
```
### Cartesian product (with several iterables):
```
print list(itertools.product([1,2,3], [4,5,6]))
[(1, 4), (1, 5), (1, 6),
(2, 4), (2, 5), (2, 6),
(3, 4), (3, 5), (3, 6)]
```
### Cartesian product (with one iterable and itself):
```
print list(itertools.product([1,2], repeat=3))
[(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2),
(2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)]
``` |
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 5,501,066 | 8 | 2011-03-31T13:58:40Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | A quite obvious way, in my opinion, might also be:
```
def permutList(l):
if not l:
return [[]]
res = []
for e in l:
temp = l[:]
temp.remove(e)
res.extend([[e] + r for r in permutList(temp)])
return res
``` |
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 7,140,205 | 7 | 2011-08-21T18:28:44Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | ```
list2Perm = [1, 2.0, 'three']
listPerm = [[a, b, c]
for a in list2Perm
for b in list2Perm
for c in list2Perm
if ( a != b and b != c and a != c )
]
print listPerm
```
Output:
```
[
[1, 2.0, 'three'],
[1, 'three', 2.0],
[2.0, 1, 'three'],
[2.0, 'three', 1],
['three', 1, 2.0],
['three', 2.0, 1]
]
``` |
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 7,733,966 | 23 | 2011-10-12T00:14:09Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | ```
def permutations(head, tail=''):
if len(head) == 0: print tail
else:
for i in range(len(head)):
permutations(head[0:i] + head[i+1:], tail+head[i])
```
called as:
```
permutations('abc')
``` |
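A variant of the same recursion that collects the results instead of printing them (a sketch, so it can be checked; it returns the permutations as a list):

```python
def permutations(head, tail=''):
    if len(head) == 0:
        return [tail]
    out = []
    for i in range(len(head)):
        # move character i to the tail, recurse on the rest
        out.extend(permutations(head[0:i] + head[i+1:], tail + head[i]))
    return out

assert permutations('abc') == ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']
```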
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 11,962,517 | 11 | 2012-08-14T23:58:40Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | ```
#!/usr/bin/env python
def perm(a,k=0):
if(k==len(a)):
print a
else:
for i in xrange(k,len(a)):
a[k],a[i] = a[i],a[k]
perm(a, k+1)
a[k],a[i] = a[i],a[k]
perm([1,2,3])
```
Output:
```
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 2, 1]
[3, 1, 2]
```
As I'm swapping the contents of the list, a mutable sequence type is required as input. E.g. `perm(list("ball"))` will work and `perm("ball")` won't, because you can't change a string.
This Python implementation is inspired by the algorithm presented in the book *Computer Algorithms by Horowitz, Sahni and Rajasekeran*. |
How to generate all permutations of a list in Python | 104,420 | 271 | 2008-09-19T18:41:03Z | 17,391,851 | 7 | 2013-06-30T15:17:16Z | [
"python",
"algorithm",
"permutation",
"combinatorics",
"python-2.5"
] | How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
```
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
```
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the **itertools** module:
```
import itertools
itertools.permutations([1, 2, 3])
``` | In a functional style
```
def addperm(x,l):
return [ l[0:i] + [x] + l[i:] for i in range(len(l)+1) ]
def perm(l):
if len(l) == 0:
return [[]]
return [x for y in perm(l[1:]) for x in addperm(l[0],y) ]
print perm([ i for i in range(3)])
```
The result:
```
[[0, 1, 2], [1, 0, 2], [1, 2, 0], [0, 2, 1], [2, 0, 1], [2, 1, 0]]
``` |
What is "thread local storage" in Python, and why do I need it? | 104,983 | 62 | 2008-09-19T19:53:19Z | 105,025 | 48 | 2008-09-19T19:59:40Z | [
"python",
"multithreading",
"thread-local"
] | In Python specifically, how do variables get shared between threads?
Although I have used `threading.Thread` before I never really understood or saw examples of how variables got shared. Are they shared between the main thread and the children or only among the children? When would I need to use thread local storage to avoid this sharing?
I have seen many warnings about synchronizing access to shared data among threads by using locks but I have yet to see a really good example of the problem.
Thanks in advance! | In Python, everything is shared, except for function-local variables (because each function call gets its own set of locals, and threads are always separate function calls.) And even then, only the variables themselves (the names that refer to objects) are local to the function; objects themselves are always global, and anything can refer to them.
The `Thread` object for a particular thread is not a special object in this regard. If you store the `Thread` object somewhere all threads can access (like a global variable) then all threads can access that one `Thread` object. If you want to atomically modify *anything* that you didn't just create in this very same thread, and did not store anywhere another thread can get at it, you have to protect it by a lock. And all threads must of course share this very same lock, or it wouldn't be very effective.
If you want actual thread-local storage, that's where `threading.local` comes in. Attributes of `threading.local` are not shared between threads; each thread sees only the attributes it itself placed in there. If you're curious about its implementation, the source is in `_threading_local.py` in the standard library. |
What is "thread local storage" in Python, and why do I need it? | 104,983 | 62 | 2008-09-19T19:53:19Z | 106,560 | 13 | 2008-09-20T00:31:24Z | [
"python",
"multithreading",
"thread-local"
] | In Python specifically, how do variables get shared between threads?
Although I have used `threading.Thread` before I never really understood or saw examples of how variables got shared. Are they shared between the main thread and the children or only among the children? When would I need to use thread local storage to avoid this sharing?
I have seen many warnings about synchronizing access to shared data among threads by using locks but I have yet to see a really good example of the problem.
Thanks in advance! | You can create thread local storage using `threading.local()`.
```
>>> tls = threading.local()
>>> tls.x = 4
>>> tls.x
4
```
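To see the isolation, a small sketch: an attribute set in the main thread is simply absent when the same `local` object is read from another thread:

```python
import threading

tls = threading.local()
tls.x = 4

observed = []

def worker():
    # this thread never set tls.x, so the attribute does not exist here
    observed.append(hasattr(tls, 'x'))

t = threading.Thread(target=worker)
t.start()
t.join()
assert tls.x == 4           # the main thread's value is untouched
assert observed == [False]  # the worker saw no 'x' at all
```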
Data stored to the tls will be unique to each thread which will help ensure that unintentional sharing does not occur. |
What is "thread local storage" in Python, and why do I need it? | 104,983 | 62 | 2008-09-19T19:53:19Z | 1,894,371 | 44 | 2009-12-12T18:58:50Z | [
"python",
"multithreading",
"thread-local"
] | In Python specifically, how do variables get shared between threads?
Although I have used `threading.Thread` before I never really understood or saw examples of how variables got shared. Are they shared between the main thread and the children or only among the children? When would I need to use thread local storage to avoid this sharing?
I have seen many warnings about synchronizing access to shared data among threads by using locks but I have yet to see a really good example of the problem.
Thanks in advance! | Consider the following code:
```
#!/usr/bin/env python
from time import sleep
from random import random
from threading import Thread, local
data = local()
def bar():
print "I'm called from", data.v
def foo():
bar()
class T(Thread):
def run(self):
sleep(random())
data.v = self.getName() # Thread-1 and Thread-2 accordingly
sleep(1)
foo()
```
```
>> T().start(); T().start()
I'm called from Thread-2
I'm called from Thread-1
```
Here `threading.local()` is used as a quick-and-dirty way to pass some data from `run()` to `bar()` without changing the interface of `foo()`.
Note that using global variables won't do the trick:
```
#!/usr/bin/env python
from time import sleep
from random import random
from threading import Thread
def bar():
global v
print "I'm called from", v
def foo():
bar()
class T(Thread):
def run(self):
global v
sleep(random())
v = self.getName() # Thread-1 and Thread-2 accordingly
sleep(1)
foo()
```
```
>> T().start(); T().start()
I'm called from Thread-2
I'm called from Thread-2
```
Meanwhile, if you could afford passing this data through as an argument of foo() - it would be a more elegant and well-designed way:
```
from threading import Thread
def bar(v):
print "I'm called from", v
def foo(v):
bar(v)
class T(Thread):
def run(self):
foo(self.getName())
```
But this is not always possible when using third-party or poorly designed code. |
Are locks unnecessary in multi-threaded Python code because of the GIL? | 105,095 | 51 | 2008-09-19T20:07:37Z | 105,145 | 20 | 2008-09-19T20:13:54Z | [
"python",
"multithreading",
"locking"
] | If you are relying on an implementation of Python that has a Global Interpreter Lock (i.e. CPython) and writing multithreaded code, do you really need locks at all?
If the GIL doesn't allow multiple instructions to be executed in parallel, wouldn't shared data be unnecessary to protect?
sorry if this is a dumb question, but it is something I have always wondered about Python on multi-processor/core machines.
same thing would apply to any other language implementation that has a GIL. | No - the GIL just protects Python internals from multiple threads altering their state. This is a very low level of locking, sufficient only to keep Python's own structures in a consistent state. It doesn't cover the *application-level* locking you'll need to do to ensure thread safety in your own code.
The essence of locking is to ensure that a particular *block* of code is only executed by one thread. The GIL enforces this for blocks the size of a single bytecode, but usually you want the lock to span a larger block of code than this. |
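A sketch of what that application-level locking looks like — the lock spans the whole read-modify-write, which is several bytecodes long:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:        # guard the entire read-modify-write block
            counter += 1

threads = [threading.Thread(target=bump, args=(50000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 200000  # no lost updates
```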
Are locks unnecessary in multi-threaded Python code because of the GIL? | 105,095 | 51 | 2008-09-19T20:07:37Z | 105,272 | 7 | 2008-09-19T20:24:50Z | [
"python",
"multithreading",
"locking"
] | If you are relying on an implementation of Python that has a Global Interpreter Lock (i.e. CPython) and writing multithreaded code, do you really need locks at all?
If the GIL doesn't allow multiple instructions to be executed in parallel, wouldn't shared data be unnecessary to protect?
sorry if this is a dumb question, but it is something I have always wondered about Python on multi-processor/core machines.
same thing would apply to any other language implementation that has a GIL. | The Global Interpreter Lock prevents threads from accessing the *interpreter* simultaneously (thus CPython only ever uses one core). However, as I understand it, the threads are still interrupted and scheduled *preemptively*, which means you still need locks on shared data structures, lest your threads stomp on each other's toes.
The answer I've encountered time and time again is that multithreading in Python is rarely worth the overhead, because of this. I've heard good things about the [PyProcessing](http://pyprocessing.berlios.de/) project, which makes running multiple processes as "simple" as multithreading, with shared data structures, queues, etc. (PyProcessing will be introduced into the standard library of the upcoming Python 2.6 as the [multiprocessing](http://www.python.org/dev/peps/pep-0371/) module.) This gets you around the GIL, as each process has its own interpreter. |
Are locks unnecessary in multi-threaded Python code because of the GIL? | 105,095 | 51 | 2008-09-19T20:07:37Z | 105,369 | 57 | 2008-09-19T20:34:40Z | [
"python",
"multithreading",
"locking"
] | If you are relying on an implementation of Python that has a Global Interpreter Lock (i.e. CPython) and writing multithreaded code, do you really need locks at all?
If the GIL doesn't allow multiple instructions to be executed in parallel, wouldn't shared data be unnecessary to protect?
sorry if this is a dumb question, but it is something I have always wondered about Python on multi-processor/core machines.
same thing would apply to any other language implementation that has a GIL. | You will still need locks if you share state between threads. The GIL only protects the interpreter internally. You can still have inconsistent updates in your own code.
For example:
```
#!/usr/bin/env python
import threading
shared_balance = 0
class Deposit(threading.Thread):
def run(self):
for _ in xrange(1000000):
global shared_balance
balance = shared_balance
balance += 100
shared_balance = balance
class Withdraw(threading.Thread):
def run(self):
for _ in xrange(1000000):
global shared_balance
balance = shared_balance
balance -= 100
shared_balance = balance
threads = [Deposit(), Withdraw()]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
print shared_balance
```
Here, your code can be interrupted between reading the shared state (`balance = shared_balance`) and writing the changed result back (`shared_balance = balance`), causing a lost update. The result is a random value for the shared state.
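One way to close that window is to guard the read-modify-write with a lock shared by both threads; a sketch of the same example with that fix (the loop bodies are condensed into one function):

```python
import threading

shared_balance = 0
balance_lock = threading.Lock()

def change(amount, times):
    global shared_balance
    for _ in range(times):
        with balance_lock:            # read, modify and write as one unit
            shared_balance += amount

threads = [threading.Thread(target=change, args=(100, 50000)),
           threading.Thread(target=change, args=(-100, 50000))]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert shared_balance == 0  # deposits and withdrawals now balance exactly
```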
To make the updates consistent, run methods would need to lock the shared state around the read-modify-write sections (inside the loops) or have [some way to detect when the shared state had changed since it was read](http://en.wikipedia.org/wiki/Software_transactional_memory). |
Are locks unnecessary in multi-threaded Python code because of the GIL? | 105,095 | 51 | 2008-09-19T20:07:37Z | 241,814 | 7 | 2008-10-28T00:16:12Z | [
"python",
"multithreading",
"locking"
] | If you are relying on an implementation of Python that has a Global Interpreter Lock (i.e. CPython) and writing multithreaded code, do you really need locks at all?
If the GIL doesn't allow multiple instructions to be executed in parallel, wouldn't shared data be unnecessary to protect?
sorry if this is a dumb question, but it is something I have always wondered about Python on multi-processor/core machines.
same thing would apply to any other language implementation that has a GIL. | Adding to the discussion:
Because the GIL exists, some operations are atomic in CPython and do not need a lock.
<http://www.python.org/doc/faq/library/#what-kinds-of-global-value-mutation-are-thread-safe>
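For instance, a single `list.append` call runs atomically while CPython holds the GIL, so concurrent appends never lose items — a small sketch:

```python
import threading

results = []

def append_many(n):
    for i in range(n):
        results.append(i)  # each append is atomic under CPython's GIL

threads = [threading.Thread(target=append_many, args=(10000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(results) == 50000  # no appends were lost
```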
As stated by the other answers, however, you *still* need to use locks whenever the application logic requires them (such as in a Producer/Consumer problem). |
Making a beta code for a public django site | 105,702 | 11 | 2008-09-19T21:17:20Z | 106,212 | 18 | 2008-09-19T22:43:40Z | [
"python",
"django",
"authentication",
"django-authentication"
] | I'm about to put a beta version of the site I'm working on up on the web. It needs to have a beta code to restrict access. The site is written in django.
I don't want to change the fundamental Auth system to accommodate a beta code, and I don't care particularly that the security of the beta code is iron-clad, just that it's a significant stumbling block.
How should I do this? It's a fairly large project, so adding code to every view is far from ideal.
---
That solution works well. The middleware class I ended up with is this:
```
from django.http import HttpResponseRedirect
class BetaMiddleware(object):
"""
Require beta code session key in order to view any page.
"""
def process_request(self, request):
if request.path != '/beta/' and not request.session.get('in_beta'):
return HttpResponseRedirect('%s?next=%s' % ('/beta/', request.path))
``` | Start with [this Django snippet](http://www.djangosnippets.org/snippets/136/), but modify it to check `request.session['has_beta_access']`. If they don't have it, then have it return a redirect to an "enter beta code" page that, when posted to with the right code, sets that session variable to `True`.
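The decision logic of such middleware can be distilled into a plain function, independent of Django (the names `beta_gate` and `in_beta` here are illustrative, not part of any framework API):

```python
def beta_gate(path, session, beta_path='/beta/'):
    # Return a redirect target, or None to let the request through.
    if path != beta_path and not session.get('in_beta'):
        return '%s?next=%s' % (beta_path, path)
    return None

assert beta_gate('/home/', {}) == '/beta/?next=/home/'  # not yet in beta
assert beta_gate('/home/', {'in_beta': True}) is None   # code entered: pass
assert beta_gate('/beta/', {}) is None                  # the gate page itself
```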
Making it a public beta then just consists of removing that middleware from your `MIDDLEWARE_CLASSES` setting. |
Why unicode() uses str() on my object only with no encoding given? | 106,630 | 5 | 2008-09-20T00:53:24Z | 106,709 | 7 | 2008-09-20T01:32:09Z | [
"python",
"unicode",
"encoding"
] | I start by creating a string variable with some **non-ascii** *utf-8* encoded data on it:
```
>>> text = 'á'
>>> text
'\xc3\xa1'
>>> text.decode('utf-8')
u'\xe1'
```
Using `unicode()` on it raises errors...
```
>>> unicode(text)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
```
...but if I know the encoding I can use it as second parameter:
```
>>> unicode(text, 'utf-8')
u'\xe1'
>>> unicode(text, 'utf-8') == text.decode('utf-8')
True
```
Now if I have a class that returns this text in the `__str__()` method:
```
>>> class ReturnsEncoded(object):
... def __str__(self):
... return text
...
>>> r = ReturnsEncoded()
>>> str(r)
'\xc3\xa1'
```
`unicode(r)` seems to use `str()` on it, since it raises the same error as `unicode(text)` above:
```
>>> unicode(r)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
```
Until now everything is as planned!
**But as no one would ever expect, `unicode(r, 'utf-8')` won't even try:**
```
>>> unicode(r, 'utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found
```
Why? Why this inconsistent behavior? Is it a bug? Is it intended? Very awkward. | The behaviour does seem confusing, but it is intentional. I reproduce here the entirety of the unicode documentation from the [Python Built-In Functions documentation](http://docs.python.org/lib/built-in-funcs.html) (for version 2.5.2, as I write this):
> **unicode([object[, encoding [, errors]]])**
>
> Return the Unicode string version of object using one of the following modes:
>
> If encoding and/or errors are given, unicode() will decode the
> object which can either be an 8-bit string or a character buffer
> using the codec for encoding. The encoding parameter is a string
> giving the name of an encoding; if the encoding is not known,
> LookupError is raised. Error handling is done according to
> errors; this specifies the treatment of characters which are
> invalid in the input encoding. If errors is 'strict' (the
> default), a ValueError is raised on errors, while a value of
> 'ignore' causes errors to be silently ignored, and a value of
> 'replace' causes the official Unicode replacement character,
> U+FFFD, to be used to replace input characters which cannot be
> decoded. See also the [codecs](http://docs.python.org/lib/module-codecs.html) module.
>
> If no optional parameters are given, unicode() will mimic the
> behaviour of str() except that it returns Unicode strings
> instead of 8-bit strings. More precisely, if object is a Unicode
> string or subclass it will return that Unicode string without
> any additional decoding applied.
>
> For objects which provide a \_\_unicode\_\_() method, it will call
> this method without arguments to create a Unicode string. For
> all other objects, the 8-bit string version or representation is
> requested and then converted to a Unicode string using the codec
> for the default encoding in 'strict' mode.
>
> New in version 2.0. Changed in version 2.2: Support for \_\_unicode\_\_() added.
So, when you call `unicode(r, 'utf-8')`, it requires an 8-bit string or a character buffer as the first argument, so it coerces your object using the `__str__()` method, and attempts to decode that using the `utf-8` codec. Without the `utf-8`, the `unicode()` function looks for a `__unicode__()` method on your object, and not finding it, calls the `__str__()` method, as you suggested, attempting to use the default codec to convert to unicode. |
How to bundle a Python application including dependencies? | 106,725 | 43 | 2008-09-20T01:39:48Z | 106,730 | 25 | 2008-09-20T01:41:20Z | [
"python",
"tkinter",
"packaging"
] | I need to package my python application, its dependencies and python into a single MSI installer. The end result should desirably be:
* Python is installed in the standard location
* the package and its dependencies are installed in a separate directory (possibly site-packages)
* the installation directory should contain the python uncompressed and a standalone executable is not required | Kind of a dup of this question about [how to make a python into an executable](http://stackoverflow.com/questions/2933/an-executable-python-app).
It boils down to:
[py2exe](http://www.py2exe.org/) on windows, [Freeze](http://wiki.python.org/moin/Freeze) on Linux, and
[py2app](http://svn.pythonmac.org/py2app/py2app/trunk/doc/index.html) on Mac. |
How to bundle a Python application including dependencies? | 106,725 | 43 | 2008-09-20T01:39:48Z | 106,756 | 13 | 2008-09-20T01:52:20Z | [
"python",
"tkinter",
"packaging"
] | I need to package my python application, its dependencies and python into a single MSI installer. The end result should desirably be:
* Python is installed in the standard location
* the package and its dependencies are installed in a separate directory (possibly site-packages)
* the installation directory should contain the python uncompressed and a standalone executable is not required | I use [PyInstaller](http://pyinstaller.python-hosting.com/) (the svn version) to create a stand-alone version of my program that includes Python and all the dependencies. It takes a little fiddling to get it to work right and include everything (as does py2exe and other similar programs, see [this question](http://stackoverflow.com/questions/2933/an-executable-python-app)), but then it works very well.
You then need to create an installer. [NSIS](http://nsis.sourceforge.net/Main_Page) Works great for that and is free, but it creates .exe files not .msi. If .msi is not necessary, I highly recommend it. Otherwise check out the answers to [this](http://stackoverflow.com/questions/3767/what-is-the-best-choice-for-building-windows-installers) question for other options. |
Unit Testing File Modifications | 106,766 | 22 | 2008-09-20T01:56:25Z | 111,199 | 14 | 2008-09-21T15:16:34Z | [
"python",
"linux",
"unit-testing"
] | A common task in programs I've been working on lately is modifying a text file in some way. (Hey, I'm on Linux. Everything's a file. And I do large-scale system admin.)
But the file the code modifies may not exist on my desktop box. And I probably don't want to modify it if it IS on my desktop.
I've read about unit testing in Dive Into Python, and it's pretty clear what I want to do when testing an app that converts decimal to Roman Numerals (the example in DintoP). The testing is nicely self-contained. You don't need to verify that the program PRINTS the right thing, you just need to verify that the functions are returning the right output to a given input.
In my case, however, we need to test that the program is modifying its environment correctly. Here's what I've come up with:
1) Create the "original" file in a standard location, perhaps /tmp.
2) Run the function that modifies the file, passing it the path to the file in /tmp.
3) Verify that the file in /tmp was changed correctly; pass/fail unit test accordingly.
This seems kludgy to me. (Gets even kludgier if you want to verify that backup copies of the file are created properly, etc.) Has anyone come up with a better way? | You're talking about testing too much at once. If you start trying to attack a testing problem by saying "Let's verify that it modifies its environment correctly", you're doomed to failure. Environments have dozens, maybe even millions of potential variations.
Instead, look at the pieces ("units") of your program. For example, are you going to have a function that determines where the files are that have to be written? What are the inputs to that function? Perhaps an environment variable, perhaps some values read from a config file? Test that function, and don't actually do anything that modifies the filesystem. Don't pass it "realistic" values, pass it values that are easy to verify against. Make a temporary directory, populate it with files in your test's `setUp` method.
Then test the code that writes the files. Just make sure it's writing the right file contents. Don't even write to a real filesystem! You don't need to make "fake" file objects for this, just use Python's handy `StringIO` module; it's a "real" implementation of the "file" interface, it's just not the one that your program is actually going to be writing to.
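A sketch of that idea — `write_report` stands in for a hypothetical function under test; because it accepts any file-like object, the test hands it a `StringIO` instead of a real file:

```python
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO        # Python 3

def write_report(out, lines):
    # hypothetical code under test: writes to whatever file-like it is given
    for line in lines:
        out.write(line + '\n')

buf = StringIO()
write_report(buf, ['alpha', 'beta'])
assert buf.getvalue() == 'alpha\nbeta\n'  # contents verified, no filesystem touched
```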
Ultimately you will have to test the final, everything-is-actually-hooked-up-for-real top-level function that passes the real environment variable and the real config file and puts everything together. But don't worry about that to get started. For one thing, you will start picking up tricks as you write individual tests for smaller functions and creating test mocks, fakes, and stubs will become second nature to you. For another: even if you can't quite figure out how to test that one function call, you will have a very high level of confidence that everything which it is calling works perfectly. Also, you'll notice that test-driven development forces you to make your APIs clearer and more flexible. For example: it's much easier to test something that calls an `open()` method on an object that came from somewhere abstract, than to test something that calls `os.open` on a string that you pass it. The `open` method is flexible; it can be faked, it can be implemented differently, but a string is a string and `os.open` doesn't give you any leeway to catch what methods are called on it.
You can also build testing tools to make repetitive tasks easy. For example, twisted provides facilities for creating temporary files for testing [built right into its testing tool](http://twistedmatrix.com/documents/8.1.0/api/twisted.trial.unittest.TestCase.html#mktemp). It's not uncommon for testing tools or larger projects with their own test libraries to have functionality like this. |
How do you send a HEAD HTTP request in Python 2? | 107,405 | 97 | 2008-09-20T06:38:38Z | 107,427 | 94 | 2008-09-20T06:45:45Z | [
"python",
"python-2.7",
"http",
"http-headers",
"content-type"
] | What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if `http://somedomain/foo/` will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this? | **edit**: This answer works, but nowadays you should just use the [requests](http://docs.python-requests.org/en/latest/index.html) library as mentioned by other answers below.
---
Use [httplib](https://docs.python.org/2/library/httplib.html).
```
>>> import httplib
>>> conn = httplib.HTTPConnection("www.google.com")
>>> conn.request("HEAD", "/index.html")
>>> res = conn.getresponse()
>>> print res.status, res.reason
200 OK
>>> print res.getheaders()
[('content-length', '0'), ('expires', '-1'), ('server', 'gws'), ('cache-control', 'private, max-age=0'), ('date', 'Sat, 20 Sep 2008 06:43:36 GMT'), ('content-type', 'text/html; charset=ISO-8859-1')]
```
There's also a `getheader(name)` to get a specific header. |
How do you send a HEAD HTTP request in Python 2? | 107,405 | 97 | 2008-09-20T06:38:38Z | 2,070,916 | 99 | 2010-01-15T10:50:52Z | [
"python",
"python-2.7",
"http",
"http-headers",
"content-type"
] | What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if `http://somedomain/foo/` will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this? | [urllib2](https://docs.python.org/2/library/urllib2.html) can be used to perform a HEAD request. This is a little nicer than using httplib since urllib2 parses the URL for you instead of requiring you to split the URL into host name and path.
```
>>> import urllib2
>>> class HeadRequest(urllib2.Request):
... def get_method(self):
... return "HEAD"
...
>>> response = urllib2.urlopen(HeadRequest("http://google.com/index.html"))
```
Headers are available via response.info() as before. Interestingly, you can find the URL that you were redirected to:
```
>>> print response.geturl()
http://www.google.com.au/index.html
``` |
How do you send a HEAD HTTP request in Python 2? | 107,405 | 97 | 2008-09-20T06:38:38Z | 4,421,712 | 15 | 2010-12-12T12:45:54Z | [
"python",
"python-2.7",
"http",
"http-headers",
"content-type"
] | What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if `http://somedomain/foo/` will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this? | Just:
```
import urllib2
request = urllib2.Request('http://localhost:8080')
request.get_method = lambda: 'HEAD'
response = urllib2.urlopen(request)
response.info().gettype()
```
Edit: I've just come to realize there is httplib2 :D
```
import httplib2
h = httplib2.Http()
resp = h.request("http://www.google.com", 'HEAD')
assert resp[0]['status'] == 200
assert resp[0]['content-type'] == 'text/html'
...
```
[link text](http://httplib2.googlecode.com/hg/doc/html/libhttplib2.html) |
How do you send a HEAD HTTP request in Python 2? | 107,405 | 97 | 2008-09-20T06:38:38Z | 7,387,509 | 33 | 2011-09-12T12:02:47Z | [
"python",
"python-2.7",
"http",
"http-headers",
"content-type"
] | What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if `http://somedomain/foo/` will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this? | I believe the [Requests](http://docs.python-requests.org/en/latest/index.html) library should be mentioned as well. |
How do you send a HEAD HTTP request in Python 2? | 107,405 | 97 | 2008-09-20T06:38:38Z | 12,997,216 | 38 | 2012-10-21T11:00:49Z | [
"python",
"python-2.7",
"http",
"http-headers",
"content-type"
] | What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if `http://somedomain/foo/` will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this? | Obligatory [`Requests`](http://docs.python-requests.org/en/latest/) way:
```
import requests
resp = requests.head("http://www.google.com")
print resp.status_code, resp.text, resp.headers
``` |
XML-RPC: best way to handle 64-bit values? | 107,616 | 3 | 2008-09-20T08:27:11Z | 108,032 | 7 | 2008-09-20T12:29:16Z | [
"c++",
"python",
"64bit",
"xml-rpc"
] | So the official XML-RPC standard doesn't support 64-bit values. But in these modern times, 64-bit values are increasingly common.
How do you handle these? What XML-RPC extensions are the most common? What language bindings are there? I'm especially interested in Python and C++, but all information is appreciated. | Some libraries support 64 bits extensions, indeed, but there doesn't seem to be a standard. [xmlrpc-c](http://xmlrpc-c.sourceforge.net/), for example, has a so called i8 but it doesn't work with python (at least not by default).
I would recommend to either:
* Convert the integer to string by hand and send it as such. XMLRPC will convert it to string anyway, so I would say this is reasonable.
* Break it in two 32 bits integers and send it as such. |
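A sketch of the second option — splitting into two halves and recombining on the other side. This is a homegrown convention, not part of the XML-RPC standard, and note that the standard `i4` type is signed, so a real transport would need extra care with the high bit:

```python
def split64(value):
    # split an unsigned 64-bit value into two unsigned 32-bit halves
    return (value >> 32) & 0xFFFFFFFF, value & 0xFFFFFFFF

def join64(high, low):
    # reassemble the original 64-bit value from the two halves
    return (high << 32) | low

v = 0x1234567890ABCDEF
high, low = split64(v)
assert (high, low) == (0x12345678, 0x90ABCDEF)
assert join64(high, low) == v
```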
How can I unit test responses from the webapp WSGI application in Google App Engine? | 107,675 | 10 | 2008-09-20T08:56:59Z | 114,449 | 10 | 2008-09-22T12:02:08Z | [
"python",
"unit-testing",
"google-app-engine"
] | I'd like to unit test responses from the Google App Engine webapp.WSGIApplication, for example request the url '/' and test that the responses status code is 200, using [GAEUnit](http://code.google.com/p/gaeunit). How can I do this?
I'd like to use the webapp framework and GAEUnit, which runs within the App Engine sandbox (unfortunately [WebTest](http://pythonpaste.org/webtest/) does not work within the sandbox). | I have added a [sample application](http://code.google.com/p/gaeunit/source/browse/#svn/trunk/sample_app) to the GAEUnit project which demonstrates how to write and execute a web test using GAEUnit. The sample includes a slightly modified version of the '[webtest](http://pythonpaste.org/webtest/index.html)' module ('import webbrowser' is commented out, as recommended by David Coffin).
Here's the 'web\_tests.py' file from the sample application 'test' directory:
```
import unittest
from webtest import TestApp
from google.appengine.ext import webapp
import index
class IndexTest(unittest.TestCase):
def setUp(self):
self.application = webapp.WSGIApplication([('/', index.IndexHandler)], debug=True)
def test_default_page(self):
app = TestApp(self.application)
response = app.get('/')
self.assertEqual('200 OK', response.status)
self.assertTrue('Hello, World!' in response)
def test_page_with_param(self):
app = TestApp(self.application)
response = app.get('/?name=Bob')
self.assertEqual('200 OK', response.status)
self.assertTrue('Hello, Bob!' in response)
``` |
Disable output buffering | 107,705 | 285 | 2008-09-20T09:17:20Z | 107,717 | 251 | 2008-09-20T09:24:31Z | [
"python",
"stdout",
"buffered"
] | Is output buffering enabled by default in Python's interpreter for `sys.stdout`?
If the answer is positive, what are all the ways to disable it?
Suggestions so far:
1. Use the `-u` command line switch
2. Wrap `sys.stdout` in an object that flushes after every write
3. Set `PYTHONUNBUFFERED` env var
4. `sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)`
Is there any other way to set some global flag in `sys`/`sys.stdout` programmatically during execution? | From [Magnus Lycka answer on a mailing list](http://mail.python.org/pipermail/tutor/2003-November/026645.html):
> You can skip buffering for a whole
> python process using "python -u"
> (or#!/usr/bin/env python -u etc) or by
> setting the environment variable
> PYTHONUNBUFFERED.
>
> You could also replace sys.stdout with
> some other stream like wrapper which
> does a flush after every call.
>
> ```
> class Unbuffered(object):
> def __init__(self, stream):
> self.stream = stream
> def write(self, data):
> self.stream.write(data)
> self.stream.flush()
> def __getattr__(self, attr):
> return getattr(self.stream, attr)
>
> import sys
> sys.stdout = Unbuffered(sys.stdout)
> print 'Hello'
> ``` |
Disable output buffering | 107,705 | 285 | 2008-09-20T09:17:20Z | 107,720 | 24 | 2008-09-20T09:25:36Z | [
"python",
"stdout",
"buffered"
] | Is output buffering enabled by default in Python's interpreter for `sys.stdout`?
If the answer is positive, what are all the ways to disable it?
Suggestions so far:
1. Use the `-u` command line switch
2. Wrap `sys.stdout` in an object that flushes after every write
3. Set `PYTHONUNBUFFERED` env var
4. `sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)`
Is there any other way to set some global flag in `sys`/`sys.stdout` programmatically during execution? | Yes, it is.
You can disable it on the commandline with the "-u" switch.
Alternatively, you could call .flush() on sys.stdout on every write (or wrap it with an object that does this automatically) |
Disable output buffering | 107,705 | 285 | 2008-09-20T09:17:20Z | 181,654 | 58 | 2008-10-08T07:23:10Z | [
"python",
"stdout",
"buffered"
] | Is output buffering enabled by default in Python's interpreter for `sys.stdout`?
If the answer is positive, what are all the ways to disable it?
Suggestions so far:
1. Use the `-u` command line switch
2. Wrap `sys.stdout` in an object that flushes after every write
3. Set `PYTHONUNBUFFERED` env var
4. `sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)`
Is there any other way to set some global flag in `sys`/`sys.stdout` programmatically during execution? | ```
# reopen stdout file descriptor with write mode
# and 0 as the buffer size (unbuffered)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
```
Credits: "Sebastian", somewhere on the Python mailing list. |
Disable output buffering | 107,705 | 285 | 2008-09-20T09:17:20Z | 3,678,114 | 10 | 2010-09-09T15:37:53Z | [
"python",
"stdout",
"buffered"
] | Is output buffering enabled by default in Python's interpreter for `sys.stdout`?
If the answer is positive, what are all the ways to disable it?
Suggestions so far:
1. Use the `-u` command line switch
2. Wrap `sys.stdout` in an object that flushes after every write
3. Set `PYTHONUNBUFFERED` env var
4. `sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)`
Is there any other way to set some global flag in `sys`/`sys.stdout` programmatically during execution? | ```
import gc
import os
import subprocess
import sys

def disable_stdout_buffering():
    # Appending to gc.garbage is a way to stop an object from being
    # destroyed. If the old sys.stdout is ever collected, it will
    # close() stdout, which is not good.
    gc.garbage.append(sys.stdout)
    sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

# Then this will give output in the correct order:
disable_stdout_buffering()
print "hello"
subprocess.call(["echo", "bye"])
```
Without saving the old sys.stdout, disable\_stdout\_buffering() isn't idempotent, and multiple calls will result in an error like this:
```
Traceback (most recent call last):
File "test/buffering.py", line 17, in <module>
print "hello"
IOError: [Errno 9] Bad file descriptor
close failed: [Errno 9] Bad file descriptor
```
Another possibility is:
```
def disable_stdout_buffering():
    fileno = sys.stdout.fileno()
    temp_fd = os.dup(fileno)
    sys.stdout.close()
    os.dup2(temp_fd, fileno)
    os.close(temp_fd)
    sys.stdout = os.fdopen(fileno, "w", 0)
```
(Appending to gc.garbage is not such a good idea because it's where unfreeable cycles get put, and you might want to check for those.) |
Disable output buffering | 107,705 | 285 | 2008-09-20T09:17:20Z | 14,729,823 | 31 | 2013-02-06T13:05:23Z | [
"python",
"stdout",
"buffered"
] | Is output buffering enabled by default in Python's interpreter for `sys.stdout`?
If the answer is positive, what are all the ways to disable it?
Suggestions so far:
1. Use the `-u` command line switch
2. Wrap `sys.stdout` in an object that flushes after every write
3. Set `PYTHONUNBUFFERED` env var
4. `sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)`
Is there any other way to set some global flag in `sys`/`sys.stdout` programmatically during execution? | I would rather put my answer in [How to flush output of Python print?](http://stackoverflow.com/questions/230751/how-to-flush-output-of-python-print) or in [Python's print function that flushes the buffer when it's called?](http://stackoverflow.com/questions/3895481/pythons-print-function-that-flushes-the-buffer-when-its-called), but since they were marked as duplicates of this one (what I do not agree), I'll answer it here.
Since Python 3.3 print() supports the keyword argument "flush" ([see documentation](http://docs.python.org/3/library/functions.html?highlight=print#print)):
```
print('Hello World!', flush=True)
``` |
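If you only want this behaviour for `print` (rather than for all writes to `sys.stdout`), you can also bake the argument in with `functools.partial`; a small sketch, not part of the original answer:

```
from functools import partial

# Rebind print so every call flushes, with no per-call flush=True needed.
print = partial(print, flush=True)

print('Hello World!')  # flushed immediately
```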
Union and Intersect in Django | 108,193 | 30 | 2008-09-20T13:46:52Z | 108,404 | 19 | 2008-09-20T15:01:12Z | [
"python",
"django",
"django-models",
"django-views",
"tagging"
] | ```
class Tag(models.Model):
    name = models.CharField(maxlength=100)

class Blog(models.Model):
    name = models.CharField(maxlength=100)
    tags = models.ManyToManyField(Tag)
```
Simple models just to ask my question.
I wonder how I can query blogs using tags in a few different ways.
* Blog entries that are tagged with "tag1" or "tag2":
`Blog.objects.filter(tags_in=[1,2]).distinct()`
* Blog objects that are tagged with "tag1" and "tag2" : **?**
* Blog objects that are tagged with exactly "tag1" and "tag2" and nothing else : **??**
---
Tag and Blog is just used for an example. | You could use Q objects for #1:
```
# Blogs who have either hockey or django tags.
from django.db.models import Q
Blog.objects.filter(
    Q(tags__name__iexact='hockey') | Q(tags__name__iexact='django')
)
```
Unions and intersections, I believe, are a bit outside the scope of the Django ORM, but it's possible to do these. The following examples are from a Django application called [django-tagging](http://code.google.com/p/django-tagging/) that provides the functionality.
For part two, you're looking for a union of two queries; see [line 346 of models.py](http://code.google.com/p/django-tagging/source/browse/trunk/tagging/models.py#346):
```
def get_union_by_model(self, queryset_or_model, tags):
    """
    Create a ``QuerySet`` containing instances of the specified
    model associated with *any* of the given list of tags.
    """
    tags = get_tag_list(tags)
    tag_count = len(tags)
    queryset, model = get_queryset_and_model(queryset_or_model)
    if not tag_count:
        return model._default_manager.none()
    model_table = qn(model._meta.db_table)
    # This query selects the ids of all objects which have any of
    # the given tags.
    query = """
    SELECT %(model_pk)s
    FROM %(model)s, %(tagged_item)s
    WHERE %(tagged_item)s.content_type_id = %(content_type_id)s
      AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s)
      AND %(model_pk)s = %(tagged_item)s.object_id
    GROUP BY %(model_pk)s""" % {
        'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)),
        'model': model_table,
        'tagged_item': qn(self.model._meta.db_table),
        'content_type_id': ContentType.objects.get_for_model(model).pk,
        'tag_id_placeholders': ','.join(['%s'] * tag_count),
    }
    cursor = connection.cursor()
    cursor.execute(query, [tag.pk for tag in tags])
    object_ids = [row[0] for row in cursor.fetchall()]
    if len(object_ids) > 0:
        return queryset.filter(pk__in=object_ids)
    else:
        return model._default_manager.none()
```
For part #3 I believe you're looking for an intersection. See [line 307 of models.py](http://code.google.com/p/django-tagging/source/browse/trunk/tagging/models.py#307)
```
def get_intersection_by_model(self, queryset_or_model, tags):
    """
    Create a ``QuerySet`` containing instances of the specified
    model associated with *all* of the given list of tags.
    """
    tags = get_tag_list(tags)
    tag_count = len(tags)
    queryset, model = get_queryset_and_model(queryset_or_model)
    if not tag_count:
        return model._default_manager.none()
    model_table = qn(model._meta.db_table)
    # This query selects the ids of all objects which have all the
    # given tags.
    query = """
    SELECT %(model_pk)s
    FROM %(model)s, %(tagged_item)s
    WHERE %(tagged_item)s.content_type_id = %(content_type_id)s
      AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s)
      AND %(model_pk)s = %(tagged_item)s.object_id
    GROUP BY %(model_pk)s
    HAVING COUNT(%(model_pk)s) = %(tag_count)s""" % {
        'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)),
        'model': model_table,
        'tagged_item': qn(self.model._meta.db_table),
        'content_type_id': ContentType.objects.get_for_model(model).pk,
        'tag_id_placeholders': ','.join(['%s'] * tag_count),
        'tag_count': tag_count,
    }
    cursor = connection.cursor()
    cursor.execute(query, [tag.pk for tag in tags])
    object_ids = [row[0] for row in cursor.fetchall()]
    if len(object_ids) > 0:
        return queryset.filter(pk__in=object_ids)
    else:
        return model._default_manager.none()
``` |
Union and Intersect in Django | 108,193 | 30 | 2008-09-20T13:46:52Z | 108,500 | 15 | 2008-09-20T15:33:55Z | [
"python",
"django",
"django-models",
"django-views",
"tagging"
] | ```
class Tag(models.Model):
    name = models.CharField(maxlength=100)

class Blog(models.Model):
    name = models.CharField(maxlength=100)
    tags = models.ManyToManyField(Tag)
```
Simple models just to ask my question.
I wonder how I can query blogs using tags in a few different ways.
* Blog entries that are tagged with "tag1" or "tag2":
`Blog.objects.filter(tags_in=[1,2]).distinct()`
* Blog objects that are tagged with "tag1" and "tag2" : **?**
* Blog objects that are tagged with exactly "tag1" and "tag2" and nothing else : **??**
---
Tag and Blog is just used for an example. | I've tested these out with Django 1.0:
The "or" queries:
```
Blog.objects.filter(tags__name__in=['tag1', 'tag2']).distinct()
```
or you could use the Q class:
```
Blog.objects.filter(Q(tags__name='tag1') | Q(tags__name='tag2')).distinct()
```
The "and" query:
```
Blog.objects.filter(tags__name='tag1').filter(tags__name='tag2')
```
I'm not sure about the third one, you'll probably need to drop to SQL to do it. |
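For the third case ("exactly these tags and nothing else"), one way to stay inside the ORM is to combine the "and" query with a count annotation so blogs with extra tags are excluded. This is a sketch on my part, untested against these models, and it needs a Django version with aggregation support:

```
from django.db.models import Count

(Blog.objects
     .filter(tags__name='tag1')
     .filter(tags__name='tag2')
     .annotate(tag_count=Count('tags', distinct=True))
     .filter(tag_count=2))
```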
Union and Intersect in Django | 108,193 | 30 | 2008-09-20T13:46:52Z | 110,437 | 9 | 2008-09-21T07:03:07Z | [
"python",
"django",
"django-models",
"django-views",
"tagging"
] | ```
class Tag(models.Model):
    name = models.CharField(maxlength=100)

class Blog(models.Model):
    name = models.CharField(maxlength=100)
    tags = models.ManyToManyField(Tag)
```
Simple models just to ask my question.
I wonder how I can query blogs using tags in a few different ways.
* Blog entries that are tagged with "tag1" or "tag2":
`Blog.objects.filter(tags_in=[1,2]).distinct()`
* Blog objects that are tagged with "tag1" and "tag2" : **?**
* Blog objects that are tagged with exactly "tag1" and "tag2" and nothing else : **??**
---
Tag and Blog is just used for an example. | Please don't reinvent the wheel and use [django-tagging application](http://code.google.com/p/django-tagging/) which was made exactly for your use case. It can do all queries you describe, and much more.
If you need to add custom fields to your Tag model, you can also take a look at [my branch of django-tagging](http://www.bitbucket.org/zuber/django-newtagging). |
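For completeness, the two helpers quoted in the earlier answer are exposed through django-tagging's `TaggedItem` manager, so typical usage looks roughly like this (a sketch; the `Blog` import path is hypothetical):

```
from tagging.models import TaggedItem
from myapp.models import Blog  # hypothetical import path

# Blogs tagged with any of the given tags (union / "or")
TaggedItem.objects.get_union_by_model(Blog, ['tag1', 'tag2'])

# Blogs tagged with all of the given tags (intersection / "and")
TaggedItem.objects.get_intersection_by_model(Blog, ['tag1', 'tag2'])
```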
Delete all data for a kind in Google App Engine | 108,822 | 40 | 2008-09-20T17:34:24Z | 108,939 | 9 | 2008-09-20T18:34:42Z | [
"python",
"google-app-engine"
] | I would like to wipe out all data for a specific kind in Google App Engine. What is the
best way to do this?
I wrote a delete script (hack), but since there is so much data, it
times out after a few hundred records. | Presumably your hack was something like this:
```
# Deleting all messages older than "earliest_date"
q = db.GqlQuery("SELECT * FROM Message WHERE create_date < :1", earliest_date)
results = q.fetch(1000)

while results:
    db.delete(results)
    results = q.fetch(1000, len(results))
```
As you say, if there's sufficient data, you're going to hit the request timeout before it gets through all the records. You'd have to re-invoke this request multiple times from outside to ensure all the data was erased; easy enough to do, but hardly ideal.
The admin console doesn't seem to offer any help, as (from my own experience with it), it seems to only allow entities of a given type to be listed and then deleted on a page-by-page basis.
When testing, I've had to purge my database on startup to get rid of existing data.
I would infer from this that Google operates on the principle that disk is cheap, and so data is typically orphaned (indexes to redundant data replaced), rather than deleted. Given there's a fixed amount of data available to each app at the moment (0.5 GB), that's not much help for non-Google App Engine users. |
Delete all data for a kind in Google App Engine | 108,822 | 40 | 2008-09-20T17:34:24Z | 291,819 | 9 | 2008-11-14T23:58:35Z | [
"python",
"google-app-engine"
] | I would like to wipe out all data for a specific kind in Google App Engine. What is the
best way to do this?
I wrote a delete script (hack), but since there is so much data, it
times out after a few hundred records. | Try using [App Engine Console](http://con.appspot.com/console/help/integration); then you don't even have to deploy any special code
Delete all data for a kind in Google App Engine | 108,822 | 40 | 2008-09-20T17:34:24Z | 323,041 | 7 | 2008-11-27T06:00:44Z | [
"python",
"google-app-engine"
] | I would like to wipe out all data for a specific kind in Google App Engine. What is the
best way to do this?
I wrote a delete script (hack), but since there is so much data, it
times out after a few hundred records. | I've tried db.delete(results) and App Engine Console, and neither of them seems to be working for me. Manually removing entries from Data Viewer (with the limit increased to 200) didn't work either, since I have uploaded more than 10000 entries. I ended up writing this script
```
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
import wsgiref.handlers

from mainPage import YourData #replace this with your data

class CleanTable(webapp.RequestHandler):
    def get(self, param):
        txt = self.request.get('table')
        q = db.GqlQuery("SELECT * FROM "+txt)
        results = q.fetch(10)
        self.response.headers['Content-Type'] = 'text/plain'
        #replace yourapp and YourData with your app info below.
        self.response.out.write("""
<html>
<meta HTTP-EQUIV="REFRESH" content="5; url=http://yourapp.appspot.com/cleanTable?table=YourData">
<body>""")
        try:
            for i in range(10):
                db.delete(results)
                results = q.fetch(10, len(results))
                self.response.out.write("<p>10 removed</p>")
            self.response.out.write("""
</body>
</html>""")
        except Exception, inst:
            self.response.out.write(str(inst))

def main():
    application = webapp.WSGIApplication([
        ('/cleanTable(.*)', CleanTable),
    ])
    wsgiref.handlers.CGIHandler().run(application)
```
The trick was to include a redirect in the HTML instead of using self.redirect. I'm ready to wait overnight to get rid of all the data in my table. Hopefully, the GAE team will make it easier to drop tables in the future. |
Delete all data for a kind in Google App Engine | 108,822 | 40 | 2008-09-20T17:34:24Z | 1,023,729 | 27 | 2009-06-21T11:41:24Z | [
"python",
"google-app-engine"
] | I would like to wipe out all data for a specific kind in Google App Engine. What is the
best way to do this?
I wrote a delete script (hack), but since there is so much data, it
times out after a few hundred records. | I am currently deleting the entities by their key, and it seems to be faster.
```
import time

from google.appengine.ext import db
from google.appengine.ext import webapp

class bulkdelete(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        try:
            while True:
                q = db.GqlQuery("SELECT __key__ FROM MyModel")
                assert q.count()
                db.delete(q.fetch(200))
                time.sleep(0.5)
        except Exception, e:
            self.response.out.write(repr(e)+'\n')
```
from the terminal, I run curl -N http://... |
Delete all data for a kind in Google App Engine | 108,822 | 40 | 2008-09-20T17:34:24Z | 1,882,697 | 10 | 2009-12-10T17:41:07Z | [
"python",
"google-app-engine"
] | I would like to wipe out all data for a specific kind in Google App Engine. What is the
best way to do this?
I wrote a delete script (hack), but since there is so much data, it
times out after a few hundred records. | If I were a paranoid person, I would say Google App Engine (GAE) has not made it easy for us to remove data if we want to. I am going to skip the discussion on index sizes and how they translate 6 GB of data into 35 GB of storage (being billed for). That's another story, but they do have ways to work around that - limit the number of properties to create indexes on (automatically generated indexes), et cetera.
The reason I decided to write this post is that I need to "nuke" all my Kinds in a sandbox. I read about it and finally came up with this code:
```
package com.intillium.formshnuker;

import java.io.IOException;
import java.util.ArrayList;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.labs.taskqueue.QueueFactory;
import com.google.appengine.api.labs.taskqueue.TaskOptions.Method;
import static com.google.appengine.api.labs.taskqueue.TaskOptions.Builder.url;

@SuppressWarnings("serial")
public class FormsnukerServlet extends HttpServlet {

    public void doGet(final HttpServletRequest request, final HttpServletResponse response) throws IOException {
        response.setContentType("text/plain");
        final String kind = request.getParameter("kind");
        final String passcode = request.getParameter("passcode");
        if (kind == null) {
            throw new NullPointerException();
        }
        if (passcode == null) {
            throw new NullPointerException();
        }
        if (!passcode.equals("LONGSECRETCODE")) {
            response.getWriter().println("BAD PASSCODE!");
            return;
        }
        System.err.println("*** deleting entities from " + kind);
        final long start = System.currentTimeMillis();
        int deleted_count = 0;
        boolean is_finished = false;
        final DatastoreService dss = DatastoreServiceFactory.getDatastoreService();
        while (System.currentTimeMillis() - start < 16384) {
            final Query query = new Query(kind);
            query.setKeysOnly();
            final ArrayList<Key> keys = new ArrayList<Key>();
            for (final Entity entity: dss.prepare(query).asIterable(FetchOptions.Builder.withLimit(128))) {
                keys.add(entity.getKey());
            }
            keys.trimToSize();
            if (keys.size() == 0) {
                is_finished = true;
                break;
            }
            while (System.currentTimeMillis() - start < 16384) {
                try {
                    dss.delete(keys);
                    deleted_count += keys.size();
                    break;
                } catch (Throwable ignore) {
                    continue;
                }
            }
        }
        System.err.println("*** deleted " + deleted_count + " entities from " + kind);
        if (is_finished) {
            System.err.println("*** deletion job for " + kind + " is completed.");
        } else {
            final int taskcount;
            final String tcs = request.getParameter("taskcount");
            if (tcs == null) {
                taskcount = 0;
            } else {
                taskcount = Integer.parseInt(tcs) + 1;
            }
            QueueFactory.getDefaultQueue().add(
                url("/formsnuker?kind=" + kind + "&passcode=LONGSECRETCODE&taskcount=" + taskcount).method(Method.GET));
            System.err.println("*** deletion task # " + taskcount + " for " + kind + " is queued.");
        }
        response.getWriter().println("OK");
    }
}
```
I have over 6 million records. That's a lot. I have no idea what the cost will be to delete the records (maybe more economical not to delete them). Another alternative would be to request a deletion for the entire application (sandbox). But that's not realistic in most cases.
I decided to go with smaller groups of records (an easy query). I know I could go for 500 entities, but then I started receiving very high failure rates from the delete function.
***My request from GAE team: please add a feature to delete all entities of a kind in a single transaction.*** |
Delete all data for a kind in Google App Engine | 108,822 | 40 | 2008-09-20T17:34:24Z | 7,464,545 | 20 | 2011-09-18T21:17:52Z | [
"python",
"google-app-engine"
] | I would like to wipe out all data for a specific kind in Google App Engine. What is the
best way to do this?
I wrote a delete script (hack), but since there is so much data, it
times out after a few hundred records. | You can now use the Datastore Admin for that: <https://developers.google.com/appengine/docs/adminconsole/datastoreadmin#Deleting_Entities_in_Bulk>
Python Music Library? | 108,848 | 36 | 2008-09-20T17:42:53Z | 108,936 | 7 | 2008-09-20T18:33:16Z | [
"python",
"audio",
"music"
] | I'm looking at writing a little drum machine in Python for fun. I've googled some and found the python pages on [music](http://wiki.python.org/moin/PythonInMusic) and [basic audio](http://wiki.python.org/moin/Audio/) as well as a StackOverflow question on [generating audio files](http://stackoverflow.com/questions/45385/good-python-library-for-generating-audio-files), but ***what I'm looking for is a decent library for music creation***. Has anyone on here tried to do something like this before? If so, what was your solution? What, either of the ones I've found, or something I haven't found, would be a decent library for audio manipulation?
Minimally, I'd like to be able to do something similar to [Audacity's](http://audacity.sourceforge.net/) scope within python, but if anyone knows of a library that can do more... I'm all ears. | I had to do this years ago. I used pymedia. I am not sure if it is still around; anyway, here is some test code I wrote when I was playing with it. It is about 3 years old, though.
**Edit:** The sample code plays an MP3 file
```
import pymedia
import time
demuxer = pymedia.muxer.Demuxer('mp3') #this thing decodes the multipart file i call it a demucker
f = open(r"path to \song.mp3", 'rb')
spot = f.read()
frames = demuxer.parse(spot)
print 'read it has %i frames' % len(frames)
decoder = pymedia.audio.acodec.Decoder(demuxer.streams[0]) #this thing does the actual decoding
frame = decoder.decode(spot)
print dir(frame)
#sys.exit(1)
sound = pymedia.audio.sound
print frame.bitrate, frame.sample_rate
song = sound.Output( frame.sample_rate, frame.channels, 16 ) #this thing handles playing the song
while len(spot) > 0:
    try:
        if frame: song.play(frame.data)
        spot = f.read(512)
        frame = decoder.decode(spot)
    except:
        pass

while song.isPlaying(): time.sleep(.05)
print 'well done'
``` |
Python Music Library? | 108,848 | 36 | 2008-09-20T17:42:53Z | 109,147 | 13 | 2008-09-20T19:51:39Z | [
"python",
"audio",
"music"
] | I'm looking at writing a little drum machine in Python for fun. I've googled some and found the python pages on [music](http://wiki.python.org/moin/PythonInMusic) and [basic audio](http://wiki.python.org/moin/Audio/) as well as a StackOverflow question on [generating audio files](http://stackoverflow.com/questions/45385/good-python-library-for-generating-audio-files), but ***what I'm looking for is a decent library for music creation***. Has anyone on here tried to do something like this before? If so, what was your solution? What, either of the ones I've found, or something I haven't found, would be a decent library for audio manipulation?
Minimally, I'd like to be able to do something similar to [Audacity's](http://audacity.sourceforge.net/) scope within python, but if anyone knows of a library that can do more... I'm all ears. | Take a close look at [cSounds](http://www.csounds.com/). There are Python bindings that allow you to do pretty flexible digital synthesis. There are some pretty complete packages available, too.
See <http://www.csounds.com/node/188> for a package.
See <http://www.csounds.com/journal/issue6/pythonOpcodes.html> for information on Python scripting within cSounds. |
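Before committing to a full synthesis library, the standard library alone can already generate simple percussive or tonal samples; a minimal sketch (my addition, not from the answers above) writing a short beep to a WAV file:

```
import math
import struct
import wave

# Minimal stdlib-only sketch: write one 0.1 second 440 Hz beep to a WAV file.
RATE = 44100
FRAMES = RATE // 10  # 0.1 second of audio

with wave.open("beep.wav", "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(RATE)
    for i in range(FRAMES):
        sample = int(32767 * math.sin(2 * math.pi * 440 * i / RATE))
        w.writeframes(struct.pack("<h", sample))
```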
Is there something like 'autotest' for Python unittests? | 108,892 | 39 | 2008-09-20T18:07:40Z | 482,668 | 16 | 2009-01-27T08:42:41Z | [
"python",
"testing"
] | Basically, growl notifications (or other callbacks) when tests break or pass. **Does anything like this exist?**
If not, it should be pretty easy to write.. Easiest way would be to..
1. run `python-autotest myfile1.py myfile2.py etc.py`
2. Check if files-to-be-monitored have been modified (possibly just if they've been saved).
3. Run any tests in those files.
4. If a test fails, but in the previous run it passed, generate a growl alert. Same with tests that fail then pass.
5. Wait, and repeat steps 2-5.
The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save.. but with slower tests, this might take longer than the time between saves, and/or could use a lot of CPU power etc..
The best way to do it would be to actually see what bits of code have changed, if function abc() has changed, only run tests that interact with this.. While this would be great, I think it'd be extremely complex to implement?
To summarise:
* Is there anything like the Ruby tool `autotest` (part of the [ZenTest package](http://www.zenspider.com/ZSS/Products/ZenTest/)), but for Python code?
* How do you check which functions have changed between two revisions of a script?
* Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback) | [autonose](http://github.com/gfxmonk/autonose/tree/master) created by [gfxmonk](http://gfxmonk.net/):
> Autonose is an autotest-like tool for python, using the excellent nosetest library.
>
> autotest tracks filesystem changes and automatically re-runs any changed tests or dependencies whenever a file is added, removed or updated. A file counts as changed if it has itself been modified, or if any file it `import`s has changed.
>
> ...
>
> Autonose currently has a native GUI for OSX and GTK. If neither of those are available to you, you can instead run the console version (with the --console option). |
Is there something like 'autotest' for Python unittests? | 108,892 | 39 | 2008-09-20T18:07:40Z | 9,461,979 | 24 | 2012-02-27T08:28:14Z | [
"python",
"testing"
] | Basically, growl notifications (or other callbacks) when tests break or pass. **Does anything like this exist?**
If not, it should be pretty easy to write.. Easiest way would be to..
1. run `python-autotest myfile1.py myfile2.py etc.py`
2. Check if files-to-be-monitored have been modified (possibly just if they've been saved).
3. Run any tests in those files.
4. If a test fails, but in the previous run it passed, generate a growl alert. Same with tests that fail then pass.
5. Wait, and repeat steps 2-5.
The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save.. but with slower tests, this might take longer than the time between saves, and/or could use a lot of CPU power etc..
The best way to do it would be to actually see what bits of code have changed, if function abc() has changed, only run tests that interact with this.. While this would be great, I think it'd be extremely complex to implement?
To summarise:
* Is there anything like the Ruby tool `autotest` (part of the [ZenTest package](http://www.zenspider.com/ZSS/Products/ZenTest/)), but for Python code?
* How do you check which functions have changed between two revisions of a script?
* Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback) | I found [autonose](https://github.com/gfxmonk/autonose) to be pretty unreliable but [sniffer](http://pypi.python.org/pypi/sniffer/0.2.3) seems to work very well.
```
$ pip install sniffer
$ cd myproject
```
Then instead of running "nosetests", you run:
```
$ sniffer
```
Or instead of `nosetests --verbose --with-doctest`, you run:
```
$ sniffer -x--verbose -x--with-doctest
```
As described in the [readme](http://pypi.python.org/pypi/sniffer/0.2.3), it's a good idea to install one of the platform-specific filesystem-watching libraries, `pyinotify`, `pywin32` or `MacFSEvents` (all installable via `pip` etc) |