| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 232,791 | 160 | 2008-10-24T07:59:15Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | I feel like I post a link to this presentation every day: David M. Beazley's [Generator Tricks for Systems Programmers](http://www.dabeaz.com/generators/). If you're a Python programmer and you're not extremely familiar with generators, you should read this. It's a very clear explanation of what generators are, how they work, what the yield statement does, and it answers the question "Do you really want to mess around with this obscure language feature?"
SPOILER ALERT. The answer is: Yes. Yes, you do. |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 232,853 | 121 | 2008-10-24T08:44:08Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | There's one extra thing to mention: a function that yields doesn't actually have to terminate. I've written code like this:
```
def fib():
last, cur = 0, 1
while True:
yield cur
last, cur = cur, last + cur
```
Then I can use it in other code like this:
```
for f in fib():
if some_condition: break
coolfuncs(f);
```
It really helps simplify some problems, and makes some things easier to work with. |
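A common way to take finitely many values from a never-ending generator like this is `itertools.islice`. A small sketch (restating the `fib` generator above so the snippet runs standalone, in Python 3):

```python
from itertools import islice

def fib():
    last, cur = 0, 1
    while True:          # never terminates on its own
        yield cur
        last, cur = cur, last + cur

# islice lazily stops after 7 items, so the infinite loop is harmless
assert list(islice(fib(), 7)) == [1, 1, 2, 3, 5, 8, 13]
```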
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 237,028 | 1,095 | 2008-10-25T21:22:30Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | ## Shortcut to [Grokking](http://en.wikipedia.org/wiki/Grok) `yield`
When you see a function with `yield` statements, apply this easy trick to understand what will happen:
1. Insert a line `result = []` at the start of the function.
2. Replace each `yield expr` with `result.append(expr)`.
3. Insert a line `return result` at the bottom of the function.
4. Yay - no more `yield` statements! Read and figure out code.
5. Compare function to original definition.
This trick may give you an idea of the logic behind the function, but what actually happens with `yield` is significantly different from what happens in the list-based approach. In many cases the yield approach will be a lot more memory-efficient, and faster too. In other cases this trick will get you stuck in an infinite loop, even though the original function works just fine. Read on to learn more...
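For instance, applying the three steps to a small made-up generator (`first_n` is a hypothetical name, not from the question's code):

```python
# Original generator function:
def first_n(n):
    i = 0
    while i < n:
        yield i
        i += 1

# The same function after applying the trick:
def first_n_list(n):
    result = []              # step 1: result list at the start
    i = 0
    while i < n:
        result.append(i)     # step 2: yield i -> result.append(i)
        i += 1
    return result            # step 3: return result at the bottom

# Both produce the same values for finite inputs:
assert list(first_n(4)) == first_n_list(4) == [0, 1, 2, 3]
```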
## Don't confuse your Iterables, Iterators and Generators
First, the **iterator protocol** - when you write
```
for x in mylist:
...loop body...
```
Python performs the following two steps:
1. Gets an iterator for `mylist`:
Call `iter(mylist)` -> this returns an object with a `next()` method (or `__next__()` in Python 3).
[This is the step most people forget to tell you about]
2. Uses the iterator to loop over items:
Keep calling the `next()` method on the iterator returned from step 1. The return value from `next()` is assigned to `x` and the loop body is executed. If an exception `StopIteration` is raised from within `next()`, it means there are no more values in the iterator and the loop is exited.
The truth is Python performs the above two steps anytime it wants to *loop over* the contents of an object - so it could be a for loop, but it could also be code like `otherlist.extend(mylist)` (where `otherlist` is a Python list).
Here `mylist` is an *iterable* because it implements the iterator protocol. In a user defined class, you can implement the `__iter__()` method to make instances of your class iterable. This method should return an *iterator*. An iterator is an object with a `next()` method. It is possible to implement both `__iter__()` and `next()` on the same class, and have `__iter__()` return `self`. This will work for simple cases, but not when you want two iterators looping over the same object at the same time.
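A minimal sketch of that two-class arrangement (class names invented for illustration; this uses Python 3's `__next__`):

```python
class Repeater:
    """Iterable: __iter__ hands out a fresh iterator each time,
    so two loops can walk the same object independently."""
    def __init__(self, value, times):
        self.value = value
        self.times = times

    def __iter__(self):
        return RepeaterIterator(self)


class RepeaterIterator:
    """Iterator: holds the position; its __iter__ returns self."""
    def __init__(self, source):
        self.source = source
        self.count = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.count >= self.source.times:
            raise StopIteration
        self.count += 1
        return self.source.value


r = Repeater("hi", 3)
assert list(r) == ["hi", "hi", "hi"]
assert list(r) == ["hi", "hi", "hi"]  # a second pass works: fresh iterator each time
```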
So that's the iterator protocol, many objects implement this protocol:
1. Built-in lists, dictionaries, tuples, sets, files.
2. User defined classes that implement `__iter__()`.
3. Generators.
Note that a `for` loop doesn't know what kind of object it's dealing with - it just follows the iterator protocol, and is happy to get item after item as it calls `next()`. Built-in lists return their items one by one, dictionaries return the *keys* one by one, files return the *lines* one by one, etc. And generators return... well that's where `yield` comes in:
```
def f123():
yield 1
yield 2
yield 3
for item in f123():
print item
```
Instead of `yield` statements, if you had three `return` statements in `f123()` only the first would get executed, and the function would exit. But `f123()` is no ordinary function. When `f123()` is called, it *does not* return any of the values in the yield statements! It returns a generator object. Also, the function does not really exit - it goes into a suspended state. When the `for` loop tries to loop over the generator object, the function resumes from its suspended state at the very next line after the `yield` it previously returned from, executes the next line of code, in this case a `yield` statement, and returns that as the next item. This happens until the function exits, at which point the generator raises `StopIteration`, and the loop exits.
So the generator object is sort of like an adapter - at one end it exhibits the iterator protocol, by exposing `__iter__()` and `next()` methods to keep the `for` loop happy. At the other end however, it runs the function just enough to get the next value out of it, and puts it back in suspended mode.
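You can watch this suspend/resume behavior by driving the generator by hand with `next()` (restating `f123` so the snippet is self-contained; Python 3 spelling):

```python
def f123():
    yield 1
    yield 2
    yield 3

gen = f123()           # returns a generator object; no body code has run yet
assert next(gen) == 1  # runs until the first yield, then suspends
assert next(gen) == 2  # resumes right after the first yield
assert next(gen) == 3
try:
    next(gen)          # the function body ends -> StopIteration
except StopIteration:
    pass
```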
## Why Use Generators?
Usually you can write code that doesn't use generators but implements the same logic. One option is to use the temporary list 'trick' I mentioned before. That will not work in all cases, e.g. if you have infinite loops, or it may make inefficient use of memory when you have a really long list. The other approach is to implement a new iterable class `SomethingIter` that keeps state in instance members and performs the next logical step in its `next()` (or `__next__()` in Python 3) method. Depending on the logic, the code inside the `next()` method may end up looking very complex and be prone to bugs. Here generators provide a clean and easy solution. |
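A sketch of those two alternatives side by side, with invented names (the class keeps state in instance attributes and uses Python 3's `__next__`; the generator keeps the same state implicitly in its suspended frame):

```python
# Class-based iterator: state lives in instance attributes.
class CountdownIter:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

# Generator: the same logic, far less machinery.
def countdown(start):
    while start > 0:
        yield start
        start -= 1

assert list(CountdownIter(3)) == [3, 2, 1]
assert list(countdown(3)) == [3, 2, 1]
```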
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 6,400,990 | 215 | 2011-06-19T06:33:58Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | The `yield` keyword is reduced to two simple facts:
1. If the compiler detects the `yield` keyword *anywhere* inside a function, that function no longer returns via the `return` statement. ***Instead***, it **immediately** returns a **lazy "pending list" object** called a generator.
2. A generator is iterable. What is an *iterable*? It's anything like a `list` or `set` or `range` or dict-view, with a *built-in protocol for visiting each element in a certain order*.
In a nutshell: **a generator is a lazy, incrementally-pending list**, and **`yield` statements allow you to use function notation to program the list values** the generator should incrementally spit out.
```
generator = myYieldingFunction(...)
x = list(generator)

   generator
       v
[x[0], ..., ???]

   generator
       v
[x[0], x[1], ..., ???]

   generator
       v
[x[0], x[1], x[2], ..., ???]

                       StopIteration exception
[x[0], x[1], x[2]]     done

list==[x[0], x[1], x[2]]
```
---
## Example
Let's define a function `makeRange` that's just like Python's `range`. Calling `makeRange(n)` RETURNS A GENERATOR:
```
def makeRange(n):
# return 0,1,2,...,n-1
i = 0
while i < n:
yield i
i += 1
>>> makeRange(5)
<generator object makeRange at 0x19e4aa0>
```
To force the generator to immediately return its pending values, you can pass it into `list()` (just like you could any iterable):
```
>>> list(makeRange(5))
[0, 1, 2, 3, 4]
```
---
## Comparing example to "just returning a list"
The above example can be thought of as merely creating a list which you append to and return:
```
# list-version # # generator-version
def makeRange(n): # def makeRange(n):
"""return [0,1,2,...,n-1]""" #~ """return 0,1,2,...,n-1"""
TO_RETURN = [] #>
i = 0 # i = 0
while i < n: # while i < n:
TO_RETURN += [i] #~ yield i
i += 1 # i += 1
return TO_RETURN #>
>>> makeRange(5)
[0, 1, 2, 3, 4]
```
There is one major difference though; see the last section.
---
## How you might use generators
An iterable is the last part of a list comprehension, and all generators are iterable, so they're often used like so:
```
# _ITERABLE_
>>> [x+10 for x in makeRange(5)]
[10, 11, 12, 13, 14]
```
To get a better feel for generators, you can play around with the `itertools` module (be sure to use `chain.from_iterable` rather than `chain` when warranted). For example, you might even use generators to implement infinitely-long lazy lists like `itertools.count()`. You could implement your own `def enumerate(iterable): zip(count(), iterable)`, or alternatively do so with the `yield` keyword in a while-loop.
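For example, both spellings of a home-grown `enumerate` (hypothetical names; in Python 3 `zip` is lazy, so the first version is also an iterator):

```python
from itertools import count

def my_enumerate(iterable):
    # zip-based version: pair an infinite counter with the items
    return zip(count(), iterable)

def my_enumerate2(iterable):
    # yield-based version in a loop
    i = 0
    for item in iterable:
        yield (i, item)
        i += 1

assert list(my_enumerate("ab")) == [(0, 'a'), (1, 'b')]
assert list(my_enumerate2("ab")) == [(0, 'a'), (1, 'b')]
```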
Please note: generators can actually be used for many more things, such as [implementing coroutines](http://www.dabeaz.com/coroutines/index.html) or non-deterministic programming or other elegant things. However, the "lazy lists" viewpoint I present here is the most common use you will find.
---
## Behind the scenes
This is how the "Python iteration protocol" works. That is, what is going on when you do `list(makeRange(5))`. This is what I described earlier as a "lazy, incremental list".
```
>>> x=iter(range(5))
>>> next(x)
0
>>> next(x)
1
>>> next(x)
2
>>> next(x)
3
>>> next(x)
4
>>> next(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
The built-in function `next()` just calls the object's `.next()` method (`.__next__()` in Python 3), which is part of the "iteration protocol" and is found on all iterators. You can manually use the `next()` function (and other parts of the iteration protocol) to implement fancy things, usually at the expense of readability, so try to avoid doing that...
---
## Minutiae
Normally, most people would not care about the following distinctions and probably want to stop reading here.
In Python-speak, an *iterable* is any object which "understands the concept of a for-loop" like a list `[1,2,3]`, and an *iterator* is a specific instance of the requested for-loop like `[1,2,3].__iter__()`. A *generator* is exactly the same as any iterator, except for the way it was written (with function syntax).
When you request an iterator from a list, it creates a new iterator. However, when you request an iterator from an iterator (which you would rarely do), it just gives you itself.
Thus, if you find yourself failing to do something like this...
```
> x = myRange(5)
> list(x)
[0, 1, 2, 3, 4]
> list(x)
[]
```
... then remember that a generator is an *iterator*; that is, it is one-time-use. If you want to reuse it, you should call `myRange(...)` again. If you need to use the result twice, convert the result to a list and store it in a variable `x = list(myRange(5))`. Those who absolutely need to clone a generator (for example, who are doing terrifyingly hackish metaprogramming) can use [`itertools.tee`](https://docs.python.org/2/library/itertools.html#itertools.tee) if absolutely necessary, since the copyable iterator Python [PEP](http://en.wikipedia.org/wiki/Python_Enhancement_Proposal#Development) standards proposal has been deferred. |
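A minimal sketch of `itertools.tee` doing exactly that (the `my_range` generator is invented for the example; note that the original generator should not be touched again once it has been tee'd):

```python
from itertools import tee

def my_range(n):
    i = 0
    while i < n:
        yield i
        i += 1

g = my_range(3)
a, b = tee(g)   # two independent iterators fed from the one generator
assert list(a) == [0, 1, 2]
assert list(b) == [0, 1, 2]  # b still sees everything a already consumed
```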
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 12,716,515 | 49 | 2012-10-03T20:38:16Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | Here are some [Python examples of how to actually implement generators](https://github.com/dustingetz/sandbox/blob/master/etc/lazy.py) as if Python did not provide syntactic sugar for them (or in a language without native syntax, like [JavaScript](http://en.wikipedia.org/wiki/JavaScript)). Snippets from that link are below.
**As a Python generator:**
```
from itertools import islice
def fib_gen():
a, b = 1, 1
while True:
yield a
a, b = b, a + b
assert [1, 1, 2, 3, 5] == list(islice(fib_gen(), 5))
```
**Using lexical closures instead of generators**
```
def ftake(fnext, last):
return [fnext() for _ in xrange(last)]
def fib_gen2():
#funky scope due to python2.x workaround
#for python 3.x use nonlocal
def _():
_.a, _.b = _.b, _.a + _.b
return _.a
_.a, _.b = 0, 1
return _
assert [1,1,2,3,5] == ftake(fib_gen2(), 5)
```
**Using object closures instead of generators** (because [ClosuresAndObjectsAreEquivalent](http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent))
```
class fib_gen3:
def __init__(self):
self.a, self.b = 1, 1
def __call__(self):
r = self.a
self.a, self.b = self.b, self.a + self.b
return r
assert [1,1,2,3,5] == ftake(fib_gen3(), 5)
``` |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 14,352,675 | 88 | 2013-01-16T06:42:09Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | Yield gives you a generator.
```
def get_odd_numbers(i):
return range(1, i, 2)
def yield_odd_numbers(i):
for x in range(1, i, 2):
yield x
foo = get_odd_numbers(10)
bar = yield_odd_numbers(10)
foo
[1, 3, 5, 7, 9]
bar
<generator object yield_odd_numbers at 0x1029c6f50>
bar.next()
1
bar.next()
3
bar.next()
5
```
As you can see, in the first case `foo` holds the entire list in memory at once. That's no big deal for a list with 5 elements, but what if you want a list of 5 million? Not only is building it a huge memory eater, it also costs a lot of time at the moment the function is called.
In the second case, `bar` just gives you a generator. A generator is an iterator, which means you can use it in a `for` loop and so on, but each value can be accessed only once. The values are also not all stored in memory at the same time; the generator object "remembers" where it was in the loop the last time you called it. This way, if you're using a generator to (say) count to 50 billion, you don't have to count to 50 billion all at once and store the 50 billion numbers to count through. Again, this is a pretty contrived example; you would probably use `itertools` if you really wanted to count to 50 billion. :)
This is the simplest use case of generators. As you said, they can be used to write efficient permutations, using `yield` to push things up through the call stack instead of using some sort of stack variable. Generators can also be used for specialized tree traversal, and all manner of other things. |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 14,404,292 | 95 | 2013-01-18T17:25:17Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | For those who prefer a minimal working example, meditate on this interactive [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29) session:
```
>>> def f():
... yield 1
... yield 2
... yield 3
...
>>> g = f()
>>> for i in g:
... print i
...
1
2
3
>>> for i in g:
... print i
...
>>> # Note that this time nothing was printed
``` |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 14,554,322 | 51 | 2013-01-28T01:37:10Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | I was going to post "read page 19 of Beazley's 'Python: Essential Reference' for a quick description of generators", but so many others have posted good descriptions already.
Also, note that `yield` can be used in coroutines as the dual of their use in generator functions. Although it isn't the same use as your code snippet, `(yield)` can be used as an expression in a function. When a caller sends a value to the method using the `send()` method, the coroutine will execute until the next `(yield)` expression is encountered.
Generators and coroutines are a cool way to set up data-flow type applications. I thought it would be worthwhile knowing about the other use of the `yield` statement in functions. |
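A minimal sketch of such a coroutine (names invented; the first `next()` call "primes" it by running up to the first `(yield)`):

```python
def running_total():
    total = 0
    while True:
        value = (yield total)   # hands out total, then waits for send()
        total += value

acc = running_total()
assert next(acc) == 0      # prime the coroutine: run to the first yield
assert acc.send(10) == 10  # send() resumes it with a value, gets the next yield
assert acc.send(5) == 15
```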
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 15,814,755 | 81 | 2013-04-04T14:56:19Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | There is one type of answer that I don't feel has been given yet, among the many great answers that describe how to use generators. Here is the PL theory answer:
In Python, calling a function that contains a `yield` statement returns a generator. A generator in Python is, conceptually, a function that returns *continuations* (and specifically a type of coroutine, but continuations represent the more general mechanism for understanding what is going on).
Continuations in programming languages theory are a much more fundamental kind of computation, but they are not often used because they are extremely hard to reason about and also very difficult to implement. But the idea of what a continuation is, is straightforward: it is the state of a computation that has not yet finished. In this state are saved the current values of variables and the operations that have yet to be performed, and so on. Then at some point later in the program the continuation can be invoked, such that the program's variables are reset to that state and the operations that were saved are carried out.
Continuations, in this more general form, can be implemented in two ways. In the `call/cc` way, the program's stack is literally saved and then when the continuation is invoked, the stack is restored.
In continuation passing style (CPS), continuations are just normal functions (only in languages where functions are first class) which the programmer explicitly manages and passes around to subroutines. In this style, program state is represented by closures (and the variables that happen to be encoded in them) rather than variables that reside somewhere on the stack. Functions that manage control flow accept continuation as arguments (in some variations of CPS, functions may accept multiple continuations) and manipulate control flow by invoking them by simply calling them and returning afterwards. A very simple example of continuation passing style is as follows:
```
def save_file(filename):
def write_file_continuation():
write_stuff_to_file(filename)
check_if_file_exists_and_user_wants_to_overwrite( write_file_continuation )
```
In this (very simplistic) example, the programmer saves the operation of actually writing the file into a continuation (which can potentially be a very complex operation with many details to write out), and then passes that continuation (i.e, as a first-class closure) to another operator which does some more processing, and then calls it if necessary. (I use this design pattern a lot in actual GUI programming, either because it saves me lines of code or, more importantly, to manage control flow after GUI events trigger)
The rest of this post will, without loss of generality, conceptualize continuations as CPS, because it is a hell of a lot easier to understand and read.
Now let's talk about generators in python. Generators are a specific subtype of continuation. Whereas **continuations are able in general to save the state of a *computation*** (i.e., the program's call stack), **generators are only able to save the state of iteration over an *iterator***. This definition is slightly misleading for certain use cases of generators, though. For instance:
```
def f():
while True:
yield 4
```
This is clearly a reasonable iterable whose behavior is well defined -- each time you iterate over it, it returns 4 (and does so forever). But it probably isn't the prototypical type of iterable that comes to mind when thinking of iterators (i.e., `for x in collection: do_something(x)`). This example illustrates the power of generators: if anything is an iterator, a generator can save the state of its iteration.
To reiterate: continuations can save the state of a program's stack, and generators can save the state of iteration. This means that continuations are a lot more powerful than generators, but also that generators are a lot, lot easier. They are easier for the language designer to implement, and they are easier for the programmer to use (if you have some time to burn, try to read and understand [this page about continuations and call/cc](http://www.madore.org/~david/computers/callcc.html)).
But you could easily implement (and conceptualize) generators as a simple, specific case of continuation passing style:
Whenever `yield` is called, it tells the function to return a continuation. When the function is called again, it starts from wherever it left off. So, in pseudo-pseudocode (i.e., not pseudocode but not code) the generator's `next` method is basically as follows:
```
class Generator():
def __init__(self,iterable,generatorfun):
self.next_continuation = lambda:generatorfun(iterable)
def next(self):
value, next_continuation = self.next_continuation()
self.next_continuation = next_continuation
return value
```
where `yield` keyword is actually syntactic sugar for the real generator function, basically something like:
```
def generatorfun(iterable):
if len(iterable) == 0:
raise StopIteration
else:
return (iterable[0], lambda:generatorfun(iterable[1:]))
```
Remember that this is just pseudocode and the actual implementation of generators in python is more complex. But as an exercise to understand what is going on, try to use continuation passing style to implement generator objects without use of the `yield` keyword. |
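As a runnable version of that exercise (an illustrative sketch only, nothing like how CPython actually implements generators), here is a yield-free generator object in which each step returns a `(value, next_continuation)` pair:

```python
class CPSGenerator:
    """Iterate over a finite iterable using explicit continuations:
    each step returns (value, continuation_producing_the_rest)."""
    def __init__(self, iterable):
        items = list(iterable)
        self._cont = lambda: self._step(items)

    def _step(self, items):
        if not items:
            raise StopIteration
        # The continuation is a closure holding the remaining items.
        return items[0], (lambda: self._step(items[1:]))

    def __iter__(self):
        return self

    def __next__(self):
        value, self._cont = self._cont()
        return value

assert list(CPSGenerator([1, 2, 3])) == [1, 2, 3]
```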
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 17,113,322 | 34 | 2013-06-14T16:36:59Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | Here is a mental image of what `yield` does.
I like to think of a thread as having a stack (even if it's not implemented that way).
When a normal function is called, it puts its local variables on the stack, does some computation, returns and clears the stack. The values of its local variables are never seen again.
With a `yield` function, when it's first called, it similarly adds its local variables to the stack, but when it returns via `yield`, it moves its local variables to a special hideaway instead of clearing them. A possible place to put them would be somewhere in the heap.
Note that it's not *the function* any more, it's a kind of an imprint or ghost of the function that the `for` loop is hanging onto.
When it is called again, it retrieves its local variables from its special hideaway and puts them back on the stack and computes, then hides them again in the same way. |
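In CPython you can actually peek into that "hideaway": a suspended generator keeps its local variables alive in its frame object, reachable via the (implementation-specific) `gi_frame` attribute. A small sketch:

```python
# CPython-specific: a suspended generator's locals live on in gen.gi_frame,
# not on the thread's call stack.
def counter():
    total = 0
    for n in [10, 20, 30]:
        total += n
        yield total

gen = counter()
next(gen)  # run to the first yield; locals are now frozen inside the generator
print(gen.gi_frame.f_locals['total'])  # 10
next(gen)  # resume, run to the second yield
print(gen.gi_frame.f_locals['total'])  # 30
```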
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 18,365,578 | 33 | 2013-08-21T19:01:25Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | From a programming viewpoint, the iterators are implemented as **thunks**
<http://en.wikipedia.org/wiki/Thunk_(functional_programming)>
To implement iterators/generators/etc. as thunks (zero-argument functions used to delay a computation), one uses messages sent to a closure object, which has a dispatcher that answers those "messages".
<http://en.wikipedia.org/wiki/Message_passing>
"*next*" is a message sent to a closure, created by "*iter*" call.
There are lots of ways to implement this computation. I used mutation but it is easy to do it without mutation, by returning the current value and the next yielder.
Here is a demonstration that uses the structure of R6RS, but the semantics are exactly the same as in Python: it is the same model of computation, and only a change in syntax is required to rewrite it in Python.
> ```
> Welcome to Racket v6.5.0.3.
>
> -> (define gen
> (lambda (l)
> (define yield
> (lambda ()
> (if (null? l)
> 'END
> (let ((v (car l)))
> (set! l (cdr l))
> v))))
> (lambda(m)
> (case m
> ('yield (yield))
> ('init (lambda (data)
> (set! l data)
> 'OK))))))
> -> (define stream (gen '(1 2 3)))
> -> (stream 'yield)
> 1
> -> (stream 'yield)
> 2
> -> (stream 'yield)
> 3
> -> (stream 'yield)
> 'END
> -> ((stream 'init) '(a b))
> 'OK
> -> (stream 'yield)
> 'a
> -> (stream 'yield)
> 'b
> -> (stream 'yield)
> 'END
> -> (stream 'yield)
> 'END
> ->
> ``` |
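A rough Python translation of the Racket closure above (the names `gen`, `dispatch`, and `yield_` are my own): a generator-like stream implemented as a closure plus a message dispatcher, with no `yield` involved.

```python
# A stream as a closure: the mutable list `state` is the retained state,
# and dispatch() answers the 'yield' and 'init' messages.
def gen(initial):
    state = list(initial)

    def yield_():
        if not state:
            return 'END'
        return state.pop(0)  # take the head, keep the tail (mutation)

    def dispatch(message):
        if message == 'yield':
            return yield_()
        if message == 'init':
            def init(data):
                state[:] = data  # reset the retained state in place
                return 'OK'
            return init

    return dispatch

stream = gen([1, 2, 3])
print(stream('yield'))  # 1
print(stream('yield'))  # 2
print(stream('yield'))  # 3
print(stream('yield'))  # END
print(stream('init')([10, 20]))  # OK
print(stream('yield'))  # 10
```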
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 20,704,301 | 41 | 2013-12-20T13:07:18Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | Here is a simple example:
```
def isPrimeNumber(n):
    print("isPrimeNumber({}) call".format(n))
    if n == 1:
        return False
    for x in range(2, n):
        if n % x == 0:
            return False
    return True

def primes(n=1):
    while True:
        print("loop step ---------------- {}".format(n))
        if isPrimeNumber(n):
            yield n
        n += 1

for n in primes():
    if n > 10:
        break
    print("writing result {}".format(n))
```
output :
```
loop step ---------------- 1
isPrimeNumber(1) call
loop step ---------------- 2
isPrimeNumber(2) call
loop step ---------------- 3
isPrimeNumber(3) call
writing result 3
loop step ---------------- 4
isPrimeNumber(4) call
loop step ---------------- 5
isPrimeNumber(5) call
writing result 5
loop step ---------------- 6
isPrimeNumber(6) call
loop step ---------------- 7
isPrimeNumber(7) call
writing result 7
loop step ---------------- 8
isPrimeNumber(8) call
loop step ---------------- 9
isPrimeNumber(9) call
loop step ---------------- 10
isPrimeNumber(10) call
loop step ---------------- 11
isPrimeNumber(11) call
```
I am not a Python developer, but it looks to me like `yield` holds the position in the program flow, and the next loop starts from the `yield` position. It seems to wait at that position, and just before waiting it returns a value to the caller; the next time, it continues working from there.
Seems to me an interesting and nice ability :D |
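That pause-and-resume behaviour can also be driven by hand with the built-in `next()`, which is essentially what the `for` loop does under the hood. A self-contained sketch (a simplified prime generator, not the original code):

```python
def primes(n=1):
    # Trial-division prime generator: pauses at each yield, resumes on next().
    while True:
        if n > 1 and all(n % x for x in range(2, n)):
            yield n  # execution freezes here until the next next() call
        n += 1

gen = primes()
print([next(gen) for _ in range(4)])  # [2, 3, 5, 7]
```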
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 21,541,902 | 64 | 2014-02-04T02:27:35Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | While a lot of answers show why you'd use a `yield` to create a generator, there are more uses for `yield`. It's quite easy to make a coroutine, which enables the passing of information between two blocks of code. I won't repeat any of the fine examples that have already been given about using `yield` to create a generator.
To help understand what a `yield` does in the following code, you can use your finger to trace the cycle through any code that has a `yield`. Every time your finger hits the `yield`, you have to wait for a `next` or a `send` to be entered. When a `next` is called, you trace through the code until you hit the `yield`... the code on the right of the `yield` is evaluated and returned to the caller... then you wait. When `next` is called again, you perform another loop through the code. However, you'll note that in a coroutine, `yield` can also be used with a `send`... which will send a value from the caller *into* the yielding function. If a `send` is given, then `yield` receives the value sent, and spits it out the left-hand side... then the trace through the code progresses until you hit the `yield` again (returning the value at the end, as if `next` was called).
For example:
```
>>> def coroutine():
... i = -1
... while True:
... i += 1
... val = (yield i)
... print("Received %s" % val)
...
>>> sequence = coroutine()
>>> next(sequence)
0
>>> next(sequence)
Received None
1
>>> sequence.send('hello')
Received hello
2
>>> sequence.close()
``` |
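In the same spirit, here is another small coroutine sketch (a hypothetical example, not from the original post): a running-average accumulator driven entirely by `send`:

```python
def running_average():
    # Coroutine: receives numbers via send(), yields the running mean.
    total, count = 0.0, 0
    average = None
    while True:
        value = yield average   # hand out the current mean, wait for input
        total += value
        count += 1
        average = total / count

avg = running_average()
next(avg)            # prime the coroutine; it runs to the first yield
print(avg.send(10))  # 10.0
print(avg.send(20))  # 15.0
print(avg.send(30))  # 20.0
```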
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 24,944,096 | 49 | 2014-07-24T21:15:29Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | There is another `yield` use and meaning (since python 3.3):
```
yield from <expr>
```
<http://www.python.org/dev/peps/pep-0380/>
> A syntax is proposed for a generator to delegate part of its operations to another generator. This allows a section of code containing 'yield' to be factored out and placed in another generator. Additionally, the subgenerator is allowed to return with a value, and the value is made available to the delegating generator.
>
> The new syntax also opens up some opportunities for optimisation when one generator re-yields values produced by another.
moreover [this](https://www.python.org/dev/peps/pep-0492/) will introduce (since python 3.5):
```
async def new_coroutine(data):
...
await blocking_action()
```
to avoid coroutines being confused with regular generators (today `yield` is used in both).
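A minimal sketch of the delegation described in PEP 380: `yield from` transparently re-yields the subgenerator's values and captures its return value (this requires Python 3.3+):

```python
def inner():
    yield 1
    yield 2
    return 'inner done'  # becomes StopIteration.value, captured by yield from

def outer():
    result = yield from inner()  # re-yields 1 and 2, then receives the return value
    yield result

print(list(outer()))  # [1, 2, 'inner done']
```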
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 30,341,713 | 24 | 2015-05-20T06:19:32Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | `yield` is like a return element for a function. The difference is that the `yield` element turns a function into a generator. A generator behaves just like a function until something is 'yielded'. The generator then pauses until it is called again, and continues from exactly the point where it stopped. You can get a sequence of all the 'yielded' values at once by calling `list(generator())`. |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 31,042,491 | 113 | 2015-06-25T06:11:11Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | > **What does the `yield` keyword do in Python?**
# Answer Outline/Summary
* A function with [**`yield`**](https://docs.python.org/reference/expressions.html#yieldexpr), when called, **returns a [Generator](https://docs.python.org/2/tutorial/classes.html#generators).**
* Generators are iterators because they implement the [**iterator protocol**](https://docs.python.org/2/library/stdtypes.html#iterator-types), so you can iterate over them.
* A generator can also be **sent information**, making it conceptually a **coroutine**.
* In Python 3, you can **delegate** from one generator to another in both directions with **`yield from`**.
# Generators:
**`yield`** is only legal inside of a function definition, and **the inclusion of `yield` in a function definition makes it return a generator.**
The idea for generators comes from other languages (see footnote 1) with varying implementations. In Python's Generators, the execution of the code is [frozen](https://docs.python.org/3.5/glossary.html#term-generator-iterator) at the point of the yield. When the generator is called (methods are discussed below) execution resumes and then freezes at the next yield.
`yield` provides an
easy way of [implementing the iterator protocol](https://docs.python.org/2/library/stdtypes.html#generator-types), defined by the following two methods:
`__iter__` and `next` (Python 2) or `__next__` (Python 3). Both of those methods
make an object an iterator that you could type-check with the `Iterator` Abstract Base
Class from the `collections` module.
```
>>> def func():
... yield 'I am'
... yield 'a generator!'
...
>>> type(func) # A function with yield is still a function
<type 'function'>
>>> gen = func()
>>> type(gen) # but it returns a generator
<type 'generator'>
>>> hasattr(gen, '__iter__') # that's an iterable
True
>>> hasattr(gen, 'next') # and with .next (.__next__ in Python 3)
True # implements the iterator protocol.
```
The generator type is a sub-type of iterator:
```
>>> import collections, types
>>> issubclass(types.GeneratorType, collections.Iterator)
True
```
And if necessary, we can type-check like this:
```
>>> isinstance(gen, types.GeneratorType)
True
>>> isinstance(gen, collections.Iterator)
True
```
A feature of an `Iterator` [is that once exhausted](https://docs.python.org/2/glossary.html#term-iterator), you can't reuse or reset it:
```
>>> list(gen)
['I am', 'a generator!']
>>> list(gen)
[]
```
You'll have to make another if you want to use its functionality again (see footnote 2):
```
>>> list(func())
['I am', 'a generator!']
```
One can yield data programmatically, for example:
```
def func(an_iterable):
for item in an_iterable:
yield item
```
The above simple generator is also equivalent to the below - as of Python 3.3 (and not available in Python 2), you can use [`yield from`](https://www.python.org/dev/peps/pep-0380/):
```
def func(an_iterable):
yield from an_iterable
```
However, `yield from` also allows for delegation to subgenerators,
which will be explained in the following section on cooperative delegation with sub-coroutines.
# Coroutines:
`yield` forms an expression that allows data to be sent into the generator (see footnote 3)
Here is an example, take note of the `received` variable, which will point to the data that is sent to the generator:
```
def bank_account(deposited, interest_rate):
while True:
calculated_interest = interest_rate * deposited
received = yield calculated_interest
if received:
deposited += received
>>> my_account = bank_account(1000, .05)
```
First, we must queue up the generator with the builtin function, [`next`](https://docs.python.org/2/library/functions.html#next). It will
call the appropriate `next` or `__next__` method, depending on the version of
Python you are using:
```
>>> first_year_interest = next(my_account)
>>> first_year_interest
50.0
```
And now we can send data into the generator. ([Sending `None` is
the same as calling `next`](https://www.python.org/dev/peps/pep-0342/).) :
```
>>> next_year_interest = my_account.send(first_year_interest + 1000)
>>> next_year_interest
102.5
```
## Cooperative Delegation to Sub-Coroutine with `yield from`
Now, recall that `yield from` is available in Python 3. This allows us to delegate
coroutines to a subcoroutine:
```
def money_manager(expected_rate):
under_management = yield # must receive deposited value
while True:
try:
additional_investment = yield expected_rate * under_management
if additional_investment:
under_management += additional_investment
except GeneratorExit:
'''TODO: write function to send unclaimed funds to state'''
finally:
'''TODO: write function to mail tax info to client'''
def investment_account(deposited, manager):
'''very simple model of an investment account that delegates to a manager'''
next(manager) # must queue up manager
manager.send(deposited)
while True:
try:
yield from manager
except GeneratorExit:
return manager.close()
```
And now we can delegate functionality to a sub-generator and it can be used
by a generator just as above:
```
>>> my_manager = money_manager(.06)
>>> my_account = investment_account(1000, my_manager)
>>> first_year_return = next(my_account)
>>> first_year_return
60.0
>>> next_year_return = my_account.send(first_year_return + 1000)
>>> next_year_return
123.6
```
You can read more about the precise semantics of `yield from` in [PEP 380.](https://www.python.org/dev/peps/pep-0380/#formal-semantics)
## Other Methods: close and throw
The `close` method raises `GeneratorExit` at the point the function
execution was frozen. This will also be called by `__del__` so you
can put any cleanup code where you handle the `GeneratorExit`:
```
>>> my_account.close()
```
You can also throw an exception which can be handled in the generator
or propagated back to the user:
```
>>> import sys
>>> try:
... raise ValueError
... except:
... my_manager.throw(*sys.exc_info())
...
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
File "<stdin>", line 2, in <module>
ValueError
```
# Conclusion
I believe I have covered all aspects of the following question:
> **What does the `yield` keyword do in Python?**
It turns out that `yield` does a lot. I'm sure I could add even more
thorough examples to this. If you want more or have some constructive criticism, let me know by commenting
below.
---
# Appendix:
## Critique of the Top/Accepted Answer\*\*
* It is confused on what makes an **iterable**, just using a list as an example. See my references above, but in summary: an iterable has an `__iter__` method returning an **iterator**. An **iterator** provides a `.next` (Python 2) or `.__next__` (Python 3) method, which is implicitly called by `for` loops until it raises `StopIteration`, and once it does, it will continue to do so.
* It then uses a generator expression to describe what a generator is. Since a generator is simply a convenient way to create an **iterator**, it only confuses the matter, and we still have not yet gotten to the `yield` part.
* In **Controlling a generator exhaustion** he calls the `.next` method, when instead he should use the builtin function, `next`. It would be an appropriate layer of indirection, because his code does not work in Python 3.
* Itertools? This was not relevant to what `yield` does at all.
* No discussion of the methods that `yield` provides along with the new functionality `yield from` in Python 3. **The top/accepted answer is a very incomplete answer.**
## The `return` statement in a generator
In [Python 2](https://docs.python.org/2/reference/simple_stmts.html#the-return-statement):
> In a generator function, the `return` statement is not allowed to include an `expression_list`. In that context, a bare `return` indicates that the generator is done and will cause `StopIteration` to be raised.
An `expression_list` is basically any number of expressions separated by commas - essentially, in Python 2, you can stop the generator with `return`, but you can't return a value.
In [Python 3](https://docs.python.org/3/reference/simple_stmts.html#the-return-statement):
> In a generator function, the `return` statement indicates that the generator is done and will cause `StopIteration` to be raised. The returned value (if any) is used as an argument to construct `StopIteration` and becomes the `StopIteration.value` attribute.
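The Python 3 behaviour quoted above can be observed directly with a small sketch:

```python
def finishing_gen():
    yield 1
    return 'finished'  # Python 3: carried on the StopIteration that ends the generator

g = finishing_gen()
print(next(g))  # 1
try:
    next(g)
except StopIteration as stop:
    print(stop.value)  # 'finished'
```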
## Footnotes
1. The languages CLU, Sather, and Icon were referenced in the proposal
to introduce the concept of generators to Python. The general idea is
that a function can maintain internal state and yield intermediate
data points on demand by the user. This promised to be [superior in performance
to other approaches, including Python threading](https://www.python.org/dev/peps/pep-0255/), which isn't even available on some systems.
2. This means, for example, that `xrange` objects (`range` in Python 3) aren't `Iterator`s, even though they are iterable, because they can be reused. Like lists, their `__iter__` methods return iterator objects.
3. `yield` was originally introduced as a statement, meaning that it
could only appear at the beginning of a line in a code block.
Now `yield` creates a yield expression.
<https://docs.python.org/2/reference/simple_stmts.html#grammar-token-yield_stmt>
This change was [proposed](https://www.python.org/dev/peps/pep-0342/) to allow a user to send data into the generator just as
one might receive it. To send data, one must be able to assign it to something, and
for that, a statement just won't work. |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 31,692,481 | 24 | 2015-07-29T06:11:25Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | As every answer suggests, `yield` is used for creating a sequence generator; it generates a sequence dynamically. E.g., while reading a file line by line over a network, you can use a `yield` function as follows:
```
def getNextLines():
while con.isOpen():
yield con.read()
```
You can use it in your code as follows :
```
for line in getNextLines():
doSomeThing(line)
```
***Execution Control Transfer gotcha***
Execution control is transferred from `getNextLines()` to the `for` loop when `yield` is executed. Thus, every time the `for` loop requests the next value, execution resumes inside the generator from the point where it was paused last time.
Thus in short, a function with the following code
```
def simpleYield():
yield "first time"
yield "second time"
yield "third time"
yield "Now some useful value {}".format(12)
for i in simpleYield():
    print(i)
```
will print
```
"first time"
"second time"
"third time"
"Now some useful value 12"
```
I hope this helps you. |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 32,331,953 | 18 | 2015-09-01T12:42:19Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | **Yield is an Object**
A `return` in a function returns a single value.
If you want a **function to return a huge set of values**, use `yield`.
More importantly, `yield` acts as a **barrier**:
> like a barrier in the CUDA language, it will not transfer control until it gets
> completed.
i.e.
It will run the code in your function from the beginning until it hits `yield`. Then, it'll return the first value of the loop.
Then, every other call will run the loop you have written in the function one more time, returning the next value until there is no value to return. |
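That run-to-the-first-`yield` behaviour can be seen with a tiny sketch (a hypothetical example):

```python
def steps():
    print("running up to the first yield")   # executed on the first next()
    yield 1
    print("resumed after the first yield")   # executed on the second next()
    yield 2

g = steps()
print(next(g))  # prints the first message, then 1
print(next(g))  # prints the second message, then 2
```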
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 33,788,856 | 17 | 2015-11-18T19:37:29Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | The `yield` keyword simply collects returning results. Think of `yield` like `return +=` |
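To make the analogy concrete (a hypothetical example): collecting a generator with `list()` accumulates each yielded value, much like repeatedly appending return values would:

```python
def squares(n):
    for i in range(n):
        yield i * i  # hand out one result at a time instead of returning once

print(list(squares(4)))  # [0, 1, 4, 9]
```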
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 35,526,740 | 12 | 2016-02-20T17:41:32Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | ### Official Reference on `yield` : **[PEP 255 -- Simple Generators](https://www.python.org/dev/peps/pep-0255/)**:
Most questions regarding the `yield` statement and the semantics/functionality it introduces are addressed in *PEP 255*. The collective knowledge from all previous answers is amazing, but I'll add an answer that references the official presentation.
So first of, the form of the `yield` statement:
```
yield_stmt: "yield" expression_list
```
consists of the *keyword* **yield** along with an optional `expression_list`.
Syntactically, `yield` can only appear inside a function definition, and its presence alone is responsible for transforming a function into a generator object:
> The yield statement may only be used inside functions. A function
> that contains a `yield` statement is called a [generator](http://stackoverflow.com/questions/1756096/understanding-generators-in-python) function. A generator function is an ordinary function object in all respects, but has the new `CO_GENERATOR` flag set in the code object's `co_flags` member.
So after you define your generator you're left with a generator function that is waiting to be called:
> When a generator function is called, the actual arguments are bound to
> function-local formal argument names in the usual way, *but no code in*
> *the body of the function is executed.*
So parameters are bound in the same way as for any callable, but the body of the generator is not executed. What happens instead is:
> Instead a generator-iterator object is returned; this conforms to the iterator protocol[6], so in particular can be used in for-loops in a natural way.
We get back an object that conforms to the **[*iterator protocol*](https://docs.python.org/2/library/stdtypes.html#iterator-types)**; this means that the `generator` object implements `__iter__` and `__next__`, and as such can be used in `for` loops like any object that supports iteration.
The key difference that `yield` makes is here, specifically:
> Each time the `.next()` method of a generator-iterator is invoked, the
> code in the body of the generator-function is executed until a `yield`
> or `return` statement (see below) is encountered, or until the end of
> the body is reached.
So everything *until* the `yield` is executed, and then execution stops; at that point, what happens is:
> If a `yield` statement is encountered, the state of the function is
> frozen, and the value of expression\_list is returned to `.next()`'s
> caller.
So in the case of a `for` loop such as `for i in gen_func(params): pass`, the value of `i` on each iteration is going to be equal to the `expression_list`, as previously stated.
But "frozen" you may ask, what does that mean? This is further explained as:
> By "frozen" we mean that all local state is retained, including the
> current bindings of local variables, the instruction pointer, and the
> internal evaluation stack: enough information is saved so that the
> next time .next() is invoked, the function can proceed exactly as if
> the yield statement were just another external call.
So state is retained when `yield` is encountered, allowing subsequent calls to `next` to continue smoothly. On each `next` call, the generator executes everything until it finds another `yield` statement. That cycle repeats until no further `yield` is reached (i.e. control flows off the end of the generator) or a `return` is encountered, in which case a `StopIteration` exception is raised, signalling that the generator has been exhausted. |
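That cycle can be sketched with a tiny generator (the function and names below are illustrative, not from the original answer; `next(g)` is the builtin spelling that works in both Python 2.6+ and 3):

```python
def count_up(limit):
    """A generator function: no body code runs until next() is called."""
    n = 0
    while n < limit:
        yield n  # execution freezes here; local state (n) is retained
        n += 1

gen = count_up(2)
print(next(gen))  # 0 -- runs the body up to the first yield
print(next(gen))  # 1 -- resumes right after the yield, loops, yields again
try:
    next(gen)     # control flows off the end of the body
except StopIteration:
    print("exhausted")
```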
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 36,168,155 | 10 | 2016-03-23T01:18:17Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | At a glance, the yield statement is used to define generators, replacing the return of a function to provide a result to its caller without destroying local variables. Unlike a function, which starts with a fresh set of variables on each call, a generator resumes execution where it left off.
**About Python Generators**
Since the yield keyword is only used with generators, it makes sense to recall the concept of generators first.
The idea of generators is to calculate a series of results one-by-one on demand (on the fly). In the simplest case, a generator can be used as a list, where each element is calculated lazily. Let's compare a list and a generator that do the same thing - return powers of two:
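The answer's original side-by-side code is not reproduced here; a minimal sketch of such a comparison (with illustrative names) could be:

```python
# A list: every value is computed and stored up front.
powers_list = [2 ** x for x in range(5)]

# A generator (here via a generator expression): values are computed on demand.
powers_gen = (2 ** x for x in range(5))

for p in powers_list:
    print(p)              # 1 2 4 8 16
for p in powers_gen:
    print(p)              # same values, but none were stored ahead of time

print(len(powers_list))   # 5 -- a collection knows its size
# len(powers_gen) would raise TypeError: a generator has no length
```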
Iterating over the list and the generator looks completely the same. However, although the generator is iterable, it is not a collection and thus has no length. Collections (lists, tuples, sets, etc.) keep all values in memory and we can access them whenever needed. A generator computes its values on the fly and forgets them, so it has no overview of its own result set.
Generators are especially useful for memory-intensive tasks, where there is no need to keep all of the elements of a memory-heavy list accessible at the same time. Calculating a series of values one-by-one can also be useful in situations where the complete result is never needed, yielding intermediate results to the caller until some requirement is satisfied and further processing stops.
**Using the Python "yield" keyword**
A good example is a search task, where typically there is no need to wait for all results to be found. Performing a file-system search, a user would be happier to receive results on the fly rather than wait for a search engine to go through every single file and only afterwards return results. Are there any people who really navigate through all Google search results until the last page?
Since such search functionality cannot be created with a list comprehension, we will define a generator using a function with the `yield` statement. The `yield` instruction should be placed where the generator returns an intermediate result to the caller and sleeps until the next invocation occurs.
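A toy sketch of such a search generator (the data and names are made up for illustration):

```python
def search(needle, lines):
    """Yield matching lines one at a time instead of building a full result list."""
    for line in lines:
        if needle in line:
            yield line  # hand one result to the caller, then sleep until next()

lines = ["spam", "eggs and spam", "eggs"]
for hit in search("spam", lines):
    print(hit)  # results arrive as they are found
    break       # the caller may stop early; the remaining lines are never scanned
```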
So far the most practical aspects of Python generators have been described. For more detailed info and an interesting discussion take a look at the Python Enhancement Proposal 255, which discusses the feature of the language in detail.
Happy Pythoning!
**For more info go to <http://pythoncentral.io/python-generators-and-yield-keyword/>** |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 36,214,653 | 14 | 2016-03-25T05:40:24Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | (My below answer only speaks from the perspective of using Python generator, not the [underlying implementation of generator mechanism](http://stackoverflow.com/questions/8389812/how-are-generators-and-coroutines-implemented-in-cpython), which involves some tricks of stack and heap manipulation.)
When `yield` is used instead of a `return` in a Python function, that function is turned into something special called a `generator function`, and calling it returns an object of `generator` type. **The `yield` keyword is a flag that tells the Python compiler to treat such a function specially.** Normal functions terminate once some value is returned. But with the help of the compiler, a generator function **can be thought of** as resumable: the execution context is restored and execution continues from the last run, until you explicitly `return` (which raises a `StopIteration` exception, also part of the iterator protocol) or reach the end of the function. I found a lot of references about generators, but this [one](https://docs.python.org/dev/howto/functional.html#generators) from the functional programming perspective is the most digestible.
(Now I want to talk about the rationale behind `generator`, and the `iterator` based on my own understanding. I hope this can help you grasp the ***essential motivation*** of iterator and generator. Such concept shows up in other languages as well such as C#.)
As I understand it, when we want to process a bunch of data, we usually store the data somewhere first and then process it one by one. But this ***intuitive*** approach is problematic: if the data volume is huge, it is expensive to store it all beforehand. **So instead of storing the `data` itself directly, why not store some kind of `metadata` indirectly, i.e. `the logic for how the data is computed`?**
There are 2 approaches to wrap such metadata.
1. The OO approach: we wrap the metadata `as a class`. This is the so-called `iterator`, which implements the iterator protocol (i.e. the `__next__()` and `__iter__()` methods). This is also the commonly seen [iterator design pattern](https://en.wikipedia.org/wiki/Iterator_pattern#Python).
2. The functional approach: we wrap the metadata `as a function`. This is the so-called `generator function`. But under the hood, the returned `generator object` still `IS-A` iterator because it also implements the iterator protocol.
Either way, an iterator is created, i.e. some object that can give you the data you want. The OO approach may be a bit complex. Anyway, which one to use is up to you. |
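A sketch of the two approaches side by side (illustrative, not from the original answer):

```python
class Countdown(object):
    """OO approach: the metadata wrapped as a class implementing the iterator protocol."""
    def __init__(self, start):
        self.current = start
    def __iter__(self):
        return self
    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value
    next = __next__  # alias for Python 2's protocol spelling

def countdown(start):
    """Functional approach: the same metadata wrapped as a generator function."""
    while start > 0:
        yield start
        start -= 1

print(list(Countdown(3)))  # [3, 2, 1]
print(list(countdown(3)))  # [3, 2, 1]
```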
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 36,220,775 | 27 | 2016-03-25T13:21:44Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | **TL;DR**
# When you find yourself building a list from scratch...
```
def squares_list(n):
the_list = []
for x in range(n):
y = x * x
the_list.append(y)
return the_list
```
# ...you may want to yield the pieces instead.
```
def squares_the_yield_way(n):
for x in range(n):
y = x * x
yield y
```
This was my first aha-moment with yield.
---
`yield` is a sugary way to say
> build a series of stuff
Same behavior:
```
>>> for square in squares_list(4):
... print(square)
...
0
1
4
9
>>> for square in squares_the_yield_way(4):
... print(square)
...
0
1
4
9
```
Different behavior:
Yield is **single-use**: you can only iterate through once. Conceptually the yield-function returns an ordered container of things. But it's revealing that we call any function with a yield in it a [generator function](http://stackoverflow.com/a/1756342/673991). And the term for what it returns is an [iterator](http://stackoverflow.com/a/9884501/673991).
Yield is **lazy**, it puts off computation until you need it. A function with a yield in it *doesn't actually execute at all* when you call it. The iterator object it returns uses [magic](https://docs.python.org/reference/simple_stmts.html#yield) to maintain the function's internal context. Each time you call `next()` on the iterator (as happens in a for-loop), execution inches forward to the next yield. (Or `return`, which raises `StopIteration` and ends the series.)
Yield is **versatile**. It can do infinite loops:
```
>>> def squares_all_of_them():
... x = 0
... while True:
... yield x * x
... x += 1
...
>>> squares = squares_all_of_them()
>>> for i in range(6):
... print(next(squares))
...
0
1
4
9
16
25
```
---
Brilliant choice of the word `yield` because [both meanings of the verb](https://www.google.com/search?q=yield+meaning) apply:
> **yield** — produce or provide (as in agriculture)
...provide the next data in the series.
> **yield** — give way or relinquish (as in political power)
...relinquish CPU execution until the iterator advances. |
What does the "yield" keyword do? | 231,767 | 5,524 | 2008-10-23T22:21:11Z | 37,964,180 | 8 | 2016-06-22T09:40:15Z | [
"python",
"iterator",
"generator",
"yield",
"coroutine"
] | What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = list(), [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
A list is returned? A single element is returned? Is it called again? When will subsequent calls stop?
---
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/). | Yet another TL;DR
**iterator over a list**: `next()` returns the next element of the list
**generator iterator**: `next()` will compute the next element on the fly
You can see yield/generators as a way to manually drive the **control flow** from outside (advancing the loop one step at a time) by calling `next`, however complex the flow.
NOTE: a generator is **NOT** a normal function. It remembers previous state such as local variables (its stack); see other answers or articles for a detailed explanation. A generator can only be **iterated over once**.
You could do without `yield` but it would not be as nice, so it can be considered 'very nice' language sugar. |
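A sketch of driving that control flow manually from outside, and of the single-use property (illustrative names):

```python
def steps():
    print("step 1")
    yield "a"
    print("step 2")
    yield "b"

g = steps()      # nothing printed yet: no body code has run
print(next(g))   # prints "step 1", then "a"
print(next(g))   # prints "step 2", then "b"
print(list(g))   # [] -- the generator is exhausted and cannot be replayed
```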
Python inheritance - how to disable a function | 231,839 | 21 | 2008-10-23T22:40:55Z | 231,871 | 16 | 2008-10-23T22:52:51Z | [
"python",
"inheritance",
"interface",
"private"
] | In C++ you can disable a function in parent's class by declaring it as private in the child class. How can this be done in Python? I.E. How can I hide parent's function from child's public interface? | There really aren't any true "private" attributes or methods in Python. One thing you can do is simply override the method you don't want in the subclass, and raise an exception:
```
>>> class Foo( object ):
... def foo( self ):
... print 'FOO!'
...
>>> class Bar( Foo ):
... def foo( self ):
... raise AttributeError( "'Bar' object has no attribute 'foo'" )
...
>>> b = Bar()
>>> b.foo()
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "<interactive input>", line 3, in foo
AttributeError: 'Bar' object has no attribute 'foo'
``` |
Python inheritance - how to disable a function | 231,839 | 21 | 2008-10-23T22:40:55Z | 235,657 | 15 | 2008-10-25T00:05:19Z | [
"python",
"inheritance",
"interface",
"private"
] | In C++ you can disable a function in parent's class by declaring it as private in the child class. How can this be done in Python? I.E. How can I hide parent's function from child's public interface? | kurosch's method of solving the problem isn't quite correct, because you can still use `b.foo` without getting an `AttributeError`. If you don't invoke the function, no error occurs. Here are two ways that I can think to do this:
```
import doctest
class Foo(object):
"""
>>> Foo().foo()
foo
"""
def foo(self): print 'foo'
def fu(self): print 'fu'
class Bar(object):
"""
>>> b = Bar()
>>> b.foo()
Traceback (most recent call last):
...
AttributeError
>>> hasattr(b, 'foo')
False
>>> hasattr(b, 'fu')
True
"""
def __init__(self): self._wrapped = Foo()
def __getattr__(self, attr_name):
if attr_name == 'foo': raise AttributeError
return getattr(self._wrapped, attr_name)
class Baz(Foo):
"""
>>> b = Baz()
>>> b.foo() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
AttributeError...
>>> hasattr(b, 'foo')
False
>>> hasattr(b, 'fu')
True
"""
foo = property()
if __name__ == '__main__':
doctest.testmod()
```
Bar uses the "wrap" pattern to restrict access to the wrapped object. [Martelli has a good talk](http://www.aleax.it/gdd_pydp.pdf) dealing with this. Baz uses [the property built-in](http://docs.python.org/library/functions.html?highlight=property#property) to implement the descriptor protocol for the attribute to override. |
Will Django be a good choice for a permissions based web-app? | 232,008 | 12 | 2008-10-23T23:50:15Z | 237,214 | 7 | 2008-10-25T23:49:42Z | [
"python",
"django",
"permissions"
] | I've been exploring the details of Django for about a week now and like what I see. However, I've come upon some negativity in relation to fine-grained control of permissions in the CRUD interface.
What I'm writing is an Intranet client management web-app. The organisation is about 6 tiers, and I need to restrict access to client groups based on tiers. Continually expanding. I have a fairly good idea how I'm going to do this, but am not sure if I'll be able to integrate it well into the pre-built admin interface.
I've done absolutely zero Django development otherwise I'd probably have a better idea on whether this would work or not. I probably won't use Django if the generated admin interface is going to be useless to this project - but like I said, there is a heavy reliance on fine-grained custom permissions.
Will Django let me build custom permissions/rules and integrate it seamlessly into the admin CRUD interface?
Update One: I want to use the admin app to minimise the repetition of generating CRUD interfaces, so yes, I consider it a must-have.
Update Two:
I want to describe the permissions required for this project.
A client can belong to one or many 'stores'. Full-time employees should only be able to edit clients at their store (even if they belong to another store). However, they should not be able to see/edit clients at another store. Casuals should only be able to view clients based on what store they are rostered to (or if the casual is logged in as the store user - more likely).
Management above them need to be able to see all employees for the stores they manage, nothing more.
Senior management should be able to edit ALL employees and grant permissions below themselves.
After reading the Django documentation, it says you can't (automatically) set permissions for a subset of a group, only the entire group. Is it easy enough to mock up your own permissions for this purpose?
This subject has come up a number of times. Try googling on django+acl.
Random samplings ...
There was a Summer of Code project a couple of years ago, but I'm not sure where they got to. See <http://code.djangoproject.com/wiki/GenericAuthorization>
There is a fresh ticket at djangoproject.org that might be interesting:
* <http://code.djangoproject.com/ticket/9444>
There is some interesting code snips on dumpz.org:
* <http://dumpz.org/274/> models.py
* <http://dumpz.org/273/> signals.py
... but there are zero docs.
Good luck! |
How do I restrict foreign keys choices to related objects only in django | 232,435 | 33 | 2008-10-24T03:52:50Z | 232,644 | 12 | 2008-10-24T06:28:47Z | [
"python",
"django",
"django-models"
] | I have a two way foreign relation similar to the following
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True)
class Child(models.Model):
name = models.CharField(max_length=255)
myparent = models.ForeignKey(Parent)
```
How do I restrict the choices for Parent.favoritechild to only children whose parent is itself? I tried
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True, limit_choices_to = {"myparent": "self"})
```
but that causes the admin interface to not list any children. | This isn't how Django works. You would only create the relation going one way.
```
class Parent(models.Model):
name = models.CharField(max_length=255)
class Child(models.Model):
name = models.CharField(max_length=255)
myparent = models.ForeignKey(Parent)
```
And if you were trying to access the children from the parent you would do
`parent_object.child_set.all()`. If you set a related\_name in the myparent field, then that is what you would refer to it as. Ex: `related_name='children'`, then you would do `parent_object.children.all()`
Read the [docs](http://docs.djangoproject.com/en/dev/topics/db/models/#many-to-one-relationships) for more.
How do I restrict foreign keys choices to related objects only in django | 232,435 | 33 | 2008-10-24T03:52:50Z | 252,087 | 20 | 2008-10-30T23:07:01Z | [
"python",
"django",
"django-models"
] | I have a two way foreign relation similar to the following
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True)
class Child(models.Model):
name = models.CharField(max_length=255)
myparent = models.ForeignKey(Parent)
```
How do I restrict the choices for Parent.favoritechild to only children whose parent is itself? I tried
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True, limit_choices_to = {"myparent": "self"})
```
but that causes the admin interface to not list any children. | I just came across [ForeignKey.limit\_choices\_to](http://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey.limit_choices_to) in the Django docs.
Not sure yet how this works, but it might just be the right thing here.
**Update:** `ForeignKey.limit_choices_to` lets you specify either a constant, a callable, or a Q object to restrict the allowable choices for the key. A constant is obviously no use here, since it knows nothing about the objects involved.
Using a callable (a function, class method, or any callable object) seems more promising. The problem remains how to access the necessary information from the HttpRequest object. Using [thread local storage](http://stackoverflow.com/questions/160009/django-model-limitchoicestouser-user) may be a solution.
**2. Update:** Here is what has worked for me:
I created a middleware as described in the link above. It extracts one or more arguments from the GET part of the request, such as "product=1", and stores this information in the thread locals.
Next there is a class method in the model that reads the thread local variable and returns a list of ids to limit the choice of a foreign key field.
```
@classmethod
def _product_list(cls):
"""
return a list containing the one product_id contained in the request URL,
or a query containing all valid product_ids if not id present in URL
used to limit the choice of foreign key object to those related to the current product
"""
id = threadlocals.get_current_product()
if id is not None:
return [id]
else:
return Product.objects.all().values('pk').query
```
It is important to return a query containing all possible ids if none was selected so the normal admin pages work ok.
The foreign key field is then declared as:
```
product = models.ForeignKey(Product, limit_choices_to=dict(id__in=BaseModel._product_list))
```
The catch is that you have to provide the information to restrict the choices via the request. I don't see a way to access "self" here. |
How do I restrict foreign keys choices to related objects only in django | 232,435 | 33 | 2008-10-24T03:52:50Z | 4,653,418 | 8 | 2011-01-11T01:44:54Z | [
"python",
"django",
"django-models"
] | I have a two way foreign relation similar to the following
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True)
class Child(models.Model):
name = models.CharField(max_length=255)
myparent = models.ForeignKey(Parent)
```
How do I restrict the choices for Parent.favoritechild to only children whose parent is itself? I tried
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True, limit_choices_to = {"myparent": "self"})
```
but that causes the admin interface to not list any children. | The new "right" way of doing this, at least since Django 1.1, is by overriding `ModelAdmin.formfield_for_foreignkey(self, db_field, request, **kwargs)`.
See <http://docs.djangoproject.com/en/1.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin.formfield_for_foreignkey>
For those who don't want to follow the link, below is an example function tailored to the models in the question above.
```
class MyModelAdmin(admin.ModelAdmin):
def formfield_for_foreignkey(self, db_field, request, **kwargs):
if db_field.name == "favoritechild":
kwargs["queryset"] = Child.objects.filter(myparent=request.object_id)
        return super(MyModelAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)
```
I'm only not sure how to get the current object that is being edited. I expect it is on `self` somewhere, but I'm not sure. |
How do I restrict foreign keys choices to related objects only in django | 232,435 | 33 | 2008-10-24T03:52:50Z | 19,556,353 | 16 | 2013-10-24T03:25:58Z | [
"python",
"django",
"django-models"
] | I have a two way foreign relation similar to the following
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True)
class Child(models.Model):
name = models.CharField(max_length=255)
myparent = models.ForeignKey(Parent)
```
How do I restrict the choices for Parent.favoritechild to only children whose parent is itself? I tried
```
class Parent(models.Model):
name = models.CharField(max_length=255)
favoritechild = models.ForeignKey("Child", blank=True, null=True, limit_choices_to = {"myparent": "self"})
```
but that causes the admin interface to not list any children. | The 'right' way to do it is to use a custom form. From there, you can access self.instance, which is the current object. Example --
```
from django import forms
from django.contrib import admin
from models import *
class SupplierAdminForm(forms.ModelForm):
class Meta:
model = Supplier
def __init__(self, *args, **kwargs):
super(SupplierAdminForm, self).__init__(*args, **kwargs)
if self.instance:
self.fields['cat'].queryset = Cat.objects.filter(supplier=self.instance)
class SupplierAdmin(admin.ModelAdmin):
form = SupplierAdminForm
``` |
Cannot import SQLite with Python 2.6 | 233,320 | 10 | 2008-10-24T12:33:24Z | 233,336 | 10 | 2008-10-24T12:36:50Z | [
"python",
"sqlite"
] | I'm running Python 2.6 on Unix and when I run the interactive prompt ([SQLite](http://en.wikipedia.org/wiki/SQLite) is supposed to be preinstalled) I get:
```
[root@idev htdocs]# python
Python 2.6 (r26:66714, Oct 23 2008, 16:25:34)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named sqlite
>>>
```
How do I resolve this? | ```
import sqlite3
```
[sqlite3](http://docs.python.org/library/sqlite3.html#module-sqlite3) - DB-API 2.0 interface for SQLite databases.
You are missing the `.so` (shared object) - probably an installation step. In my Linux python installation, `_sqlite3` is at:
```
${somewhere}/lib/python2.6/lib-dynload/_sqlite3.so
``` |
Cannot import SQLite with Python 2.6 | 233,320 | 10 | 2008-10-24T12:33:24Z | 233,865 | 13 | 2008-10-24T14:52:21Z | [
"python",
"sqlite"
] | I'm running Python 2.6 on Unix and when I run the interactive prompt ([SQLite](http://en.wikipedia.org/wiki/SQLite) is supposed to be preinstalled) I get:
```
[root@idev htdocs]# python
Python 2.6 (r26:66714, Oct 23 2008, 16:25:34)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named sqlite
>>>
```
How do I resolve this? | The error:
```
ImportError: No module named _sqlite3
```
means that [SQLite](http://en.wikipedia.org/wiki/SQLite) 3 does not find the associated shared library. On Mac OS X it's \_sqlite3.so and it should be the same on other Unix systems.
To resolve the error you have to locate the \_sqlite3.so library on your computer and then check your PYTHONPATH for this directory location.
To print the Python search path enter the following in the Python shell:
```
import sys
print sys.path
```
If the directory containing your library is missing you can try adding it interactively with
```
sys.path.append('/your/dir/here')
```
and try
```
import sqlite3
```
again. If this works you have to add this directory permanently to your PYTHONPATH environment variable.
PS: If the library is missing you should (re-)install the module. |
Cannot import SQLite with Python 2.6 | 233,320 | 10 | 2008-10-24T12:33:24Z | 939,030 | 10 | 2009-06-02T11:35:17Z | [
"python",
"sqlite"
] | I'm running Python 2.6 on Unix and when I run the interactive prompt ([SQLite](http://en.wikipedia.org/wiki/SQLite) is supposed to be preinstalled) I get:
```
[root@idev htdocs]# python
Python 2.6 (r26:66714, Oct 23 2008, 16:25:34)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named sqlite
>>>
```
How do I resolve this? | Python 2.6 detects where the sqlite3 development headers are installed, and will silently skip building \_sqlite3 if it is not available. If you are building from source, install sqlite3 including development headers. In my case, `sudo yum install sqlite-devel` sorted this out on a CentOS 4.7. Then, rebuild Python from source code. |
Lexical closures in Python | 233,673 | 113 | 2008-10-24T14:08:25Z | 233,800 | 12 | 2008-10-24T14:36:07Z | [
"python",
"closures",
"lazy-evaluation",
"late-binding"
] | While I was investigating a problem I had with lexical closures in Javascript code, I came along this problem in Python:
```
flist = []
for i in xrange(3):
def func(x): return x * i
flist.append(func)
for f in flist:
print f(2)
```
Note that this example mindfully avoids `lambda`. It prints "4 4 4", which is surprising. I'd expect "0 2 4".
This equivalent Perl code does it right:
```
my @flist = ();
foreach my $i (0 .. 2)
{
push(@flist, sub {$i * $_[0]});
}
foreach my $f (@flist)
{
print $f->(2), "\n";
}
```
"0 2 4" is printed.
Can you please explain the difference ?
---
Update:
The problem **is not** with `i` being global. This displays the same behavior:
```
flist = []
def outer():
for i in xrange(3):
def inner(x): return x * i
flist.append(inner)
outer()
#~ print i # commented because it causes an error
for f in flist:
print f(2)
```
As the commented line shows, `i` is unknown at that point. Still, it prints "4 4 4". | look at this:
```
for f in flist:
print f.func_closure
(<cell at 0x00C980B0: int object at 0x009864B4>,)
(<cell at 0x00C980B0: int object at 0x009864B4>,)
(<cell at 0x00C980B0: int object at 0x009864B4>,)
```
It means they all point to the same i variable instance, which will have a value of 2 once the loop is over.
A readable solution:
```
for i in xrange(3):
def ffunc(i):
def func(x): return x * i
return func
flist.append(ffunc(i))
``` |
Lexical closures in Python | 233,673 | 113 | 2008-10-24T14:08:25Z | 233,835 | 123 | 2008-10-24T14:47:08Z | [
"python",
"closures",
"lazy-evaluation",
"late-binding"
] | While I was investigating a problem I had with lexical closures in Javascript code, I came along this problem in Python:
```
flist = []
for i in xrange(3):
def func(x): return x * i
flist.append(func)
for f in flist:
print f(2)
```
Note that this example mindfully avoids `lambda`. It prints "4 4 4", which is surprising. I'd expect "0 2 4".
This equivalent Perl code does it right:
```
my @flist = ();
foreach my $i (0 .. 2)
{
push(@flist, sub {$i * $_[0]});
}
foreach my $f (@flist)
{
print $f->(2), "\n";
}
```
"0 2 4" is printed.
Can you please explain the difference?
---
Update:
The problem **is not** with `i` being global. This displays the same behavior:
```
flist = []
def outer():
for i in xrange(3):
def inner(x): return x * i
flist.append(inner)
outer()
#~ print i # commented because it causes an error
for f in flist:
print f(2)
```
As the commented line shows, `i` is unknown at that point. Still, it prints "4 4 4". | Python is actually behaving as defined. **Three separate functions** are created, but they each have the **closure of the environment they're defined in** - in this case, the global environment (or the outer function's environment if the loop is placed inside another function). This is exactly the problem, though - in this environment, **i is mutated**, and the closures all **refer to the same i**.
Here is the best solution I can come up with - create a function creator and invoke *that* instead. This will force **different environments** for each of the functions created, with a **different i** in each one.
```
flist = []
for i in xrange(3):
def funcC(j):
def func(x): return x * j
return func
flist.append(funcC(i))
for f in flist:
print f(2)
```
This is what happens when you mix side effects and functional programming. |
Lexical closures in Python | 233,673 | 113 | 2008-10-24T14:08:25Z | 235,764 | 133 | 2008-10-25T01:56:42Z | [
"python",
"closures",
"lazy-evaluation",
"late-binding"
] | While I was investigating a problem I had with lexical closures in Javascript code, I came along this problem in Python:
```
flist = []
for i in xrange(3):
def func(x): return x * i
flist.append(func)
for f in flist:
print f(2)
```
Note that this example mindfully avoids `lambda`. It prints "4 4 4", which is surprising. I'd expect "0 2 4".
This equivalent Perl code does it right:
```
my @flist = ();
foreach my $i (0 .. 2)
{
push(@flist, sub {$i * $_[0]});
}
foreach my $f (@flist)
{
print $f->(2), "\n";
}
```
"0 2 4" is printed.
Can you please explain the difference?
---
Update:
The problem **is not** with `i` being global. This displays the same behavior:
```
flist = []
def outer():
for i in xrange(3):
def inner(x): return x * i
flist.append(inner)
outer()
#~ print i # commented because it causes an error
for f in flist:
print f(2)
```
As the commented line shows, `i` is unknown at that point. Still, it prints "4 4 4". | The functions defined in the loop keep accessing the same variable `i` while its value changes. At the end of the loop, all the functions point to the same variable, which is holding the last value in the loop: the effect is what is reported in the example.
In order to evaluate `i` and use its value, a common pattern is to set it as a parameter default: parameter defaults are evaluated when the `def` statement is executed, and thus the value of the loop variable is frozen.
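A tiny illustration of that evaluation timing, separate from the loop (hypothetical names):

```python
i = 1
def f(x, i=i):
    # the default value of i was computed and stored when `def` ran
    return x * i

i = 100  # rebinding i afterwards does not change the stored default
print(f(2))  # 2
```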
The following works as expected:
```
flist = []
for i in xrange(3):
def func(x, i=i): # the *value* of i is copied in func() environment
return x * i
flist.append(func)
for f in flist:
print f(2)
``` |
Lexical closures in Python | 233,673 | 113 | 2008-10-24T14:08:25Z | 236,253 | 7 | 2008-10-25T11:28:04Z | [
"python",
"closures",
"lazy-evaluation",
"late-binding"
] | While I was investigating a problem I had with lexical closures in Javascript code, I came along this problem in Python:
```
flist = []
for i in xrange(3):
def func(x): return x * i
flist.append(func)
for f in flist:
print f(2)
```
Note that this example mindfully avoids `lambda`. It prints "4 4 4", which is surprising. I'd expect "0 2 4".
This equivalent Perl code does it right:
```
my @flist = ();
foreach my $i (0 .. 2)
{
push(@flist, sub {$i * $_[0]});
}
foreach my $f (@flist)
{
print $f->(2), "\n";
}
```
"0 2 4" is printed.
Can you please explain the difference ?
---
Update:
The problem **is not** with `i` being global. This displays the same behavior:
```
flist = []
def outer():
for i in xrange(3):
def inner(x): return x * i
flist.append(inner)
outer()
#~ print i # commented because it causes an error
for f in flist:
print f(2)
```
As the commented line shows, `i` is unknown at that point. Still, it prints "4 4 4". | What is happening is that the variable i is captured, and the functions are returning the value it is bound to at the time it is called. In functional languages this kind of situation never arises, as i wouldn't be rebound. However with python, and also as you've seen with lisp, this is no longer true.
The difference with your scheme example is to do with the semantics of the do loop. Scheme is effectively creating a new i variable each time through the loop, rather than reusing an existing i binding as with the other languages. If you use a different variable created external to the loop and mutate it, you'll see the same behaviour in scheme. Try replacing your loop with:
```
(let ((ii 1)) (
(do ((i 1 (+ 1 i)))
((>= i 4))
(set! flist
(cons (lambda (x) (* ii x)) flist))
(set! ii i))
))
```
Take a look [here](http://lambda-the-ultimate.org/node/2648) for some further discussion of this.
[Edit] Possibly a better way to describe it is to think of the do loop as a macro which performs the following steps:
1. Define a lambda taking a single parameter (i), with a body defined by the body of the loop,
2. An immediate call of that lambda with appropriate values of i as its parameter.
ie. the equivalent to the below python:
```
flist = []
def loop_body(i): # extract body of the for loop to function
def func(x): return x*i
flist.append(func)
map(loop_body, xrange(3)) # for i in xrange(3): body
```
The i is no longer the one from the parent scope but a brand new variable in its own scope (ie. the parameter to the lambda) and so you get the behaviour you observe. Python doesn't have this implicit new scope, so the body of the for loop just shares the i variable. |
Lexical closures in Python | 233,673 | 113 | 2008-10-24T14:08:25Z | 6,805,307 | 23 | 2011-07-24T06:24:40Z | [
"python",
"closures",
"lazy-evaluation",
"late-binding"
] | While I was investigating a problem I had with lexical closures in Javascript code, I came along this problem in Python:
```
flist = []
for i in xrange(3):
def func(x): return x * i
flist.append(func)
for f in flist:
print f(2)
```
Note that this example mindfully avoids `lambda`. It prints "4 4 4", which is surprising. I'd expect "0 2 4".
This equivalent Perl code does it right:
```
my @flist = ();
foreach my $i (0 .. 2)
{
push(@flist, sub {$i * $_[0]});
}
foreach my $f (@flist)
{
print $f->(2), "\n";
}
```
"0 2 4" is printed.
Can you please explain the difference?
---
Update:
The problem **is not** with `i` being global. This displays the same behavior:
```
flist = []
def outer():
for i in xrange(3):
def inner(x): return x * i
flist.append(inner)
outer()
#~ print i # commented because it causes an error
for f in flist:
print f(2)
```
As the commented line shows, `i` is unknown at that point. Still, it prints "4 4 4". | Here's how you do it using the `functools` library (which I'm not sure was available at the time the question was posed).
```
from functools import partial
flist = []
def func(i, x): return x * i
for i in xrange(3):
flist.append(partial(func, i))
for f in flist:
print f(2)
```
Outputs 0 2 4, as expected. |
Splitting strings in python | 234,512 | 6 | 2008-10-24T17:33:41Z | 234,645 | 8 | 2008-10-24T17:57:26Z | [
"python",
"string",
"split",
"parsing",
"tokenize"
] | I have a string which is like this:
this is [bracket test] "and quotes test "
I'm trying to write something in Python to split it up by space while ignoring spaces within square braces and quotes. The result I'm looking for is:
['this','is','bracket test','and quotes test '] | Here's a simplistic solution that works with your test input:
```
import re
re.findall('\[[^\]]*\]|\"[^\"]*\"|\S+',s)
```
This will return any text that matches either
* an open bracket followed by zero or more non-close-bracket characters followed by a close bracket,
* a double-quote followed by zero or more non-quote characters followed by a quote,
* any group of non-whitespace characters
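Trying it on the question's sample string shows one caveat of this quick version: the bracket and quote delimiters are kept inside the matches, so a post-processing strip would be needed to get exactly the requested output:

```python
import re

s = 'this is [bracket test] "and quotes test "'
tokens = re.findall(r'\[[^\]]*\]|"[^"]*"|\S+', s)
print(tokens)
# ['this', 'is', '[bracket test]', '"and quotes test "']
```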
This works with your example, but might fail for many real-world strings you may encounter. For example, you didn't say what you expect with unbalanced brackets or quotes, or how you want single quotes or escape characters to work. For simple cases, though, the above might be good enough. |
Default parameters to actions with Django | 234,695 | 3 | 2008-10-24T18:12:00Z | 234,995 | 8 | 2008-10-24T19:39:04Z | [
"python",
"django",
"django-urls"
Is there a way to have a default parameter passed to an action in the case where the regex didn't match anything, using Django?
```
urlpatterns = patterns('',(r'^test/(?P<name>.*)?$','myview.displayName'))
#myview.py
def displayName(request,name):
# write name to response or something
```
I have tried setting the third parameter in the urlpatterns to a dictionary containing ' and giving the name parameter a default value on the method, none of which worked. The name parameter always seems to be None. I really don't want to code a check for None if I could set a default value.
Clarification: here is an example of what I was changing it to.
```
def displayName(request,name='Steve'):
return HttpResponse(name)
#i also tried
urlpatterns = patterns('',
(r'^test/(?P<name>.*)?$',
'myview.displayName',
dict(name='Test')
)
)
```
when i point my browser at the view it displays the text
'None'
Any ideas? | The problem is that when the pattern is matched against 'test/' the groupdict captured by the regex contains the mapping 'name' => None:
```
>>> url.match("test/").groupdict()
{'name': None}
```
This means that the view is invoked using something I expect is similar to the following:
```
view(request, *groups, **groupdict)
```
which is equivalent to:
```
view(request, name = None)
```
for 'test/', meaning that name is assigned None rather than not assigned.
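The distinction matters because a Python default only kicks in when the argument is absent, not when it is explicitly passed as `None`. A quick sketch with a hypothetical view function:

```python
def display_name(name='Steve'):
    return name

print(display_name())           # 'Steve' - argument omitted, default used
print(display_name(name=None))  # None    - explicitly passed, default ignored
```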
This leaves you with two options. You can:
1. Explicitly check for None in the view code which is kind of hackish.
2. Rewrite the url dispatch rule to make the name capture non-optional and introduce a second rule to capture when no name is provided.
For example:
```
urlpatterns = patterns('',
(r'^test/(?P<name>.+)$','myview.displayName'), # note the '+' instead of the '*'
(r'^test/$','myview.displayName'),
)
```
When taking the second approach, you can simply call the method without the capture pattern, and let python handle the default parameter or you can call a different view which delegates. |
Trailing slashes in Pylons Routes | 235,191 | 7 | 2008-10-24T20:42:08Z | 235,238 | 7 | 2008-10-24T20:58:16Z | [
"python",
"routes",
"pylons"
] | What is the best way to make trailing slashes not matter in the latest version of Routes (1.10)? I currently am using the clearly non-DRY:
```
map.connect('/logs/', controller='logs', action='logs')
map.connect('/logs', controller='logs', action='logs')
```
I think that turning minimization on would do the trick, but am under the impression that it was disabled in the newer versions of Routes for a reason. Unfortunately documentation doesn't seem to have caught up with Routes development, so I can't find any good resources to go to. Any ideas? | There are two possible ways to solve this:
1. [Do it entirely in pylons](http://wiki.pylonshq.com/display/pylonscookbook/Adding+trailing+slash+to+pages+automatically).
2. [Add an htaccess rule to rewrite the trailing slash](http://enarion.net/web/apache/htaccess/trailing-slash/).
Personally I don't like the trailing slash, because if you have a uri like:
<http://example.com/people>
You should be able to get the same data in xml format by going to:
<http://example.com/people.xml> |
Trailing slashes in Pylons Routes | 235,191 | 7 | 2008-10-24T20:42:08Z | 1,441,104 | 16 | 2009-09-17T20:19:09Z | [
"python",
"routes",
"pylons"
] | What is the best way to make trailing slashes not matter in the latest version of Routes (1.10)? I currently am using the clearly non-DRY:
```
map.connect('/logs/', controller='logs', action='logs')
map.connect('/logs', controller='logs', action='logs')
```
I think that turning minimization on would do the trick, but am under the impression that it was disabled in the newer versions of Routes for a reason. Unfortunately documentation doesn't seem to have caught up with Routes development, so I can't find any good resources to go to. Any ideas? | The following snippet added as the very last route worked for me:
```
map.redirect('/*(url)/', '/{url}',
_redirect_code='301 Moved Permanently')
``` |
Environment Variables in Python on Linux | 235,435 | 12 | 2008-10-24T22:13:54Z | 235,475 | 13 | 2008-10-24T22:28:38Z | [
"python",
"gdb",
"environment-variables"
Python's access to environment variables does not accurately reflect the operating system's view of the process's environment.
os.getenv and os.environ do not function as expected in particular cases.
Is there a way to properly get the running process' environment?
---
To demonstrate what I mean, take the two roughly equivalent programs (the first in C, the other in python):
```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char *argv[]){
char *env;
for(;;){
env = getenv("SOME_VARIABLE");
if(env)
puts(env);
sleep(5);
}
}
```
---
```
import os
import time
while True:
env = os.getenv("SOME_VARIABLE")
if env is not None:
print env
time.sleep(5)
```
---
Now, if we run the C program and attach to the running process with gdb and forcibly change the environment under the hood by doing something like this:
```
(gdb) print setenv("SOME_VARIABLE", "my value", 1)
[Switching to Thread -1208600896 (LWP 16163)]
$1 = 0
(gdb) print (char *)getenv("SOME_VARIABLE")
$2 = 0x8293126 "my value"
```
then the aforementioned C program will start spewing out "my value" once every 5 seconds. The aforementioned python program, however, will not.
Is there a way to get the python program to function like the C program in this case?
(Yes, I realize this is a very obscure and potentially damaging action to perform on a running process)
Also, I'm currently using python 2.4, this may have been fixed in a later version of python. | That's a very good question.
It turns out that the `os` module initializes `os.environ` to the value of [`posix`](http://docs.python.org/library/posix.html)`.environ`, which is set on interpreter start up. In other words, the standard library does not appear to provide access to the [getenv](http://www.opengroup.org/onlinepubs/000095399/functions/getenv.html) function.
That is a case where it would probably be safe to use [ctypes](http://docs.python.org/library/ctypes.html) on unix. Since you would be calling an ultra-standard libc function. |
Environment Variables in Python on Linux | 235,435 | 12 | 2008-10-24T22:13:54Z | 242,175 | 10 | 2008-10-28T03:38:23Z | [
"python",
"gdb",
"environment-variables"
Python's access to environment variables does not accurately reflect the operating system's view of the process's environment.
os.getenv and os.environ do not function as expected in particular cases.
Is there a way to properly get the running process' environment?
---
To demonstrate what I mean, take the two roughly equivalent programs (the first in C, the other in python):
```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char *argv[]){
char *env;
for(;;){
env = getenv("SOME_VARIABLE");
if(env)
puts(env);
sleep(5);
}
}
```
---
```
import os
import time
while True:
env = os.getenv("SOME_VARIABLE")
if env is not None:
print env
time.sleep(5)
```
---
Now, if we run the C program and attach to the running process with gdb and forcibly change the environment under the hood by doing something like this:
```
(gdb) print setenv("SOME_VARIABLE", "my value", 1)
[Switching to Thread -1208600896 (LWP 16163)]
$1 = 0
(gdb) print (char *)getenv("SOME_VARIABLE")
$2 = 0x8293126 "my value"
```
then the aforementioned C program will start spewing out "my value" once every 5 seconds. The aforementioned python program, however, will not.
Is there a way to get the python program to function like the C program in this case?
(Yes, I realize this is a very obscure and potentially damaging action to perform on a running process)
Also, I'm currently using python 2.4, this may have been fixed in a later version of python. | You can use `ctypes` to do this pretty simply:
```
>>> from ctypes import CDLL, c_char_p
>>> getenv = CDLL("libc.so.6").getenv
>>> getenv.restype = c_char_p
>>> getenv("HOME")
'/home/glyph'
``` |
How can I use Python for large scale development? | 236,407 | 50 | 2008-10-25T13:30:27Z | 236,421 | 14 | 2008-10-25T13:44:41Z | [
"python",
"development-environment"
] | I would be interested to learn about large scale development in Python and especially in how you maintain a large code base?
* When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you; how do you do it in Python?
* When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to look up?
* How do you handle/prevent typing errors (typos)?
* Are unit tests used as a substitute for static type checking?
As you can guess I almost only worked with statically typed languages (C++/Java), but I would like to try my hands on Python for larger programs. But I had a very bad experience, a long time ago, with the clipper (dBase) language, which was also dynamically typed. | my 0.10 EUR:
i have several python applications in 'production' state. our company uses java, c++ and python. we develop with the eclipse ide (pydev for python)
**unittests are the key solution to the problem.** (also for c++ and java)
the less secure world of "dynamic typing" will make you more careful about your code quality
**BY THE WAY**:
large scale development doesn't mean that you use one single language!
large scale development often uses **a handful of languages specific to the problem**.
so i agree to *the-hammer-problem* :-)
---
PS: [static-typing & python](http://www.xoltar.org/old_site/misc/static_typing_eckel.html) |
How can I use Python for large scale development? | 236,407 | 50 | 2008-10-25T13:30:27Z | 236,445 | 31 | 2008-10-25T14:01:28Z | [
"python",
"development-environment"
] | I would be interested to learn about large scale development in Python and especially in how you maintain a large code base?
* When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you; how do you do it in Python?
* When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to look up?
* How do you handle/prevent typing errors (typos)?
* Are unit tests used as a substitute for static type checking?
As you can guess I almost only worked with statically typed languages (C++/Java), but I would like to try my hands on Python for larger programs. But I had a very bad experience, a long time ago, with the clipper (dBase) language, which was also dynamically typed. | I had some experience with modifying "Frets On Fire", an open source python "Guitar Hero" clone.
as I see it, python is not really suitable for a really large scale project.
I found myself spending a large part of the development time debugging issues related to assignment of incompatible types, things that statically typed languages will reveal effortlessly at compile-time.
also, since types are determined at run-time, trying to understand existing code becomes harder, because you have no idea what the type of the parameter you are currently looking at is.
in addition to that, calling functions using their name string with the `__getattr__` built-in function is generally more common in Python than in other programming languages, thus getting the call graph to a certain function is somewhat hard (although you can call functions with their name in some statically typed languages as well).
I think that Python really shines in small scale software, rapid prototype development, and gluing existing programs together, but I would not use it for large scale software projects, since in those types of programs maintainability becomes the real issue, and in my opinion python is relatively weak there. |
How can I use Python for large scale development? | 236,407 | 50 | 2008-10-25T13:30:27Z | 236,515 | 50 | 2008-10-25T14:46:30Z | [
"python",
"development-environment"
] | I would be interested to learn about large scale development in Python and especially in how you maintain a large code base?
* When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you; how do you do it in Python?
* When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to look up?
* How do you handle/prevent typing errors (typos)?
* Are unit tests used as a substitute for static type checking?
As you can guess I almost only worked with statically typed languages (C++/Java), but I would like to try my hands on Python for larger programs. But I had a very bad experience, a long time ago, with the clipper (dBase) language, which was also dynamically typed. | ## Don't use a screwdriver as a hammer
Python is not a statically typed language, so don't try to use it that way.
When you use a specific tool, you use it for what it has been built. For Python, it means:
* **Duck typing** : no type checking. Only behavior matters. Therefore your code must be designed to use this feature. A good design means generic signatures, no dependencies between components, high abstraction levels. So if you change anything, you won't have to change the rest of the code. Python will not complain either, that's what it has been built for. Types are not an issue.
* **Huge standard library**. You do not need to change all your calls in the program if you use standard features you haven't coded yourself. And Python comes with batteries included. I keep discovering them every day. I had no idea of the number of modules I could use when I started and tried to rewrite existing stuff like everybody. It's OK, you can't get it all right from the beginning.
You don't write Java, C++, Python, PHP, Erlang, whatever, the same way. There are good reasons why there is room for each of so many different languages: they do not do the same things.
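A small sketch of the duck-typing point: the function below never checks types, it only relies on behavior, so unrelated classes work interchangeably (hypothetical names):

```python
class Duck(object):
    def quack(self):
        return "quack"

class Person(object):
    def quack(self):
        return "I can quack too"

def make_it_quack(thing):
    # no isinstance() check: anything with a quack() method is accepted
    return thing.quack()

print(make_it_quack(Duck()))    # quack
print(make_it_quack(Person()))  # I can quack too
```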
## Unit tests are not a substitute
Unit tests must be performed with any language. The most famous unit test library ([JUnit](http://en.wikipedia.org/wiki/JUnit)) is from the Java world!
This has nothing to do with types. You check behaviors, again. You avoid trouble with regression. You ensure your customer you are on tracks.
## Python for large scale projects
> Languages, libraries and frameworks
> don't scale. Architectures do.
If you design a solid architecture, if you are able to make it evolve quickly, then it will scale. Unit tests help, automatic code check as well. But they are just safety nets. And small ones.
Python is especially suitable for large projects because it enforces some good practices and has a lot of usual design patterns built-in. But again, do not use it for what it is not designed for. E.g.: Python is not a technology for CPU-intensive tasks.
In a huge project, you will most likely use several different technologies anyway. As a [SGBD](http://stackoverflow.com/questions/980813/what-is-sgbd) and a templating language, or else. Python is no exception.
You will probably want to use C/C++ for the part of your code you need to be fast. Or Java to fit in a [Tomcat](http://en.wikipedia.org/wiki/Apache_Tomcat) environment. Don't know, don't care. Python can play well with these.
## As a conclusion
My answer may feel a bit rude, but don't get me wrong: this is a very good question.
A lot of people come to Python with old habits. I screwed myself trying to code Java like Python. You can, but will never get the best of it.
If you have played / want to play with Python, it's great! It's a wonderful tool. But just a tool, really. |
How can I use Python for large scale development? | 236,407 | 50 | 2008-10-25T13:30:27Z | 236,537 | 21 | 2008-10-25T15:05:10Z | [
"python",
"development-environment"
] | I would be interested to learn about large scale development in Python and especially in how you maintain a large code base?
* When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you; how do you do it in Python?
* When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to look up?
* How do you handle/prevent typing errors (typos)?
* Are unit tests used as a substitute for static type checking?
As you can guess I almost only worked with statically typed languages (C++/Java), but I would like to try my hands on Python for larger programs. But I had a very bad experience, a long time ago, with the clipper (dBase) language, which was also dynamically typed. | Since nobody pointed out pychecker, pylint and similar tools, I will: pychecker and pylint are tools that can help you find incorrect assumptions (about function signatures, object attributes, etc.) They won't find everything that a compiler might find in a statically typed language -- but they can find problems that such compilers for such languages can't find, too.
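As a sketch of the kind of mistake such checkers flag before the program ever runs (a checker like pylint reports the undefined name below at analysis time; the code is hypothetical):

```python
def greet(name):
    # typo: 'nmae' instead of 'name'; a static checker flags this at
    # analysis time, but plain Python only fails once greet() is called
    return "Hello, %s" % nmae

try:
    greet("world")
except NameError as exc:
    print("caught only at runtime: %s" % exc)
```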
Python (and any dynamically typed language) is fundamentally different in terms of the errors you're likely to cause and how you would detect and fix them. It has definite downsides as well as upsides, but many (including me) would argue that in Python's case, the ease of writing code (and the ease of making it structurally sound) and of modifying code *without* breaking API compatibility (adding new optional arguments, providing different objects that have the same set of methods and attributes) make it suitable just fine for large codebases. |
How can I use Python for large scale development? | 236,407 | 50 | 2008-10-25T13:30:27Z | 236,589 | 11 | 2008-10-25T15:36:01Z | [
"python",
"development-environment"
] | I would be interested to learn about large scale development in Python and especially in how you maintain a large code base?
* When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you; how do you do it in Python?
* When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to look up?
* How do you handle/prevent typing errors (typos)?
* Are unit tests used as a substitute for static type checking?
As you can guess I almost only worked with statically typed languages (C++/Java), but I would like to try my hands on Python for larger programs. But I had a very bad experience, a long time ago, with the clipper (dBase) language, which was also dynamically typed. | Here are some items that have helped me maintain a fairly large system in python.
* Structure your code in layers, i.e. separate biz logic, presentation logic and your persistence layers. Invest a bit of time in defining these layers and make sure everyone on the project is bought in. For large systems, creating a framework that forces you into a certain way of development can be key as well.
* Tests are key: without unit tests you will likely end up with an unmanageable code base several times quicker than with other languages. Keep in mind that unit tests are often not sufficient; make sure to have several integration/acceptance tests you can run quickly after any major change.
* Use the [Fail Fast](http://en.wikipedia.org/wiki/Fail-fast) principle. Add assertions for cases where you feel your code may be vulnerable.
* Have standard logging/error handling that will help you quickly navigate to the issue
* Use an IDE (pyDev works for me) that provides type-ahead and pyLint/Checker integration that helps you detect common typos right away and promotes some coding standards
* Be careful about your imports: never do from x import \* or do relative imports without use of .
* Do refactor: a search/replace tool with regular expressions is often all you need for move-method/move-class style refactorings. |
How do I convert any image to a 4-color paletted image using the Python Imaging Library? | 236,692 | 11 | 2008-10-25T17:00:09Z | 237,193 | 16 | 2008-10-25T23:26:14Z | [
"python",
"image-processing",
"python-imaging-library"
] | I have a device that supports 4-color graphics (much like CGA in the old days).
I wanted to use [PIL](http://www.pythonware.com/products/pil/) to read the image and convert it using my 4-color palette (of red, green, yellow, black), but I can't figure out if it's even possible at all. I found some mailing list archive posts that seem to suggest other people have tried to do so and failed.
A simple python example would be much appreciated!
Bonus points if you add something that then converts the image to a byte string where each byte represents 4 pixels of data (with each two bits representing a color from 0 to 3) | First: your four colour palette (black, green, red, yellow) has *no* blue component. So, you have to accept that your output image will hardly approximate the input image, unless there is no blue component to start with.
Try this code:
```
import Image
def estimate_color(c, bit, c_error):
c_new= c - c_error
if c_new > 127:
c_bit= bit
c_error= 255 - c_new
else:
c_bit= 0
c_error= -c_new
return c_bit, c_error
def image2cga(im):
"Produce a sequence of CGA pixels from image im"
im_width= im.size[0]
for index, (r, g, b) in enumerate(im.getdata()):
if index % im_width == 0: # start of a line
r_error= g_error= 0
r_bit, r_error= estimate_color(r, 1, r_error)
g_bit, g_error= estimate_color(g, 2, g_error)
yield r_bit|g_bit
def cvt2cga(imgfn):
"Convert an RGB image to (K, R, G, Y) CGA image"
inp_im= Image.open(imgfn) # assume it's RGB
out_im= Image.new("P", inp_im.size, None)
out_im.putpalette( (
0, 0, 0,
255, 0, 0,
0, 255, 0,
255, 255, 0,
) )
out_im.putdata(list(image2cga(inp_im)))
return out_im
if __name__ == "__main__":
import sys, os
for imgfn in sys.argv[1:]:
im= cvt2cga(imgfn)
dirname, filename= os.path.split(imgfn)
name, ext= os.path.splitext(filename)
newpathname= os.path.join(dirname, "cga-%s.png" % name)
im.save(newpathname)
```
This creates a PNG palette image with only the first four palette entries set to your colours. This sample image:
[](http://tzotzioy.googlepages.com/new_pic_baby2.jpg)
becomes
[](http://tzotzioy.googlepages.com/cga-new_pic_baby2.png)
It's trivial to take the output of `image2cga` (yields a sequence of 0-3 values) and pack every four values to a byte.
If you need help about what the code does, please ask and I will explain.
### EDIT1: Do not reinvent the wheel
Of course, it turns out I was too enthusiastic and, as Thomas discovered, the Image.quantize method can take a palette image as argument and do the quantization with far better results than my ad-hoc method above:
```
def cga_quantize(image):
pal_image= Image.new("P", (1,1))
pal_image.putpalette( (0,0,0, 0,255,0, 255,0,0, 255,255,0) + (0,0,0)*252)
return image.convert("RGB").quantize(palette=pal_image)
```
### EDIT1, cont: Pack the pixels into bytes
For "added value", here follows code to produce the packed string (4 pixels per byte):
```
import itertools as it

# setup: create a map with tuples [(0,0,0,0)..(3,3,3,3)] as keys
# and values [chr(0)..chr(255)], because PIL does not yet support
# 4 colour palette images
TUPLE2CHAR = {}

# Assume (b7, b6) are pixel0, (b5, b4) are pixel1 and so on
# Call it "big endian"
KEY_BUILDER = [
    (0, 64, 128, 192),  # pixel0 value used as index
    (0, 16, 32, 48),    # pixel1
    (0, 4, 8, 12),      # pixel2
    (0, 1, 2, 3),       # pixel3
]

# For "little endian", uncomment the following line
## KEY_BUILDER.reverse()

# python2.6 has itertools.product, but for compatibility purposes
# let's do it verbosely:
for ix0, px0 in enumerate(KEY_BUILDER[0]):
    for ix1, px1 in enumerate(KEY_BUILDER[1]):
        for ix2, px2 in enumerate(KEY_BUILDER[2]):
            for ix3, px3 in enumerate(KEY_BUILDER[3]):
                TUPLE2CHAR[ix0, ix1, ix2, ix3] = chr(px0 + px1 + px2 + px3)

# Another helper function, copied almost verbatim from itertools docs
def grouper(n, iterable, padvalue=None):
    "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return it.izip(*[it.chain(iterable, it.repeat(padvalue, n-1))]*n)

# now the functions
def seq2str(seq):
    """Takes a sequence of [0..3] values and packs them into bytes
    using two bits per value"""
    return ''.join(
        TUPLE2CHAR[four_pixel]
        for four_pixel in grouper(4, seq, 0))

# and the image related function
# Note that the following function is correct,
# but is not useful for Windows 16 colour bitmaps,
# which start at the *bottom* row
def image2str(img):
    return seq2str(img.getdata())
``` |
How to get file creation & modification date/times in Python? | 237,079 | 478 | 2008-10-25T21:54:56Z | 237,082 | 47 | 2008-10-25T21:58:01Z | [
"python",
"file"
] | I have a script that needs to do some stuff based on file creation & modification dates but has to run on Linux & Windows.
What's the best **cross-platform** way to get file creation & modification date/times in Python? | os.stat <http://www.python.org/doc/2.5.2/lib/module-stat.html>
edit: In newer code you should probably use [os.path.getmtime()](http://docs.python.org/library/os.path.html#os.path.getmtime) (thanks Christian Oudard)
but note that it returns a floating point value of time\_t with fractional seconds (if your OS supports it) |
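To make the floating-point return value concrete, here is a small hedged sketch; the temporary file exists only so the snippet is self-contained:

```python
import os
import tempfile
import time

# Create a throwaway file so the example is self-contained
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

mtime = os.path.getmtime(path)  # seconds since the epoch, as a float
print(time.ctime(mtime))        # human-readable form

os.remove(path)
```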
How to get file creation & modification date/times in Python? | 237,079 | 478 | 2008-10-25T21:54:56Z | 237,084 | 531 | 2008-10-25T22:00:54Z | [
"python",
"file"
] | I have a script that needs to do some stuff based on file creation & modification dates but has to run on Linux & Windows.
What's the best **cross-platform** way to get file creation & modification date/times in Python? | You have a couple of choices. For one, you can use the [os.path.getmtime](https://docs.python.org/library/os.path.html#os.path.getmtime) and [os.path.getctime](https://docs.python.org/library/os.path.html#os.path.getctime) functions:
```
import os.path, time
print "last modified: %s" % time.ctime(os.path.getmtime(file))
print "created: %s" % time.ctime(os.path.getctime(file))
```
Your other option is to use [os.stat](https://docs.python.org/library/os.html#os.stat):
```
import os, time
(mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime) = os.stat(file)
print "last modified: %s" % time.ctime(mtime)
```
**Note**: ctime() does *not* refer to creation time on \*nix systems, but rather the last time the inode data changed. (thanks to kojiro for making that fact more clear in the comments by providing a link to an interesting blog post) |
How to get file creation & modification date/times in Python? | 237,079 | 478 | 2008-10-25T21:54:56Z | 237,092 | 34 | 2008-10-25T22:05:20Z | [
"python",
"file"
] | I have a script that needs to do some stuff based on file creation & modification dates but has to run on Linux & Windows.
What's the best **cross-platform** way to get file creation & modification date/times in Python? | There are two methods to get the mod time, os.path.getmtime() or os.stat(), but the ctime is not reliable cross-platform (see below).
### [os.path.getmtime()](http://www.python.org/doc/2.5.2/lib/module-os.path.html)
**getmtime**(*path*)
\*Return the time of last modification of path. The return value is a number giving the
number of seconds since the epoch (see the time module). Raise os.error if the file does
not exist or is inaccessible. New in version 1.5.2. Changed in version 2.3: If
os.stat\_float\_times() returns True, the result is a floating point number.\*
### [os.stat()](http://www.python.org/doc/2.5.2/lib/os-file-dir.html)
**stat**(*path*)
\*Perform a stat() system call on the given path. The return value is an object whose
attributes correspond to the members of the stat structure, namely: st\_mode (protection
bits), st\_ino (inode number), st\_dev (device), st\_nlink (number of hard links), st\_uid
(user ID of owner), st\_gid (group ID of owner), st\_size (size of file, in bytes),
st\_atime (time of most recent access), **st\_mtime** (time of most recent content
modification), **st\_ctime** (platform dependent; time of most recent metadata change on Unix, or the time of creation on Windows)\*:
```
>>> import os
>>> statinfo = os.stat('somefile.txt')
>>> statinfo
(33188, 422511L, 769L, 1, 1032, 100, 926L, 1105022698,1105022732, 1105022732)
>>> statinfo.st_size
926L
>>>
```
In the above example you would use statinfo.st\_mtime or statinfo.st\_ctime to get the mtime and ctime, respectively. |
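On recent Python versions the stat result also supports named attribute access, which reads more clearly than the tuple indexing shown above; a small sketch (the temporary file is only there to make it runnable):

```python
import os
import tempfile
import time

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"data")
    name = f.name

statinfo = os.stat(name)
modified = time.ctime(statinfo.st_mtime)  # same value as the tuple's mtime slot
changed = time.ctime(statinfo.st_ctime)   # metadata change (Unix) or creation (Windows)

os.remove(name)
```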
How to get file creation & modification date/times in Python? | 237,079 | 478 | 2008-10-25T21:54:56Z | 237,093 | 10 | 2008-10-25T22:06:37Z | [
"python",
"file"
] | I have a script that needs to do some stuff based on file creation & modification dates but has to run on Linux & Windows.
What's the best **cross-platform** way to get file creation & modification date/times in Python? | `os.stat` returns a named tuple with `st_mtime` and `st_ctime` attributes. The modification time is `st_mtime` on both platforms; unfortunately, on Windows, `ctime` means "creation time", whereas on POSIX it means "change time". I'm not aware of any way to get the creation time on POSIX platforms. |
How to get file creation & modification date/times in Python? | 237,079 | 478 | 2008-10-25T21:54:56Z | 1,526,089 | 287 | 2009-10-06T14:51:26Z | [
"python",
"file"
] | I have a script that needs to do some stuff based on file creation & modification dates but has to run on Linux & Windows.
What's the best **cross-platform** way to get file creation & modification date/times in Python? | The best function to use for this is [os.path.getmtime()](http://docs.python.org/library/os.path.html#os.path.getmtime). Internally, this just uses `os.stat(filename).st_mtime`.
The datetime module is the best for manipulating timestamps, so you can get the modification date as a `datetime` object like this:
```
import os
import datetime
def modification_date(filename):
    t = os.path.getmtime(filename)
    return datetime.datetime.fromtimestamp(t)
```
Usage example:
```
>>> d = modification_date('/var/log/syslog')
>>> print d
2009-10-06 10:50:01
>>> print repr(d)
datetime.datetime(2009, 10, 6, 10, 50, 1)
``` |
How to get file creation & modification date/times in Python? | 237,079 | 478 | 2008-10-25T21:54:56Z | 39,501,288 | 20 | 2016-09-14T23:51:54Z | [
"python",
"file"
] | I have a script that needs to do some stuff based on file creation & modification dates but has to run on Linux & Windows.
What's the best **cross-platform** way to get file creation & modification date/times in Python? | Getting some sort of modification date in a cross-platform way is easy - just call [`os.path.getmtime(path)`](https://docs.python.org/library/os.path.html#os.path.getmtime) and you'll get the Unix timestamp of when the file at `path` was last modified.
Getting file *creation* dates, on the other hand, is fiddly and platform-dependent, differing even between the three big OSes:
* On **Windows**, a file's `ctime` (documented at <https://msdn.microsoft.com/en-us/library/14h5k7ff.aspx>) stores its creation date. You can access this in Python through [`os.path.getctime()`](https://docs.python.org/library/os.path.html#os.path.getctime) or the [`.st_ctime`](https://docs.python.org/3/library/os.html#os.stat_result.st_ctime) attribute of the result of a call to [`os.stat()`](https://docs.python.org/3/library/os.html#os.stat). This *won't* work on Unix, where the `ctime` [is the last time that the file's attributes *or* content were changed](http://www.linux-faqs.info/general/difference-between-mtime-ctime-and-atime).
* On **Mac**, as well as some other Unix-based OSes, you can use the [`.st_birthtime`](https://docs.python.org/3/library/os.html#os.stat_result.st_birthtime) attribute of the result of a call to `os.stat()`.
* On **Linux**, this is currently impossible, at least without writing a C extension for Python. Although some file systems commonly used with Linux [do store creation dates](http://unix.stackexchange.com/questions/7562/what-file-systems-on-linux-store-the-creation-time) (for example, `ext4` stores them in `st_crtime`) , the Linux kernel [offers no way of accessing them](http://unix.stackexchange.com/questions/91197/how-to-find-creation-date-of-file); in particular, the structs it returns from `stat()` calls in C, as of the latest kernel version, [don't contain any creation date fields](https://github.com/torvalds/linux/blob/v4.8-rc6/include/linux/stat.h). You can also see that the identifier `st_crtime` doesn't currently feature anywhere in the [Python source](https://github.com/python/cpython/search?utf8=%E2%9C%93&q=st_crtime). At least if you're on `ext4`, the data *is* attached to the inodes in the file system, but there's no convenient way of accessing it.
The next-best thing on Linux is to access the file's `mtime`, through either [`os.path.getmtime()`](https://docs.python.org/library/os.path.html#os.path.getmtime) or the [`.st_mtime`](https://docs.python.org/3/library/os.html#os.stat_result.st_ctime) attribute of an `os.stat()` result. This will give you the last time the file's content was modified, which may be adequate for some use cases.
Putting this all together, cross-platform code should look something like this...
```
import os
import platform
def creation_date(path_to_file):
    """
    Try to get the date that a file was created, falling back to when it was
    last modified if that isn't possible.
    See http://stackoverflow.com/a/39501288/1709587 for explanation.
    """
    if platform.system() == 'Windows':
        return os.path.getctime(path_to_file)
    else:
        stat = os.stat(path_to_file)
        try:
            return stat.st_birthtime
        except AttributeError:
            # We're probably on Linux. No easy way to get creation dates here,
            # so we'll settle for when its content was last modified.
            return stat.st_mtime
``` |
Is there a reason Python strings don't have a string length method? | 237,128 | 169 | 2008-10-25T22:37:03Z | 237,133 | 12 | 2008-10-25T22:38:58Z | [
"python"
] | I know that python has a `len()` function that is used to determine the size of a string, but I was wondering why it's not a method of the string object.
## **Update**
Ok, I realized I was embarrassingly mistaken. `__len__()` is actually a method of a string object. It just seems weird to see object oriented code in Python using the len function on string objects. Furthermore, it's also weird to see `__len__` as the name instead of just len. | ```
met% python -c 'import this' | grep 'only one'
There should be one-- and preferably only one --obvious way to do it.
``` |
Is there a reason Python strings don't have a string length method? | 237,128 | 169 | 2008-10-25T22:37:03Z | 237,149 | 35 | 2008-10-25T22:49:54Z | [
"python"
] | I know that python has a `len()` function that is used to determine the size of a string, but I was wondering why it's not a method of the string object.
## **Update**
Ok, I realized I was embarrassingly mistaken. `__len__()` is actually a method of a string object. It just seems weird to see object oriented code in Python using the len function on string objects. Furthermore, it's also weird to see `__len__` as the name instead of just len. | There is a `len` method:
```
>>> a = 'a string of some length'
>>> a.__len__()
23
>>> a.__len__
<method-wrapper '__len__' of str object at 0x02005650>
``` |
Is there a reason Python strings don't have a string length method? | 237,128 | 169 | 2008-10-25T22:37:03Z | 237,150 | 155 | 2008-10-25T22:51:19Z | [
"python"
] | I know that python has a `len()` function that is used to determine the size of a string, but I was wondering why it's not a method of the string object.
## **Update**
Ok, I realized I was embarrassingly mistaken. `__len__()` is actually a method of a string object. It just seems weird to see object oriented code in Python using the len function on string objects. Furthermore, it's also weird to see `__len__` as the name instead of just len. | Strings do have a length method: `__len__()`
The protocol in Python is to implement this method on objects which have a length and use the built-in [`len()`](http://www.python.org/doc/2.5.2/lib/built-in-funcs.html#l2h-45) function, which calls it for you, similar to the way you would implement `__iter__()` and use the built-in `iter()` function (or have the method called behind the scenes for you) on objects which are iterable.
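A minimal sketch of that protocol; the `Playlist` class here is purely illustrative:

```python
class Playlist(object):
    def __init__(self, songs):
        self._songs = list(songs)

    def __len__(self):
        # len(some_playlist) ends up calling this method
        return len(self._songs)

mix = Playlist(['intro', 'verse', 'chorus'])
print(len(mix))  # -> 3
```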
See [Emulating container types](http://www.python.org/doc/2.5.2/ref/sequence-types.html) for more information.
Here's a good read on the subject of protocols in Python: [Python and the Principle of Least Astonishment](http://lucumr.pocoo.org/2011/7/9/python-and-pola/) |
Is there a reason Python strings don't have a string length method? | 237,128 | 169 | 2008-10-25T22:37:03Z | 237,312 | 80 | 2008-10-26T01:11:08Z | [
"python"
] | I know that python has a `len()` function that is used to determine the size of a string, but I was wondering why it's not a method of the string object.
## **Update**
Ok, I realized I was embarrassingly mistaken. `__len__()` is actually a method of a string object. It just seems weird to see object oriented code in Python using the len function on string objects. Furthermore, it's also weird to see `__len__` as the name instead of just len. | Jim's answer to [this question](http://stackoverflow.com/questions/83983/why-isnt-the-len-function-inherited-by-dictionaries-and-lists-in-python) may help; I copy it here. Quoting Guido van Rossum:
> First of all, I chose len(x) over x.len() for HCI reasons (def \_\_len\_\_() came much later). There are two intertwined reasons actually, both HCI:
>
> (a) For some operations, prefix notation just reads better than postfix -- prefix (and infix!) operations have a long tradition in mathematics, which likes notations where the visuals help the mathematician thinking about a problem. Compare the ease with which we rewrite a formula like x\*(a+b) into x\*a + x\*b to the clumsiness of doing the same thing using a raw OO notation.
>
> (b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn't a file has a write() method.
>
> Saying the same thing in another way, I see 'len' as a built-in operation. I'd hate to lose that. [...] |
Is there a reason Python strings don't have a string length method? | 237,128 | 169 | 2008-10-25T22:37:03Z | 23,192,800 | 26 | 2014-04-21T07:19:39Z | [
"python"
] | I know that python has a `len()` function that is used to determine the size of a string, but I was wondering why it's not a method of the string object.
## **Update**
Ok, I realized I was embarrassingly mistaken. `__len__()` is actually a method of a string object. It just seems weird to see object oriented code in Python using the len function on string objects. Furthermore, it's also weird to see `__len__` as the name instead of just len. | Python is a pragmatic programming language, and the reasons for `len()` being a function and not a method of `str`, `list`, `dict` etc. are pragmatic.
The `len()` built-in function deals directly with built-in types: the CPython implementation of `len()` actually returns the value of the `ob_size` field in the [`PyVarObject` C struct](http://hg.python.org/cpython/file/8c8315bac6a8/Include/object.h#l111) that represents any variable-sized built-in object in memory. This is **much** faster than calling a method -- no attribute lookup needs to happen. Getting the number of items in a collection is a common operation and must work efficiently for such basic and diverse types as `str`, `list`, `array.array` etc.
However, to promote consistency, when applying `len(o)` to a user-defined type, Python calls `o.__len__()` as a fallback. `__len__`, `__abs__` and all the other special methods documented in the [Python Data Model](https://docs.python.org/3.4/reference/datamodel.html) make it easy to create objects that behave like the built-ins, enabling the expressive and highly consistent APIs we call "Pythonic".
By implementing special methods your objects can support iteration, overload infix operators, manage contexts in **`with`** blocks etc. You can think of the Data Model as a way of using the Python language itself as a framework where the objects you create can be integrated seamlessly.
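As a rough illustration of that integration (the `Deck` class is made up for this example), implementing just `__len__` and `__getitem__` gives an object `len()`, indexing, and iteration:

```python
class Deck(object):
    ranks = ['A', 'K', 'Q', 'J']

    def __len__(self):
        return len(self.ranks)

    def __getitem__(self, position):
        return self.ranks[position]

deck = Deck()
# __getitem__ alone is enough for the for-loop protocol to work
names = [card for card in deck]
```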
A second reason, supported by quotes from Guido van Rossum like [this one](http://effbot.org/pyfaq/why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list.htm), is that it is easier to read and write `len(s)` than `s.len()`.
The notation `len(s)` is consistent with unary operators with prefix notation, like `abs(n)`. `len()` is used way more often than `abs()`, and it deserves to be as easy to write.
There may also be a historical reason: in the ABC language which preceded Python (and was very influential in its design), there was a unary operator written as `#s` which meant `len(s)`. |
Multiple mouse pointers? | 237,155 | 15 | 2008-10-25T22:54:43Z | 262,789 | 8 | 2008-11-04T18:14:04Z | [
"python",
"user-interface",
"mouse",
"multi-user"
] | Is there a way to accept input from more than one mouse separately? I'm interested in making a multi-user application and I thought it would be great if I could have 2 or more users holding wireless mice each interacting with the app individually with a separate mouse arrow.
Is this something I should try to farm out to some other application/driver/os\_magic? Or is there a library I can use to accomplish this? Language isn't a *HUGE* deal, but C, C++, and Python are preferable.
Thanks :)
edit:
Found this multi-pointer toolkit for linux (it's actually a multi-pointer x server):
<http://wearables.unisa.edu.au/mpx/> | You could try the [Microsoft Windows MultiPoint Software Development Kit 1.1](http://www.microsoft.com/downloads/details.aspx?familyid=F851122A-4925-4788-BC39-409644CE0F9B&displaylang=en)
or the new
[Microsoft Windows MultiPoint Software Development Kit 1.5](http://www.microsoft.com/downloads/details.aspx?FamilyID=0eb18c26-5e02-4c90-ae46-06662818f817&displaylang=en)
and the main [Microsoft Multipoint](http://www.microsoft.com/multipoint/mouse-sdk/) site |
Any python libs for parsing apache config files? | 237,209 | 8 | 2008-10-25T23:36:52Z | 2,450,905 | 7 | 2010-03-15T22:27:46Z | [
"python",
"parsing",
"apache-config"
] | Are there any Python libs for parsing Apache config files? If not Python, is anyone aware of such a thing in other languages (Perl, PHP, Java, C#)?
Then I'd be able to rewrite it in Python. | I did find an interesting Apache Config parser for python here: <http://www.poldylicious.de/node/25>
The Apache Config Parser mentioned is not documented, but it does work. |
How to disable HTML encoding when using Context in django | 237,235 | 13 | 2008-10-26T00:09:08Z | 237,243 | 15 | 2008-10-26T00:13:28Z | [
"python",
"django",
"django-templates"
] | In my Django application I am using a template to construct an email body. One of the parameters is a URL; note there are two parameters separated by an ampersand in the URL.
```
t = loader.get_template("sometemplate")
c = Context({
    'foo': 'bar',
    'url': 'http://127.0.0.1/test?a=1&b=2',
})
print t.render(c)
```
After rendering it produces: `http://127.0.0.1/test?a=1&amp;b=2`
Note the ampersand is HTML-encoded as `&amp;`. One way around the problem is to pass each parameter separately to my template and construct the URL in the template; however, I'd like to avoid doing that.
Is there a way to disable HTML encoding of context parameters or at the very least avoid encoding of ampersands? | To turn it off for a single variable, use `mark_safe`:
```
from django.utils.safestring import mark_safe
t = loader.get_template("sometemplate")
c = Context({
    'foo': 'bar',
    'url': mark_safe('http://127.0.0.1/test?a=1&b=2'),
})
print t.render(c)
```
Alternatively, to totally turn autoescaping off from your Python code, [use the `autoescape` argument when initialising a `Context`](http://docs.djangoproject.com/en/dev/howto/custom-template-tags/#auto-escaping-considerations):
```
c = Context({
    'foo': 'bar',
    'url': 'http://127.0.0.1/test?a=1&b=2',
}, autoescape=False)
```
The [How to turn [Automatic HTML escaping] off](http://docs.djangoproject.com/en/dev/topics/templates/#how-to-turn-it-off) section of the documentation covers some of the in-template options if you'd rather do it there. |
python properties and inheritance | 237,432 | 49 | 2008-10-26T02:49:09Z | 237,447 | 7 | 2008-10-26T03:07:11Z | [
"python",
"inheritance",
"properties",
"polymorphism"
] | I have a base class with a property which (the get method) I want to overwrite in the subclass. My first thought was something like:
```
class Foo(object):
    def _get_age(self):
        return 11

    age = property(_get_age)

class Bar(Foo):
    def _get_age(self):
        return 44
```
This does not work (subclass bar.age returns 11). I found a solution with a lambda expression which works:
```
age = property(lambda self: self._get_age())
```
So is this the right solution for using properties and overwrite them in a subclass, or are there other preferred ways to do this? | Yes, this is the way to do it; the property declaration executes at the time the parent class' definition is executed, which means it can only "see" the versions of the methods which exist on the parent class. So when you redefine one or more of those methods on a child class, you need to re-declare the property using the child class' version of the method(s). |
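The question's lambda-based re-declaration can be sketched end-to-end like this, showing that the subclass's method is picked up:

```python
class Foo(object):
    def _get_age(self):
        return 11

    # The lambda defers the _get_age lookup to access time,
    # so an override in a subclass is honoured
    age = property(lambda self: self._get_age())

class Bar(Foo):
    def _get_age(self):
        return 44
```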
python properties and inheritance | 237,432 | 49 | 2008-10-26T02:49:09Z | 237,858 | 61 | 2008-10-26T10:50:15Z | [
"python",
"inheritance",
"properties",
"polymorphism"
] | I have a base class with a property which (the get method) I want to overwrite in the subclass. My first thought was something like:
```
class Foo(object):
    def _get_age(self):
        return 11

    age = property(_get_age)

class Bar(Foo):
    def _get_age(self):
        return 44
```
This does not work (subclass bar.age returns 11). I found a solution with a lambda expression which works:
```
age = property(lambda self: self._get_age())
```
So is this the right solution for using properties and overwrite them in a subclass, or are there other preferred ways to do this? | I simply prefer to repeat the `property()` declaration, just as you would repeat the `@classmethod` decorator when overriding a class method.
While this seems very verbose, at least by Python standards, you may notice:
1) for read only properties, `property` can be used as a decorator:
```
class Foo(object):
    @property
    def age(self):
        return 11

class Bar(Foo):
    @property
    def age(self):
        return 44
```
2) in Python 2.6, [properties grew a pair of methods](http://docs.python.org/library/functions.html#property) `setter` and `deleter` which can be used to apply to general properties the shortcut already available for read-only ones:
```
class C(object):
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value
``` |
python properties and inheritance | 237,432 | 49 | 2008-10-26T02:49:09Z | 291,707 | 10 | 2008-11-14T22:52:52Z | [
"python",
"inheritance",
"properties",
"polymorphism"
] | I have a base class with a property which (the get method) I want to overwrite in the subclass. My first thought was something like:
```
class Foo(object):
    def _get_age(self):
        return 11

    age = property(_get_age)

class Bar(Foo):
    def _get_age(self):
        return 44
```
This does not work (subclass bar.age returns 11). I found a solution with a lambda expression which works:
```
age = property(lambda self: self._get_age())
```
So is this the right solution for using properties and overwrite them in a subclass, or are there other preferred ways to do this? | Another way to do it, without having to create any additional classes. I've added a set method to show what you do if you only override one of the two:
```
class Foo(object):
    def _get_age(self):
        return 11

    def _set_age(self, age):
        self._age = age

    age = property(_get_age, _set_age)

class Bar(Foo):
    def _get_age(self):
        return 44

    age = property(_get_age, Foo._set_age)
```
This is a pretty contrived example, but you should get the idea. |
How do I parse a listing of files to get just the filenames in python? | 237,699 | 2 | 2008-10-26T07:42:17Z | 237,705 | 10 | 2008-10-26T07:55:46Z | [
"python",
"parsing",
"scripting"
] | So let's say I'm using Python's [ftplib](http://www.python.org/doc/2.5.2/lib/module-ftplib.html) to retrieve a list of log files from an FTP server. How would I parse that list of files to get just the file names (the last column) inside a list? See the link above for example output. | Using retrlines() probably isn't the best idea there, since it just prints to the console and so you'd have to do tricky things to even get at that output. A likely better bet would be to use the nlst() method, which returns exactly what you want: a list of the file names. |
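A hedged sketch of the nlst() approach; the host and credentials are placeholders, and the `.log` filtering is just one plausible post-processing step:

```python
from ftplib import FTP

def log_names(names):
    """Keep only the *.log entries from an nlst()-style listing."""
    return [name for name in names if name.endswith('.log')]

def fetch_log_names(host, user, password):
    # Placeholder credentials; FTP.nlst() returns a plain list of names,
    # so no parsing of console output is needed.
    ftp = FTP(host)
    ftp.login(user, password)
    try:
        return log_names(ftp.nlst())
    finally:
        ftp.quit()

sample = log_names(['access.log', 'notes.txt', 'error.log'])
```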
How do you log server errors on django sites | 238,081 | 139 | 2008-10-26T14:37:42Z | 238,111 | 74 | 2008-10-26T14:53:31Z | [
"python",
"django",
"error-logging"
] | So, when playing with the development I can just set `settings.DEBUG` to `True`, and if an error occurs I can see it nicely formatted, with a good stack trace and request information.
But on a production site I'd rather use `DEBUG=False` and show visitors a standard error 500 page with information that I'm working on fixing this bug at the moment ;)
At the same time I'd like to have some way of logging all that information (stack trace and request info) to a file on my server - so I can just output it to my console and watch errors scroll, email the log to me every hour or something like this.
What logging solutions would you recommend for a Django site that would meet those simple requirements? I have the application running as an `fcgi` server and I'm using the Apache web server as a frontend (although thinking of going to lighttpd). | Well, when `DEBUG = False`, Django will automatically mail a full traceback of any error to each person listed in the `ADMINS` setting, which gets you notifications pretty much for free. If you'd like more fine-grained control, you can write and add to your settings a middleware class which defines a method named `process_exception()`, which will have access to the exception that was raised:
<http://docs.djangoproject.com/en/dev/topics/http/middleware/#process-exception>
Your `process_exception()` method can then perform whatever type of logging you'd like: writing to console, writing to a file, etc., etc.
Edit: though it's a bit less useful, you can also listen for the `got_request_exception` signal, which will be sent whenever an exception is encountered during request processing:
<http://docs.djangoproject.com/en/dev/ref/signals/#got-request-exception>
This does *not* give you access to the exception object, however, so the middleware method is much easier to work with. |
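A rough sketch of such a middleware class (the class and logger names are invented for illustration; Django passes the request and the raised exception to `process_exception()`):

```python
import logging
import traceback

logger = logging.getLogger('myproject.errors')

class ExceptionLoggingMiddleware(object):
    """Log every unhandled exception, then let normal 500 handling proceed."""

    def process_exception(self, request, exception):
        logger.error('Unhandled exception on %s: %r\n%s',
                     getattr(request, 'path', '<unknown>'),
                     exception,
                     traceback.format_exc())
        return None  # returning None keeps Django's usual error response

# Quick self-check with a stand-in request object
class _FakeRequest(object):
    path = '/some/url/'

result = ExceptionLoggingMiddleware().process_exception(_FakeRequest(), ValueError('boom'))
```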
How do you log server errors on django sites | 238,081 | 139 | 2008-10-26T14:37:42Z | 239,882 | 27 | 2008-10-27T13:33:44Z | [
"python",
"django",
"error-logging"
] | So, when playing with the development I can just set `settings.DEBUG` to `True`, and if an error occurs I can see it nicely formatted, with a good stack trace and request information.
But on a production site I'd rather use `DEBUG=False` and show visitors a standard error 500 page with information that I'm working on fixing this bug at the moment ;)
At the same time I'd like to have some way of logging all that information (stack trace and request info) to a file on my server - so I can just output it to my console and watch errors scroll, email the log to me every hour or something like this.
What logging solutions would you recommend for a Django site that would meet those simple requirements? I have the application running as an `fcgi` server and I'm using the Apache web server as a frontend (although thinking of going to lighttpd). | Obviously James is correct, but if you wanted to log exceptions in a datastore there are a few open-source solutions already available:
1) CrashLog is a good choice: <http://code.google.com/p/django-crashlog/>
2) Db-Log is a good choice as well: <http://code.google.com/p/django-db-log/>
What is the difference between the two? Almost nothing that I can see, so either one will suffice.
I've used both and they work well. |
How do you log server errors on django sites | 238,081 | 139 | 2008-10-26T14:37:42Z | 4,198,664 | 39 | 2010-11-16T20:34:56Z | [
"python",
"django",
"error-logging"
] | So, when playing with the development I can just set `settings.DEBUG` to `True`, and if an error occurs I can see it nicely formatted, with a good stack trace and request information.
But on a production site I'd rather use `DEBUG=False` and show visitors a standard error 500 page with information that I'm working on fixing this bug at the moment ;)
At the same time I'd like to have some way of logging all that information (stack trace and request info) to a file on my server - so I can just output it to my console and watch errors scroll, email the log to me every hour or something like this.
What logging solutions would you recommend for a Django site that would meet those simple requirements? I have the application running as an `fcgi` server and I'm using the Apache web server as a frontend (although thinking of going to lighttpd). | django-db-log, mentioned in another answer, has been replaced with:
<https://github.com/dcramer/django-sentry> |
How do you log server errors on django sites | 238,081 | 139 | 2008-10-26T14:37:42Z | 6,395,837 | 64 | 2011-06-18T11:58:01Z | [
"python",
"django",
"error-logging"
] | So, when playing with the development I can just set `settings.DEBUG` to `True`, and if an error occurs I can see it nicely formatted, with a good stack trace and request information.
But on a production site I'd rather use `DEBUG=False` and show visitors a standard error 500 page with information that I'm working on fixing this bug at the moment ;)
At the same time I'd like to have some way of logging all that information (stack trace and request info) to a file on my server - so I can just output it to my console and watch errors scroll, email the log to me every hour or something like this.
What logging solutions would you recommend for a Django site that would meet those simple requirements? I have the application running as an `fcgi` server and I'm using the Apache web server as a frontend (although thinking of going to lighttpd). | Django Sentry is a good way to go, as already mentioned, but there is a bit of work involved in setting it up properly (as a separate website). If you just want to log everything to a simple text file, here's the logging configuration to put in your `settings.py`:
```
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # Include the default Django email handler for errors
        # This is what you'd get without configuring logging at all.
        'mail_admins': {
            'class': 'django.utils.log.AdminEmailHandler',
            'level': 'ERROR',
            # But the emails are plain text by default - HTML is nicer
            'include_html': True,
        },
        # Log to a text file that can be rotated by logrotate
        'logfile': {
            'class': 'logging.handlers.WatchedFileHandler',
            'filename': '/var/log/django/myapp.log'
        },
    },
    'loggers': {
        # Again, default Django configuration to email unhandled exceptions
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        # Might as well log any errors anywhere else in Django
        'django': {
            'handlers': ['logfile'],
            'level': 'ERROR',
            'propagate': False,
        },
        # Your own app - this assumes all your logger names start with "myapp."
        'myapp': {
            'handlers': ['logfile'],
            'level': 'WARNING',  # Or maybe INFO or DEBUG
            'propagate': False
        },
    },
}
``` |
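For completeness, here is a hedged sketch of how application code would feed that configuration. The module name `myapp.views` and the view function are invented for illustration; the only real assumption is the one the config above already makes, namely that your logger names start with `myapp.`:

```python
import logging

# Hypothetical module myapp/views.py -- the logger name falls under the
# "myapp" logger configured above, so records end up in the logfile handler.
logger = logging.getLogger("myapp.views")

def risky_view():
    try:
        1 / 0
    except ZeroDivisionError:
        # exc_info=True attaches the full stack trace to the log record
        logger.error("Something went wrong in risky_view", exc_info=True)
```

Unhandled exceptions in views still reach the `django.request` logger automatically; explicit calls like this are for errors you catch yourself.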
How do you log server errors on django sites | 238,081 | 139 | 2008-10-26T14:37:42Z | 19,267,228 | 10 | 2013-10-09T08:51:51Z | [
"python",
"django",
"error-logging"
] | So, while developing I can just set `settings.DEBUG` to `True`, and if an error occurs I can see it nicely formatted, with a good stack trace and request information.
But on a production site I'd rather use `DEBUG=False` and show visitors a standard error 500 page with a note that I'm working on fixing this bug at the moment ;)
At the same time I'd like to have some way of logging all of that information (stack trace and request info) to a file on my server, so I can output it to my console and watch errors scroll, have the log emailed to me every hour, or something like that.
What logging solutions would you recommend for a Django site that would meet these simple requirements? I have the application running as an `fcgi` server, and I'm using the Apache web server as a frontend (although I'm thinking of moving to lighttpd). | Some time has passed since EMP's most helpful code submission. I just now implemented it, and while thrashing around with a manage.py option trying to chase down a bug, I got a deprecation warning to the effect that with my current version of Django (1.5.?) a require\_debug\_false filter is now needed for the mail\_admins handler.
Here is the revised code:
```
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
# Include the default Django email handler for errors
# This is what you'd get without configuring logging at all.
'mail_admins': {
'class': 'django.utils.log.AdminEmailHandler',
'level': 'ERROR',
'filters': ['require_debug_false'],
# But the emails are plain text by default - HTML is nicer
'include_html': True,
},
# Log to a text file that can be rotated by logrotate
'logfile': {
'class': 'logging.handlers.WatchedFileHandler',
'filename': '/home/username/public_html/djangoprojectname/logfilename.log'
},
},
'loggers': {
# Again, default Django configuration to email unhandled exceptions
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
# Might as well log any errors anywhere else in Django
'django': {
'handlers': ['logfile'],
'level': 'ERROR',
'propagate': False,
},
# Your own app - this assumes all your logger names start with "myapp."
'myapp': {
'handlers': ['logfile'],
'level': 'DEBUG', # Or maybe INFO or WARNING
'propagate': False
},
},
}
``` |
Pattern matching of lists in Python | 238,102 | 31 | 2008-10-26T14:50:40Z | 238,124 | 48 | 2008-10-26T15:01:12Z | [
"python",
"functional-programming",
"pattern-matching"
] | I want to do some pattern matching on lists in Python. For example, in Haskell, I can do something like the following:
```
fun (head : rest) = ...
```
So when I pass in a list, `head` will be the first element, and `rest` will be the trailing elements.
Likewise, in Python, I can automatically unpack tuples:
```
(var1, var2) = func_that_returns_a_tuple()
```
I want to do something similar with lists in Python. Right now, I have a function that returns a list, and a chunk of code that does the following:
```
ls = my_func()
(head, rest) = (ls[0], ls[1:])
```
I wondered if I could somehow do that in one line in Python, instead of two. | So far as I know there's no way to make it a one-liner in current Python without introducing another function, e.g.:
```
split_list = lambda lst: (lst[0], lst[1:])
head, rest = split_list(my_func())
```
However, in Python 3.0 the specialized syntax used for variadic argument signatures and argument unpacking will become available for this type of general sequence unpacking as well, so in 3.0 you'll be able to write:
```
head, *rest = my_func()
```
See [PEP 3132](https://www.python.org/dev/peps/pep-3132/) for details. |
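For illustration, a small runnable sketch of that syntax (the `my_func` here is just a stand-in for the function in the question):

```python
def my_func():  # stand-in for the function in the question
    return [1, 2, 3, 4]

head, *rest = my_func()  # extended unpacking (PEP 3132, Python 3)
assert head == 1
assert rest == [2, 3, 4]

# The starred name may appear in any position, and always binds a list:
first, *middle, last = [10, 20, 30, 40]
assert middle == [20, 30]
```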
Pattern matching of lists in Python | 238,102 | 31 | 2008-10-26T14:50:40Z | 238,170 | 28 | 2008-10-26T16:05:12Z | [
"python",
"functional-programming",
"pattern-matching"
] | I want to do some pattern matching on lists in Python. For example, in Haskell, I can do something like the following:
```
fun (head : rest) = ...
```
So when I pass in a list, `head` will be the first element, and `rest` will be the trailing elements.
Likewise, in Python, I can automatically unpack tuples:
```
(var1, var2) = func_that_returns_a_tuple()
```
I want to do something similar with lists in Python. Right now, I have a function that returns a list, and a chunk of code that does the following:
```
ls = my_func()
(head, rest) = (ls[0], ls[1:])
```
I wondered if I could somehow do that in one line in Python, instead of two. | First of all, please note that the "pattern matching" of functional languages and the assignment to tuples you mention are not really that similar. In functional languages the patterns are used to give partial definitions of a function. So `f (x : s) = e` does not mean take the head and tail of the argument of `f` and return `e` using them, but it means that *if* the argument of `f` is of the form `x : s` (for some `x` and `s`), *then* `f (x : s)` is equal to `e`.
Python's assignment is more like a multiple assignment (I suspect that was its original intention). So you write, for example, `x, y = y, x` to swap the values in `x` and `y` without needing a temporary variable (as you would with a simple assignment statement). This has little to do with pattern matching, as it is basically shorthand for the "simultaneous" execution of `x = y` and `y = x`. Although Python allows arbitrary sequences instead of comma-separated lists, I would not suggest calling this pattern matching. With pattern matching you check whether or not something matches a pattern; in the Python assignment you should ensure that the sequences on both sides are the same.
To do what you seem to want, you would usually (also in functional languages) use either an auxiliary function (as mentioned by others) or something similar to `let` or `where` constructs (which you can regard as using anonymous functions). For example:
```
(head, tail) = (x[0], x[1:]) where x = my_func()
```
Or, in actual python:
```
(head, tail) = (lambda x: (x[0], x[1:]))(my_func())
```
Note that this is essentially the same as the solutions given by others with an auxiliary function except that this is the one-liner you wanted. It is, however, not necessarily better than a separate function.
(Sorry if my answer is a bit over the top. I just think it's important to make the distinction clear.) |
How can I support wildcards in user-defined search strings in Python? | 238,600 | 8 | 2008-10-26T20:52:35Z | 238,602 | 14 | 2008-10-26T20:54:30Z | [
"python",
"search",
"parsing",
"string",
"wildcard"
] | Is there a simple way to support wildcards ("\*") when searching strings - without using RegEx?
Users are supposed to enter search terms using wildcards, but should not have to deal with the complexity of RegEx:
```
"foo*" => str.startswith("foo")
"*foo" => str.endswith("foo")
"*foo*" => "foo" in str
```
(it gets more complicated when there are multiple search terms though, e.g. "foo\*bar\*baz")
This seems like a common issue, so I wonder whether there's a ready-made solution for it.
Any help would be greatly appreciated! | You could try the [`fnmatch`](http://www.python.org/doc/2.5.2/lib/module-fnmatch.html) module, it's got a shell-like wildcard syntax. |
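For example, a quick sketch of how the patterns from the question map onto `fnmatch` (no regex knowledge needed on the user's side):

```python
from fnmatch import fnmatch, fnmatchcase

# Shell-style wildcards: * matches any run of characters
assert fnmatch("foobar", "foo*")              # like str.startswith("foo")
assert fnmatch("myfoo", "*foo")               # like str.endswith("foo")
assert fnmatch("xfooy", "*foo*")              # like "foo" in str
assert fnmatch("foo_bar_baz", "foo*bar*baz")  # multiple terms work too

# fnmatch follows the OS's case rules; fnmatchcase is always case-sensitive
assert fnmatchcase("Foo", "Foo")
```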
How can I call a DLL from a scripting language? | 239,020 | 9 | 2008-10-27T03:42:07Z | 239,041 | 15 | 2008-10-27T03:57:11Z | [
"python",
"perl",
"dll"
] | I have a third-party product, a terminal emulator, which provides a DLL that can be linked to a C program to basically automate the driving of this product (send keystrokes, detect what's on the screen and so forth).
I want to drive it from a scripting language (I'm comfortable with Python and slightly less so with Perl) so that we don't have to compile and send out executables to our customers whenever there's a problem found.
We also want the customers to be able to write their own scripts using ours as baselines and they won't entertain the idea of writing and compiling C code.
What's a good way of getting Python/Perl to interface to a Windows DLL? My first thought was to write a server program and have a Python script communicate with it via TCP, but there's got to be an easier solution. | One way to call C libraries from Python is to use [ctypes](https://docs.python.org/library/ctypes.html):
```
>>> from ctypes import *
>>> windll.user32.MessageBoxA(None, "Hello world", "ctypes", 0)
``` |
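As a slightly fuller (non-Windows) sketch of the same idea: the library lookup and the use of `libc` below are purely illustrative stand-ins for the vendor DLL in the question. On Windows you would load that DLL with `ctypes.WinDLL('theirlib.dll')` instead and declare each function's signature the same way:

```python
import ctypes
import ctypes.util

# Load the C runtime as a stand-in for the terminal emulator's DLL;
# the fallback name assumes a glibc-style system.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Declaring argtypes/restype lets ctypes catch mismatched calls early
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

assert libc.abs(-5) == 5
```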
How can I call a DLL from a scripting language? | 239,020 | 9 | 2008-10-27T03:42:07Z | 239,043 | 12 | 2008-10-27T03:58:05Z | [
"python",
"perl",
"dll"
] | I have a third-party product, a terminal emulator, which provides a DLL that can be linked to a C program to basically automate the driving of this product (send keystrokes, detect what's on the screen and so forth).
I want to drive it from a scripting language (I'm comfortable with Python and slightly less so with Perl) so that we don't have to compile and send out executables to our customers whenever there's a problem found.
We also want the customers to be able to write their own scripts using ours as baselines and they won't entertain the idea of writing and compiling C code.
What's a good way of getting Python/Perl to interface to a Windows DLL? My first thought was to write a server program and have a Python script communicate with it via TCP, but there's got to be an easier solution. | In Perl, [Win32::API](http://search.cpan.org/perldoc?Win32::API) is an easy way to do some interfacing to DLLs. There is also [Inline::C](http://search.cpan.org/perldoc?Inline::C), if you have access to a compiler and the Windows headers.
Perl [XSUB](http://search.cpan.org/perldoc?perlxs)s can also create an interface between Perl and C. |
Python list of lists, changes reflected across sublists unexpectedly | 240,178 | 227 | 2008-10-27T14:57:22Z | 240,202 | 19 | 2008-10-27T15:02:54Z | [
"python",
"list",
"nested-lists",
"mutable"
] | I needed to create a list of lists in Python, so I typed the following:
```
myList = [[1] * 4] * 3
```
The list looked like this:
```
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```
Then I changed one of the innermost values:
```
myList[0][0] = 5
```
Now my list looks like this:
```
[[5, 1, 1, 1], [5, 1, 1, 1], [5, 1, 1, 1]]
```
which is not what I wanted or expected. Can someone please explain what's going on, and how to get around it? | ```
[[1] * 4] * 3
```
or even
```
[[1, 1, 1, 1]] * 3
```
Creates a list that references the internal `[1,1,1,1]` 3 times - not three copies of the inner list - so any time you modify the inner list (through any of the three outer positions), you'll see the change three times.
It's the same as this example:
```
>>> inner = [1,1,1,1]
>>> outer = [inner]*3
>>> outer
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
>>> inner[0] = 5
>>> outer
[[5, 1, 1, 1], [5, 1, 1, 1], [5, 1, 1, 1]]
```
where it's probably a little less surprising. |
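A quick way to see the sharing directly is `is`, which tests object identity:

```python
inner = [1, 1, 1, 1]
outer = [inner] * 3

# All three slots hold the very same object, not copies
assert outer[0] is outer[1] is outer[2]

# Building the rows in a comprehension creates a new list each time
fixed = [[1] * 4 for _ in range(3)]
assert fixed[0] is not fixed[1]
fixed[0][0] = 5
assert fixed == [[5, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```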
Python list of lists, changes reflected across sublists unexpectedly | 240,178 | 227 | 2008-10-27T14:57:22Z | 240,205 | 186 | 2008-10-27T15:03:30Z | [
"python",
"list",
"nested-lists",
"mutable"
] | I needed to create a list of lists in Python, so I typed the following:
```
myList = [[1] * 4] * 3
```
The list looked like this:
```
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```
Then I changed one of the innermost values:
```
myList[0][0] = 5
```
Now my list looks like this:
```
[[5, 1, 1, 1], [5, 1, 1, 1], [5, 1, 1, 1]]
```
which is not what I wanted or expected. Can someone please explain what's going on, and how to get around it? | When you write `[x]*3` you get, essentially, the list `[x, x, x]`. That is, a list with 3 references to the same `x`. When you then modify this single `x` it is visible via all three references to it.
To fix it, you need to make sure that you create a new list at each position. One way to do it is
```
[[1]*4 for n in range(3)]
``` |
Python list of lists, changes reflected across sublists unexpectedly | 240,178 | 227 | 2008-10-27T14:57:22Z | 240,215 | 26 | 2008-10-27T15:07:02Z | [
"python",
"list",
"nested-lists",
"mutable"
] | I needed to create a list of lists in Python, so I typed the following:
```
myList = [[1] * 4] * 3
```
The list looked like this:
```
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```
Then I changed one of the innermost values:
```
myList[0][0] = 5
```
Now my list looks like this:
```
[[5, 1, 1, 1], [5, 1, 1, 1], [5, 1, 1, 1]]
```
which is not what I wanted or expected. Can someone please explain what's going on, and how to get around it? | Actually, this is exactly what you would expect. Let's decompose what is happening here:
You write
```
lst = [[1] * 4] * 3
```
This is equivalent to:
```
lst1 = [1]*4
lst = [lst1]*3
```
This means `lst` is a list with 3 elements all pointing to `lst1`. This means the two following lines are equivalent:
```
lst[0][0] = 5
lst1[0] = 5
```
As `lst[0]` is nothing but `lst1`.
To obtain the desired behavior, you can use list comprehension:
```
lst = [ [1]*4 for n in xrange(3) ]
```
In this case, the expression is re-evaluated for each n, leading to a different list. |
Python list of lists, changes reflected across sublists unexpectedly | 240,178 | 227 | 2008-10-27T14:57:22Z | 18,454,568 | 56 | 2013-08-26T23:17:52Z | [
"python",
"list",
"nested-lists",
"mutable"
] | I needed to create a list of lists in Python, so I typed the following:
```
myList = [[1] * 4] * 3
```
The list looked like this:
```
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```
Then I changed one of the innermost values:
```
myList[0][0] = 5
```
Now my list looks like this:
```
[[5, 1, 1, 1], [5, 1, 1, 1], [5, 1, 1, 1]]
```
which is not what I wanted or expected. Can someone please explain what's going on, and how to get around it? | ```
size = 3
matrix_surprise = [[0] * size] * size
matrix = [[0]*size for i in range(size)]
```

[Live Python Tutor Visualize](http://pythontutor.com/visualize.html#code=size+%3D+3%0Amatrix_surprise+%3D+%5B%5B0%5D+*+size%5D+*+size%0Amatrix+%3D+%5B%5B0%5D*size+for+i+in+range(size)%5D&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=2&curInstr=6) |
Python lazy list | 241,141 | 14 | 2008-10-27T19:32:52Z | 242,111 | 7 | 2008-10-28T02:59:18Z | [
"python",
"lazylist"
] | I would like to create my own collection that has all the attributes of a Python list and also knows how to save/load itself into/from a database. Also, I want to make the load implicit and lazy: it doesn't happen at the point of creation of the list, but waits until the list is first used.
Is there a single `__xxx__` method I can override to load the list on first usage of any list property (such as `len`, `getitem`, `iter`, etc.) without having to override them all? | Not exactly. For emulating things *other* than lists, there's `__getattribute__`, but unfortunately Python doesn't consider operators like `x[y]` or `x(y)` to be *exactly* the same as `x.__getitem__(y)` or `x.__call__(y)`. Operators like that are attributes of the class, not attributes of the instance, as you can see here:
```
>>> class x(object):
... def __getattribute__(self, o):
... print o
...
>>> x()[3]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'x' object does not support indexing
```
However, you can take advantage of Python's dynamic nature to effectively eliminate that distinction. If your main concern is to save yourself typing, and to produce less code that needs maintaining, you can do something like this:
```
class override(object):
def __init__(self, methodName):
self.methodName = methodName
def __get__(self, oself, cls):
oself._load(self.methodName)
return getattr(super(oself.__class__, oself), self.methodName)
class LazyList(list):
def _load(self, name):
print 'Loading data for %s...' % (name,)
for methodName in set(dir(list)) - set(dir(object)):
locals()[methodName] = override(methodName)
```
You probably don't want to use `dir()` in real life, but a suitable fixed list of strings could work as a substitute. |
Python lazy list | 241,141 | 14 | 2008-10-27T19:32:52Z | 5,104,787 | 7 | 2011-02-24T12:39:42Z | [
"python",
"lazylist"
] | I would like to create my own collection that has all the attributes of a Python list and also knows how to save/load itself into/from a database. Also, I want to make the load implicit and lazy: it doesn't happen at the point of creation of the list, but waits until the list is first used.
Is there a single `__xxx__` method I can override to load the list on first usage of any list property (such as `len`, `getitem`, `iter`, etc.) without having to override them all? | Not a *single* one, but 5 are enough:
```
from collections import MutableSequence
class Monitored(MutableSequence):
    def __init__(self):
        super(Monitored, self).__init__()
        self._list = []
    def __len__(self):
        r = len(self._list)
        print "len: {0:d}".format(r)
        return r
    def __getitem__(self, index):
        r = self._list[index]
        print "getitem: {0!s}".format(index)
        return r
    def __setitem__(self, index, value):
        print "setitem {0!s}: {1:s}".format(index, repr(value))
        self._list[index] = value
    def __delitem__(self, index):
        print "delitem: {0!s}".format(index)
        del self._list[index]
    def insert(self, index, value):
        print "insert at {0:d}: {1:s}".format(index, repr(value))
        self._list.insert(index, value)
```
The correct way of checking if something implements the whole list interface is to check if it is a subclass of `MutableSequence`. The ABCs found in the `collections` module, of which `MutableSequence` is one, are there for two reasons:
1. to allow you to make your own classes emulating internal container types so that they are usable everywhere a normal built-in is.
2. to use as argument for `isinstance` and `issubclass` to verify that an object implements the necessary functionality:
```
>>> isinstance([], MutableSequence)
True
>>> issubclass(list, MutableSequence)
True
```
Our `Monitored` class works like this:
```
>>> m = Monitored()
>>> m.append(3)
len: 0
insert at 0: 3
>>> m.extend((1, 4))
len: 1
insert at 1: 1
len: 2
insert at 2: 4
>>> m._list
[3, 1, 4]
>>> m.remove(4)
getitem: 0
getitem: 1
getitem: 2
delitem: 2
>>> m.pop(0) # after this, m._list == [1]
getitem: 0
delitem: 0
3
>>> m.insert(0, 4)
insert at 0: 4
>>> m.reverse() # After reversing, m._list == [1, 4]
len: 2
getitem: 1
getitem: 0
setitem 0: 1
setitem 1: 4
>>> m.index(4)
getitem: 0
getitem: 1
1
``` |
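Combining this with the lazy-loading requirement from the question, here is a hedged sketch: the `loader` callable stands in for the real database fetch, and `collections.abc` is the modern import path for `MutableSequence`.

```python
from collections.abc import MutableSequence

class LazyList(MutableSequence):
    """Defers loading until first access; loader() is a stand-in for the DB."""
    def __init__(self, loader):
        self._loader = loader
        self._data = None
    def _materialize(self):
        # Every required method funnels through here, so the first use
        # of len(), indexing, iteration, etc. triggers the load.
        if self._data is None:
            self._data = list(self._loader())
    def __len__(self):
        self._materialize(); return len(self._data)
    def __getitem__(self, i):
        self._materialize(); return self._data[i]
    def __setitem__(self, i, v):
        self._materialize(); self._data[i] = v
    def __delitem__(self, i):
        self._materialize(); del self._data[i]
    def insert(self, i, v):
        self._materialize(); self._data.insert(i, v)

calls = []
def fake_db_load():  # hypothetical database fetch
    calls.append(1)
    return [1, 2, 3]

ll = LazyList(fake_db_load)
assert calls == []                    # nothing loaded yet
assert ll[0] == 1                     # first access triggers the load
assert len(ll) == 3 and calls == [1]  # loaded exactly once
```

`MutableSequence` then derives `append`, `extend`, `pop`, `remove`, `reverse`, iteration and the rest from these five methods.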
Single Table Inheritance in Django | 241,250 | 15 | 2008-10-27T20:18:08Z | 243,543 | 15 | 2008-10-28T14:29:20Z | [
"python",
"django",
"django-models",
"single-table-inheritance"
] | Is there explicit support for Single Table Inheritance in Django? Last I heard, the feature was still under development and debate.
Are there libraries/hacks I can use in the meantime to capture the basic behavior? I have a hierarchy that mixes different objects. The canonical example of a corporation structure with an Employee class, subclasses for types of employees, and a manager\_id (parent\_id) would be a good approximation of the problem I am solving.
In my case, I would like to represent the idea that an employee can manage other employees while being managed by a different employee. There are not separate classes for Manager and Worker, which makes this hard to spread across tables. Sub-classes would represent types of employees (programmers, accountants, sales, etc.) and would be independent of who supervises whom (OK, I guess it's no longer a typical corporation in some respect). | There are currently two forms of inheritance in Django - MTI (model table inheritance) and ABC (abstract base classes).
I wrote a [tutorial](http://web.archive.org/web/20090227074910/http://thisweekindjango.com/articles/2008/jun/17/abstract-base-classes-vs-model-tab/) on what's going on under the hood.
You can also reference the official docs on [model inheritance](http://docs.djangoproject.com/en/dev/topics/db/models/#model-inheritance). |
Single Table Inheritance in Django | 241,250 | 15 | 2008-10-27T20:18:08Z | 1,720,733 | 14 | 2009-11-12T08:25:21Z | [
"python",
"django",
"django-models",
"single-table-inheritance"
] | Is there explicit support for Single Table Inheritance in Django? Last I heard, the feature was still under development and debate.
Are there libraries/hacks I can use in the meantime to capture the basic behavior? I have a hierarchy that mixes different objects. The canonical example of a corporation structure with an Employee class, subclasses for types of employees, and a manager\_id (parent\_id) would be a good approximation of the problem I am solving.
In my case, I would like to represent the idea that an employee can manage other employees while being managed by a different employee. There are not separate classes for Manager and Worker, which makes this hard to spread across tables. Sub-classes would represent types of employees (programmers, accountants, sales, etc.) and would be independent of who supervises whom (OK, I guess it's no longer a typical corporation in some respect). | I think the OP is asking about Single-Table Inheritance as [defined here](http://www.martinfowler.com/eaaCatalog/singleTableInheritance.html):
> Relational databases don't support inheritance, so when mapping from objects to databases we have to consider how to represent our nice inheritance structures in relational tables. When mapping to a relational database, we try to minimize the joins that can quickly mount up when processing an inheritance structure in multiple tables. Single Table Inheritance maps all fields of all classes of an inheritance structure into a single table.
That is, a single database table for a whole hierarchy of entity classes. Django does not support that kind of inheritance. |
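To make the pattern concrete outside of any ORM, here is a plain-Python sketch of the idea; the rows, column names, and classes are all invented for illustration, and this is exactly the mapping Django would have to emulate (one table, a discriminator column, many classes):

```python
# One "table" for the whole Employee hierarchy, with a 'type' column
# acting as the discriminator.
ROWS = [
    {"id": 1, "type": "programmer", "name": "Ada", "manager_id": None},
    {"id": 2, "type": "accountant", "name": "Bob", "manager_id": 1},
]

class Employee:
    registry = {}
    def __init_subclass__(cls, **kw):
        super().__init_subclass__(**kw)
        Employee.registry[cls.__name__.lower()] = cls
    def __init__(self, row):
        self.__dict__.update(row)

class Programmer(Employee): pass
class Accountant(Employee): pass

def load(row):
    # The 'type' column picks which subclass to hydrate the row into
    return Employee.registry[row["type"]](row)

staff = [load(r) for r in ROWS]
assert type(staff[0]) is Programmer
assert staff[1].manager_id == 1  # Bob is managed by Ada
```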
Python snippet to remove C and C++ comments | 241,327 | 32 | 2008-10-27T20:47:24Z | 241,329 | 19 | 2008-10-27T20:48:57Z | [
"c++",
"python",
"c",
"regex",
"comments"
] | I'm looking for Python code that removes C and C++ comments from a string. (Assume the string contains an entire C source file.)
I realize that I could .match() substrings with a Regex, but that doesn't solve nesting `/*`, or having a `//` inside a `/* */`.
Ideally, I would prefer a non-naive implementation that properly handles awkward cases. | C (and C++) comments cannot be nested. Regular expressions work well:
```
//.*?\n|/\*.*?\*/
```
This requires the "single line" flag (`re.S`) because a C comment can span multiple lines.
```
def stripcomments(text):
    return re.sub(r'//.*?\n|/\*.*?\*/', '', text, flags=re.S)
```
This code should work.
/EDIT: Notice that my above code actually makes an assumption about line endings! This code won't work on a Mac text file. However, this can be amended relatively easily:
```
//.*?(\r\n?|\n)|/\*.*?\*/
```
This regular expression should work on all text files, regardless of their line endings (covers Windows, Unix and Mac line endings).
/EDIT: MizardX and Brian (in the comments) made a valid remark about the handling of strings. I completely forgot about that because the above regex is plucked from a parsing module that has additional handling for strings. MizardX's solution should work very well but it only handles double-quoted strings. |
Python snippet to remove C and C++ comments | 241,327 | 32 | 2008-10-27T20:47:24Z | 241,506 | 54 | 2008-10-27T21:48:07Z | [
"c++",
"python",
"c",
"regex",
"comments"
] | I'm looking for Python code that removes C and C++ comments from a string. (Assume the string contains an entire C source file.)
I realize that I could .match() substrings with a Regex, but that doesn't solve nesting `/*`, or having a `//` inside a `/* */`.
Ideally, I would prefer a non-naive implementation that properly handles awkward cases. | This handles C++-style comments, C-style comments, strings and simple nesting thereof.
```
def comment_remover(text):
def replacer(match):
s = match.group(0)
if s.startswith('/'):
return " " # note: a space and not an empty string
else:
return s
pattern = re.compile(
r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
re.DOTALL | re.MULTILINE
)
return re.sub(pattern, replacer, text)
```
Strings need to be included, because comment markers inside them do not start a comment.
**Edit:** re.sub didn't take any flags, so had to compile the pattern first.
**Edit2:** Added character literals, since they could contain quotes that would otherwise be recognized as string delimiters.
**Edit3:** Fixed the case where a legal expression `int/**/x=5;` would become `intx=5;` which would not compile, by replacing the comment with a space rather than an empty string.
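A quick self-contained check of the edge cases those edits describe (the function is repeated here so the snippet runs on its own):

```python
import re

def comment_remover(text):
    def replacer(match):
        s = match.group(0)
        if s.startswith('/'):
            return " "  # a space, so `int/**/x` stays two tokens
        return s  # string/char literals pass through untouched
    pattern = re.compile(
        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
        re.DOTALL | re.MULTILINE
    )
    return re.sub(pattern, replacer, text)

assert comment_remover('int/**/x=5;') == 'int x=5;'
assert comment_remover('s = "// not a comment";') == 's = "// not a comment";'
assert comment_remover('a/*multi\nline*/b') == 'a b'
```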
Python snippet to remove C and C++ comments | 241,327 | 32 | 2008-10-27T20:47:24Z | 242,226 | 9 | 2008-10-28T04:03:20Z | [
"c++",
"python",
"c",
"regex",
"comments"
] | I'm looking for Python code that removes C and C++ comments from a string. (Assume the string contains an entire C source file.)
I realize that I could .match() substrings with a Regex, but that doesn't solve nesting `/*`, or having a `//` inside a `/* */`.
Ideally, I would prefer a non-naive implementation that properly handles awkward cases. | I don't know if you're familiar with `sed`, the UNIX-based (but Windows-available) text parsing program, but I've found a sed script [here](http://sed.sourceforge.net/grabbag/scripts/remccoms3.sed) which will remove C/C++ comments from a file. It's very smart; for example, it will ignore '//' and '/\*' if found in a string declaration, etc. From within Python, it can be used using the following code:
```
import subprocess

# source_code is a string with the source code; sed needs -f to read
# its script from a file, and pipes replace the StringIO objects.
process = subprocess.Popen(['sed', '-f', '/path/to/remccoms3.sed'],
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stripped_code, _ = process.communicate(source_code)
return_code = process.returncode
```
In this program, `source_code` is the variable holding the C/C++ source code, and eventually `stripped_code` will hold the C/C++ code with the comments removed. Of course, if you have the files on disk, you could pass open file handles as `stdin` and `stdout` instead of pipes (`stdin` in read mode, `stdout` in write mode). `remccoms3.sed` is the file from the above link, and it should be saved in a readable location on disk. `sed` is also available on Windows, and comes installed by default on most GNU/Linux distros and Mac OS X.
This will probably be better than a pure Python solution; no need to reinvent the wheel. |
SUDS - programmatic access to methods and types | 241,892 | 18 | 2008-10-28T00:52:41Z | 1,842,812 | 16 | 2009-12-03T20:46:55Z | [
"python",
"suds"
] | I'm investigating SUDS as a SOAP client for python. I want to inspect the methods available from a specified service, and the types required by a specified method.
The aim is to generate a user interface, allowing users to select a method, then fill in values in a dynamically generated form.
I can get some information on a particular method, but am unsure how to parse it:
```
client = Client(url)
method = client.sd.service.methods['MyMethod']
```
I am unable to **programmatically** figure out what object type I need to create to be able to call the service
```
obj = client.factory.create('?')
res = client.service.MyMethod(obj, soapheaders=authen)
```
Does anyone have some sample code? | According to `suds` [documentation](https://fedorahosted.org/suds/wiki/Documentation#BASICUSAGE), you can inspect the `service` object with `__str__()`. So the following gets a list of methods and complex types:
```
from suds.client import Client
url = 'http://www.webservicex.net/WeatherForecast.asmx?WSDL'
client = Client(url)
temp = str(client)
```
The code above produces following result (contents of `temp`):
```
Suds ( https://fedorahosted.org/suds/ ) version: 0.3.4 (beta) build: R418-20081208
Service ( WeatherForecast ) tns="http://www.webservicex.net"
Prefixes (1)
ns0 = "http://www.webservicex.net"
Ports (2):
(WeatherForecastSoap)
Methods (2):
GetWeatherByPlaceName(xs:string PlaceName, )
GetWeatherByZipCode(xs:string ZipCode, )
Types (3):
ArrayOfWeatherData
WeatherData
WeatherForecasts
(WeatherForecastSoap12)
Methods (2):
GetWeatherByPlaceName(xs:string PlaceName, )
GetWeatherByZipCode(xs:string ZipCode, )
Types (3):
ArrayOfWeatherData
WeatherData
WeatherForecasts
```
This would be much easier to parse. Also every method is listed with their parameters along with their types. You could, probably, even use just regular expression to extract information you need. |
SUDS - programmatic access to methods and types | 241,892 | 18 | 2008-10-28T00:52:41Z | 1,858,144 | 22 | 2009-12-07T06:12:25Z | [
"python",
"suds"
] | I'm investigating SUDS as a SOAP client for python. I want to inspect the methods available from a specified service, and the types required by a specified method.
The aim is to generate a user interface, allowing users to select a method, then fill in values in a dynamically generated form.
I can get some information on a particular method, but am unsure how to parse it:
```
client = Client(url)
method = client.sd.service.methods['MyMethod']
```
I am unable to **programmatically** figure out what object type I need to create to be able to call the service
```
obj = client.factory.create('?')
res = client.service.MyMethod(obj, soapheaders=authen)
```
Does anyone have some sample code? | Okay, so SUDS does quite a bit of magic.
A `suds.client.Client` is built from a WSDL file:
```
client = suds.client.Client("http://mssoapinterop.org/asmx/simple.asmx?WSDL")
```
It downloads the WSDL and creates a definition in `client.wsdl`. When you call a method using SUDS via `client.service.<method>` it's actually doing a whole lot of recursive resolve magic behind the scenes against that interpreted WSDL. To discover the parameters and types for methods you'll need to introspect this object.
For example:
```
for method in client.wsdl.services[0].ports[0].methods.values():
print '%s(%s)' % (method.name, ', '.join('%s: %s' % (part.type, part.name) for part in method.soap.input.body.parts))
```
This should print something like:
```
echoInteger((u'int', http://www.w3.org/2001/XMLSchema): inputInteger)
echoFloatArray((u'ArrayOfFloat', http://soapinterop.org/): inputFloatArray)
echoVoid()
echoDecimal((u'decimal', http://www.w3.org/2001/XMLSchema): inputDecimal)
echoStructArray((u'ArrayOfSOAPStruct', http://soapinterop.org/xsd): inputStructArray)
echoIntegerArray((u'ArrayOfInt', http://soapinterop.org/): inputIntegerArray)
echoBase64((u'base64Binary', http://www.w3.org/2001/XMLSchema): inputBase64)
echoHexBinary((u'hexBinary', http://www.w3.org/2001/XMLSchema): inputHexBinary)
echoBoolean((u'boolean', http://www.w3.org/2001/XMLSchema): inputBoolean)
echoStringArray((u'ArrayOfString', http://soapinterop.org/): inputStringArray)
echoStruct((u'SOAPStruct', http://soapinterop.org/xsd): inputStruct)
echoDate((u'dateTime', http://www.w3.org/2001/XMLSchema): inputDate)
echoFloat((u'float', http://www.w3.org/2001/XMLSchema): inputFloat)
echoString((u'string', http://www.w3.org/2001/XMLSchema): inputString)
```
So the first element of the part's type tuple is probably what you're after:
```
>>> client.factory.create(u'ArrayOfInt')
(ArrayOfInt){
   _arrayType = ""
   _offset = ""
   _id = ""
   _href = ""
   _arrayType = ""
}
```
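The introspection above can be bundled into a small helper. This is only a sketch based on the object layout shown here: `method_parts` is a name I'm inventing, and it assumes nothing beyond the duck-typed `services`/`ports`/`methods` structure used in the examples.

```python
def method_parts(wsdl, name):
    # Walk services -> ports -> methods looking for `name`,
    # and return its (param_name, param_type) pairs.
    for service in wsdl.services:
        for port in service.ports:
            method = port.methods.get(name)
            if method is not None:
                return [(part.name, part.type)
                        for part in method.soap.input.body.parts]
    raise KeyError(name)
```

With the simple.asmx client above, something like `method_parts(client.wsdl, 'echoInteger')` would then return the part names and types for that method.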
Update:
For the Weather service it appears that the "parameters" are a part with an `element`, not a `type`:
```
>>> client = suds.client.Client('http://www.webservicex.net/WeatherForecast.asmx?WSDL')
>>> client.wsdl.services[0].ports[0].methods.values()[0].soap.input.body.parts[0].element
(u'GetWeatherByZipCode', http://www.webservicex.net)
>>> client.factory.create(u'GetWeatherByZipCode')
(GetWeatherByZipCode){
   ZipCode = None
}
```
But this is magic'd into the parameters of the method call (a la `client.service.GetWeatherByZipCode("12345")`). IIRC this is SOAP RPC binding style? I think there's enough information here to get you started. Hint: the Python command line interface is your friend! |
SUDS - programmatic access to methods and types | 241,892 | 18 | 2008-10-28T00:52:41Z | 16,616,472 | 7 | 2013-05-17T19:25:56Z | [
"python",
"suds"
] | I'm investigating SUDS as a SOAP client for python. I want to inspect the methods available from a specified service, and the types required by a specified method.
The aim is to generate a user interface, allowing users to select a method, then fill in values in a dynamically generated form.
I can get some information on a particular method, but am unsure how to parse it:
```
client = Client(url)
method = client.sd.service.methods['MyMethod']
```
I am unable to **programmatically** figure out what object type I need to create to be able to call the service
```
obj = client.factory.create('?')
res = client.service.MyMethod(obj, soapheaders=authen)
```
Does anyone have some sample code? | Here's a quick script I wrote based on the above information to list the input methods suds reports as available on a WSDL. Pass in the WSDL URL. It works for the project I'm currently on; I can't guarantee it for yours.
```
import suds

def list_all(url):
    client = suds.client.Client(url)
    for service in client.wsdl.services:
        for port in service.ports:
            methods = port.methods.values()
            for method in methods:
                print(method.name)
                for part in method.soap.input.body.parts:
                    part_type = part.type
                    if not part_type:
                        part_type = part.element[0]
                    print(' ' + str(part.name) + ': ' + str(part_type))
                    o = client.factory.create(part_type)
                    print(' ' + str(o))
``` |
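The `part.type` / `part.element[0]` fallback is the heart of the script. Pulled out as a standalone function (a sketch: the real `part` objects come from suds, so this assumes only the two attributes used above):

```python
def resolve_part_type(part):
    # Prefer the declared type; when it's missing (element-style parts,
    # as in the weather-service example), fall back to the element name.
    if part.type:
        return part.type
    return part.element[0]
```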
OpenGl with Python | 242,059 | 15 | 2008-10-28T02:29:24Z | 246,894 | 16 | 2008-10-29T13:58:22Z | [
"python",
"opengl",
"fedora"
] | I am currently in a course that is using OpenGL and I have been using C for all the programs so far. I have Python installed on Fedora as well as OpenGL; however, the minute I call an OpenGL command in my Python code, I get a **segmentation fault**. I have no idea why this is.
Just to avoid the "just use C" comments, here is why I want to use Python:
There are a couple of reasons I want to switch from C to Python, but the main one is that we are about to start writing a raytracer and I would like to use classes to make it easier on me. Since I hate classes in C++ and structs in C seem a little crazy, I thought I would give Python a try. I have also been looking for a reason to use Python again, as it has been a while.
Thanks for any help. | You may also want to consider using [Pyglet](http://www.pyglet.org/) instead of PyOpenGL. It's a ctypes wrapper around the native OpenGL libs on the local platform, along with windowing support (should handle most of the stuff you want to use GLUT for). The [pyglet-users](http://groups.google.com/group/pyglet-users) list is pretty active and very helpful.
Starting python debugger automatically on error | 242,485 | 106 | 2008-10-28T07:37:15Z | 242,506 | 26 | 2008-10-28T07:51:45Z | [
"python",
"debugging"
] | This is a question I have wondered about for quite some time, yet I have never found a suitable solution. If I run a script and I come across, let's say an IndexError, python prints the line, location and quick description of the error and exits. Is it possible to automatically start pdb when an error is encountered? I am not against having an extra import statement at the top of the file, nor a few extra lines of code. | This isn't the debugger, but probably just as useful(?)
I know I heard Guido mention this in a speech somewhere.
I just checked `python -?`, and if you use the `-i` option you can interact where your script stopped.
So given this script:
```
testlist = [1,2,3,4,5, 0]
prev_i = None
for i in testlist:
    if not prev_i:
        prev_i = i
    else:
        result = prev_i/i
```
You can get this output!
```
PS D:\> python -i debugtest.py
Traceback (most recent call last):
  File "debugtest.py", line 10, in <module>
    result = prev_i/i
ZeroDivisionError: integer division or modulo by zero
>>>
>>>
>>> prev_i
1
>>> i
0
>>>
```
To be honest I haven't used this, but I should; it seems very useful. |
Starting python debugger automatically on error | 242,485 | 106 | 2008-10-28T07:37:15Z | 242,514 | 64 | 2008-10-28T07:56:24Z | [
"python",
"debugging"
] | This is a question I have wondered about for quite some time, yet I have never found a suitable solution. If I run a script and I come across, let's say an IndexError, python prints the line, location and quick description of the error and exits. Is it possible to automatically start pdb when an error is encountered? I am not against having an extra import statement at the top of the file, nor a few extra lines of code. | You can use [traceback.print\_exc](http://docs.python.org/library/traceback.html#traceback.print_exc) to print the exception's traceback. Then use [sys.exc\_info](http://docs.python.org/library/sys#sys.exc_info) to extract the traceback, and finally call [pdb.post\_mortem](http://docs.python.org/library/pdb#pdb.post_mortem) with that traceback.
```
import pdb, traceback, sys

def bombs():
    a = []
    print a[0]

if __name__ == '__main__':
    try:
        bombs()
    except:
        type, value, tb = sys.exc_info()
        traceback.print_exc()
        pdb.post_mortem(tb)
```
If you want to start an interactive command line with [code.interact](http://docs.python.org/library/code#code.interact) using the locals of the frame where the exception originated, you can do:
```
import traceback, sys, code

def bombs():
    a = []
    print a[0]

if __name__ == '__main__':
    try:
        bombs()
    except:
        type, value, tb = sys.exc_info()
        traceback.print_exc()
        last_frame = lambda tb=tb: last_frame(tb.tb_next) if tb.tb_next else tb
        frame = last_frame().tb_frame
        ns = dict(frame.f_globals)
        ns.update(frame.f_locals)
        code.interact(local=ns)
``` |
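The recursive `last_frame` lambda above just walks to the innermost traceback entry; an equivalent plain loop (a sketch with the same effect, using a hypothetical `innermost_frame` name) may be easier to read:

```python
import sys

def innermost_frame(tb):
    # Follow tb_next to the last traceback entry, i.e. the frame
    # where the exception actually originated.
    while tb.tb_next is not None:
        tb = tb.tb_next
    return tb.tb_frame

def bombs():
    a = []
    return a[0]

try:
    bombs()
except IndexError:
    tb = sys.exc_info()[2]
    print(innermost_frame(tb).f_code.co_name)  # prints: bombs
```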
Starting python debugger automatically on error | 242,485 | 106 | 2008-10-28T07:37:15Z | 242,531 | 43 | 2008-10-28T08:14:56Z | [
"python",
"debugging"
] | This is a question I have wondered about for quite some time, yet I have never found a suitable solution. If I run a script and I come across, let's say an IndexError, python prints the line, location and quick description of the error and exits. Is it possible to automatically start pdb when an error is encountered? I am not against having an extra import statement at the top of the file, nor a few extra lines of code. | Use the following module:
```
import sys

def info(type, value, tb):
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        # we are in interactive mode or we don't have a tty-like
        # device, so we call the default hook
        sys.__excepthook__(type, value, tb)
    else:
        import traceback, pdb
        # we are NOT in interactive mode, print the exception…
        traceback.print_exception(type, value, tb)
        print
        # …then start the debugger in post-mortem mode.
        # pdb.pm() # deprecated
        pdb.post_mortem(tb)  # more “modern”

sys.excepthook = info
```
Name it `debug` (or whatever you like) and put it somewhere in your python path.
Now, at the start of your script, just add an `import debug`. |
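The `isatty()` guard matters: without it, a script run from cron or a pipeline would block waiting for debugger input. A tiny sketch of the same condition on its own (the `would_debug` name is mine, not part of the module above):

```python
import sys

def would_debug():
    # Same condition as the `info` hook: drop into pdb only when we're
    # not in an interactive session and stderr is a real terminal.
    return not hasattr(sys, 'ps1') and sys.stderr.isatty()

print(would_debug())
```

Run from a terminal this should print `True`; with stderr redirected (e.g. `2>&1 | cat`) it should print `False`.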
Starting python debugger automatically on error | 242,485 | 106 | 2008-10-28T07:37:15Z | 2,438,834 | 170 | 2010-03-13T15:20:52Z | [
"python",
"debugging"
] | This is a question I have wondered about for quite some time, yet I have never found a suitable solution. If I run a script and I come across, let's say an IndexError, python prints the line, location and quick description of the error and exits. Is it possible to automatically start pdb when an error is encountered? I am not against having an extra import statement at the top of the file, nor a few extra lines of code. | ```
python -m pdb myscript.py
```
You'll need to enter 'c' (for Continue) when execution begins, but then it will run to the error point and give you control there. |
Starting python debugger automatically on error | 242,485 | 106 | 2008-10-28T07:37:15Z | 13,003,378 | 16 | 2012-10-21T23:49:51Z | [
"python",
"debugging"
] | This is a question I have wondered about for quite some time, yet I have never found a suitable solution. If I run a script and I come across, let's say an IndexError, python prints the line, location and quick description of the error and exits. Is it possible to automatically start pdb when an error is encountered? I am not against having an extra import statement at the top of the file, nor a few extra lines of code. | **IPython** has a command for toggling this behavior: **%pdb**. It does exactly what you described, maybe even a bit more (giving you more informative backtraces with syntax highlighting and code completion). It's definitely worth a try!
How to set the PYTHONPATH in Emacs? | 243,060 | 14 | 2008-10-28T12:17:35Z | 243,239 | 21 | 2008-10-28T13:17:01Z | [
"python",
"emacs",
"environment-variables"
] | Emacs does not recognize my correct python path. I think it is a general problem with emacs not recognizing my environment variables. I have GNU Emacs 22.1.1 (i386-apple-darwin8.9.1, Carbon Version 1.6.0) of 2007-06-17 installed.
I have set the PYTHONPATH in my ~/.bashrc. Maybe I should set it somewhere else? | `.bashrc` only gets read when a shell starts; it won't affect Carbon Emacs. Instead, use `setenv` in your `.emacs`:
```
(setenv "PYTHONPATH" "PATH_STRING_HERE")
```
You can set `PYTHONPATH` for the entire Mac OS session by adding it to `~/.MacOSX/environment.plist` (more [here](http://developer.apple.com/documentation/MacOSX/Conceptual/BPRuntimeConfig/Articles/EnvironmentVars.html#//apple_ref/doc/uid/20002093-113982)). You probably don't want to do this unless you have Xcode (and its property list editor) installed.
([Via Procrastiblog](http://procrastiblog.com/2007/07/09/changing-your-path-in-emacs-compilation-mode/)) |