title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Cycle a list from alternating sides | 36,533,553 | 27 | 2016-04-10T18:11:13Z | 36,533,770 | 8 | 2016-04-10T18:26:55Z | [
"python",
"algorithm",
"list",
"iteration"
] | Given a list
```
a = [0,1,2,3,4,5,6,7,8,9]
```
how can I get
```
b = [0,9,1,8,2,7,3,6,4,5]
```
That is, produce a new list in which each successive element is alternately taken from the two sides of the original list? | A very nice one-liner in Python 2.7:
```
results = list(sum(zip(a, reversed(a))[:len(a)/2], ()))
# [0, 9, 1, 8, 2, 7, 3, 6, 4, 5]
```
First you zip the list with its reverse, take *half* that list, sum the tuples to form one tuple, and *then* convert to list.
In Python 3, `zip` returns an iterator, so you have to use `islice` from `itertools`:
```
from itertools import islice
results = list(sum(islice(zip(a, reversed(a)),0,int(len(a)/2)),()))
```
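The intermediate steps of the one-liner can also be inspected directly (a Python 3 sketch of the same idea, where `zip` has to be materialized with `list`):

```python
a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

pairs = list(zip(a, reversed(a)))  # [(0, 9), (1, 8), ..., (9, 0)]
half = pairs[:len(a) // 2]         # keep only the first five pairs
flat = sum(half, ())               # tuple concatenation into one flat tuple
results = list(flat)
print(results)  # [0, 9, 1, 8, 2, 7, 3, 6, 4, 5]
```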
*Edit*: It appears this only works perfectly for even list lengths - odd list lengths will omit the middle element :( Changing `int(len(a)/2)` to `int(len(a)/2) + 1` includes it, but gives you a duplicate middle value, so be warned. |
Cycle a list from alternating sides | 36,533,553 | 27 | 2016-04-10T18:11:13Z | 36,533,868 | 50 | 2016-04-10T18:35:17Z | [
"python",
"algorithm",
"list",
"iteration"
] | Given a list
```
a = [0,1,2,3,4,5,6,7,8,9]
```
how can I get
```
b = [0,9,1,8,2,7,3,6,4,5]
```
That is, produce a new list in which each successive element is alternately taken from the two sides of the original list? | ```
>>> [a[-i//2] if i % 2 else a[i//2] for i in range(len(a))]
[0, 9, 1, 8, 2, 7, 3, 6, 4, 5]
```
*Explanation:*
This code picks numbers from the beginning (`a[i//2]`) and from the end (`a[-i//2]`) of `a`, alternatingly (`if i%2 else`). A total of `len(a)` numbers are picked, so this produces no ill effects even if `len(a)` is odd.
`[-i//2 for i in range(len(a))]` yields `0, -1, -1, -2, -2, -3, -3, -4, -4, -5`,
`[ i//2 for i in range(len(a))]` yields `0, 0, 1, 1, 2, 2, 3, 3, 4, 4`,
and `i%2` alternates between `False` and `True`,
so the indices we extract from `a` are: `0, -1, 1, -2, 2, -3, 3, -4, 4, -5`.
*My assessment of pythonicness:*
The nice thing about this one-liner is that it's short and shows symmetry (`+i//2` and `-i//2`).
**The bad thing, though, is that this symmetry is deceptive:**
One might think that `-i//2` were the same as `i//2` with the sign flipped. [But in Python, integer division returns the floor](http://python-history.blogspot.de/2010/08/why-pythons-integer-division-floors.html) of the result instead of truncating towards zero. So `-1//2 == -1`.
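A quick check of flooring versus truncation:

```python
import math

print(-1 // 2)             # -1: floor division rounds toward negative infinity
print(math.trunc(-1 / 2))  # 0: C-style truncation rounds toward zero
print(-(1 // 2))           # 0: so -(i//2) is not the same as (-i)//2
```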
Also, I find accessing list elements by index less pythonic than iteration. |
Variable assignment faster than one liner | 36,548,518 | 63 | 2016-04-11T12:19:24Z | 36,549,633 | 87 | 2016-04-11T13:08:10Z | [
"python",
"python-3.x",
"cpython",
"python-internals"
] | I have encountered this weird behavior and failed to explain it. These are the benchmarks:
```
py -3 -m timeit "tuple(range(2000)) == tuple(range(2000))"
10000 loops, best of 3: 97.7 usec per loop
py -3 -m timeit "a = tuple(range(2000)); b = tuple(range(2000)); a==b"
10000 loops, best of 3: 70.7 usec per loop
```
How come comparison with variable assignment is faster than using a one liner with temporary variables by more than 27%?
By the Python docs, garbage collection is disabled during timeit so it can't be that. Is it some sort of an optimization?
The results may also be reproduced in Python 2.x though to lesser extent.
Running Windows 7, CPython 3.5.1, Intel i7 3.40 GHz, 64 bit both OS and Python. Seems like a different machine I've tried running at Intel i7 3.60 GHz with Python 3.5.0 does not reproduce the results.
---
Running using the same Python process with `timeit.timeit()` @ 10000 loops produced 0.703 and 0.804 respectively. Still shows although to lesser extent. (~12.5%) | My results were similar to yours: the code using variables was pretty consistently 10-20 % faster. However when I used IPython on the very same Python 3.4, I got these results:
```
In [1]: %timeit -n10000 -r20 tuple(range(2000)) == tuple(range(2000))
10000 loops, best of 20: 74.2 µs per loop
In [2]: %timeit -n10000 -r20 a = tuple(range(2000)); b = tuple(range(2000)); a==b
10000 loops, best of 20: 75.7 µs per loop
```
Notably, I never managed to get even close to the 74.2 µs for the former when I used `-mtimeit` from the command line.
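For reference, the same gap can be measured from a plain script with `timeit.timeit` (a sketch; the absolute numbers are machine-dependent, so only the relative difference is meaningful):

```python
import timeit

n = 1000
t_oneliner = timeit.timeit(
    "tuple(range(2000)) == tuple(range(2000))", number=n)
t_withvars = timeit.timeit(
    "a = tuple(range(2000)); b = tuple(range(2000)); a == b", number=n)
print(t_oneliner / n, t_withvars / n)  # per-loop times in seconds
```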
---
So I decided to run the command with `strace` and indeed there is something fishy going on:
```
% strace -o withoutvars python3 -m timeit "tuple(range(2000)) == tuple(range(2000))"
10000 loops, best of 3: 134 usec per loop
% strace -o withvars python3 -mtimeit "a = tuple(range(2000)); b = tuple(range(2000)); a==b"
10000 loops, best of 3: 75.8 usec per loop
% grep mmap withvars|wc -l
46
% grep mmap withoutvars|wc -l
41149
```
Now that is a good reason for the difference. The code that does not use variables causes the `mmap` system call to be invoked almost 1000x more often than the code that uses intermediate variables.
The `withoutvars` is full of `mmap`/`munmap` for a 256k region; these same lines are repeated over and over again:
```
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000
munmap(0x7f32e56de000, 262144) = 0
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000
munmap(0x7f32e56de000, 262144) = 0
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000
munmap(0x7f32e56de000, 262144) = 0
```
---
The `mmap` call seems to be coming from the function `_PyObject_ArenaMmap` from `Objects/obmalloc.c`; the `obmalloc.c` also contains the macro `ARENA_SIZE`, which is `#define`d to be `(256 << 10)` (that is `262144`); similarly the `munmap` matches the `_PyObject_ArenaMunmap` from `obmalloc.c`.
`obmalloc.c` says that
> Prior to Python 2.5, arenas were never `free()`'ed. Starting with Python 2.5,
> we do try to `free()` arenas, and use some mild heuristic strategies to increase
> the likelihood that arenas eventually can be freed.
Thus these heuristics and the fact that Python object allocator releases these free arenas as soon as they're emptied lead to `python3 -mtimeit 'tuple(range(2000)) == tuple(range(2000))'` triggering pathological behaviour where one 256 kiB memory area is re-allocated and released repeatedly; and this allocation happens with `mmap`/`munmap`, which is comparatively costly as they're system calls - furthermore, `mmap` with `MAP_ANONYMOUS` requires that the newly mapped pages must be zeroed - even though Python wouldn't care.
The behaviour does not seem to be present with the code that uses an intermediate variable, possibly because it is using slightly *more* memory and never actually freeing all the objects from the last memory arena. Most notably it cannot be guaranteed that the code using intermediate variables is always faster - indeed in some setups it might be that using intermediate variables will result in extra `mmap` calls, whereas the code that compares return values directly might be fine. |
Assigning string with boolean expression | 36,550,588 | 4 | 2016-04-11T13:46:24Z | 36,551,857 | 7 | 2016-04-11T14:41:09Z | [
"python",
"python-2.7"
] | I am trying to understand this code from someone else's project. If you want the context it's here: <https://github.com/newsapps/beeswithmachineguns/blob/master/beeswithmachineguns/bees.py#L501>
`IS_PY2` is just a boolean variable, `True` if the Python major version is 2.
I know that a non-empty string is `True`, but for some reason I don't understand `openmode` is assigned either `'w'` or `'wt'` rather than `True` or `False`.
```
openmode = IS_PY2 and 'w' or 'wt'
openkwargs = IS_PY2 and {} or {'encoding': 'utf-8', 'newline': ''}
```
Could someone explain the result? | The `and` and `or` operators *don't* simply perform a boolean operation on their operands, giving a boolean result. The result they give is *always* one of their operands. These operators evaluate from left to right, and they short-circuit, meaning that they stop evaluating their operands as soon as possible.
In pure boolean logic, `False and x` is `False`, no matter what `x` is, so there's no need to examine `x`. The Python expression `False and x` will give a result of `False` and it will not attempt to evaluate `x`. Thus `False and some_function()` will *not* call `some_function()`.
Similarly, `True and x` in pure boolean logic will have the same truth value as `x`, i.e., if `x` is `True` then `True and x` is `True`, otherwise it's `False`.
But the Python `and` operator can handle arbitrary operands.
In `a and b` if `a` is false-ish, then `b` won't be evaluated and the result will be `a`. If `a` is true-ish, then `b` *will* be evaluated and become the result.
Here's a short demo, using Python 2:
```
print False and 'boolean'
print 0 and 'integer'
print '' and 'string'
print [] and 'list'
print
print True and 'boolean'
print 7 and 'integer'
print 'a' and 'string'
print [42] and 'list'
print
print True and False
print True and 0
print True and ''
print True and []
print
```
**output**
```
False
0
[]
boolean
integer
string
list
False
0
[]
```
(Those blank lines between `0` and `[]` are where the empty string is getting printed).
Similar considerations apply to the `or` operator.
In pure boolean logic, `True or x` is `True`, no matter what `x` is so if the first part of an `or` expression is True-ish we don't need to evaluate the second part. And `False or x` has the truth value of `x`.
```
print False or 'boolean'
print 0 or 'integer'
print '' or 'string'
print [] or 'list'
print
print True or 'boolean'
print 7 or 'integer'
print 'a' or 'string'
print [42] or 'list'
print
print False or False
print False or 0
print False or ''
print False or []
print
```
**output**
```
boolean
integer
string
list
True
7
a
[42]
False
0
[]
```
As I said earlier, these operators are evaluated left to right, and we can chain them if we want. Here are the "classic" cases:
```
print True and 'yes' or 'no'
print False and 'yes' or 'no'
```
Those statements are equivalent to
```
print (True and 'yes') or 'no'
print (False and 'yes') or 'no'
```
**output**
```
yes
no
```
That construction was common in early versions of Python. These days, it's far more common to see an *`if` expression*:
```
print 'yes' if True else 'no'
print 'yes' if False else 'no'
```
Which is generally considered to be more readable than the ternary expression using `and` and `or`. Also, `a and b or c` is *not* equivalent to `b if a else c` if `b` is false-ish.
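A minimal demonstration of that falsy-`b` pitfall:

```python
a, b, c = True, '', 'fallback'  # b is false-ish

print(repr(a and b or c))   # 'fallback': the and/or idiom skips b entirely
print(repr(b if a else c))  # '': the conditional expression returns b as intended
```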
However, it's still important to understand how this ternary `and ... or` thing works, especially if you need to read or maintain older code. And some old Pythonistas still prefer the `and ... or` form, as it's slightly shorter even if it is a little bewildering when you don't understand how it works. :) |
Making len() work with instance methods | 36,557,079 | 17 | 2016-04-11T19:01:52Z | 36,557,192 | 26 | 2016-04-11T19:07:06Z | [
"python",
"instance-methods"
] | Is there a way to make `len()` work with instance methods without modifying the class?
Example of my problem:
```
>>> class A(object):
... pass
...
>>> a = A()
>>> a.__len__ = lambda: 2
>>> a.__len__()
2
>>> len(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'A' has no len()
```
Note:
* different instances of `A` will have different `__len__` methods attached
* I cannot change the class `A` | No. Python always looks up special methods through the object's class. There are several good reasons for this, one being that `repr(A)` should use `type(A).__repr__` instead of `A.__repr__`, which is intended to handle instances of `A` instead of the `A` class itself.
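A small illustration of that class-level lookup (the class name here is illustrative):

```python
class A(object):
    def __repr__(self):
        return "an A instance"

print(repr(A()))  # "an A instance": repr of an instance uses type(instance).__repr__
print(repr(A))    # "<class '...A'>": repr of the class goes through type(A), the metaclass
```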
If you want different instances of `A` to compute their `len` differently, consider having `__len__` delegate to another method:
```
class A(object):
    def __len__(self):
        return self._len()

a = A()
a._len = lambda: 2
``` |
Making len() work with instance methods | 36,557,079 | 17 | 2016-04-11T19:01:52Z | 36,557,277 | 10 | 2016-04-11T19:12:20Z | [
"python",
"instance-methods"
] | Is there a way to make `len()` work with instance methods without modifying the class?
Example of my problem:
```
>>> class A(object):
... pass
...
>>> a = A()
>>> a.__len__ = lambda: 2
>>> a.__len__()
2
>>> len(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'A' has no len()
```
Note:
* different instances of `A` will have different `__len__` methods attached
* I cannot change the class `A` | Special methods such as `__len__` (double-underscore or "dunder" methods) must be defined on the class. They won't work if only defined on the instance.
It is possible to define non-dunder methods on an instance. However, you must convert your function to an instance method by adding a wrapper to it, which is how `self` gets passed in. (This would normally be done when accessing the method, as a method defined on the class is a descriptor that returns a wrapper.) This can be done as follows:
```
a.len = (lambda self: 2).__get__(a, type(a))
```
Combining these ideas, we can write a `__len__()` on the class that delegates to a `len()` that we can define on the instance:
```
class A(object):
    def __len__(self):
        return self.len()
a = A()
a.len = (lambda self: 2).__get__(a, type(a))
print(len(a)) # prints 2
```
You can actually simplify this in your case because you don't need `self` in order to return your constant `2`. So you can just assign `a.len = lambda: 2`. However, if you need `self`, then you need to make the method wrapper. |
Why does dict.get(key) run slower than dict[key] | 36,566,331 | 4 | 2016-04-12T07:24:26Z | 36,566,435 | 7 | 2016-04-12T07:29:13Z | [
"python",
"performance",
"dictionary"
] | While running a numerical integrator, I noticed a noticeable difference in speed depending on how I extract the value of the field in a dictionary
```
import numpy as np
def bad_get(mydict):
    '''Extract the name field using get()'''
    output = mydict.get('name', None)
    return output

def good_get(mydict):
    '''Extract the name field using if-else'''
    if 'name' in mydict:
        output = mydict['name']
    else:
        output = None
    return output
name_dict = dict()
name_dict['name'] = np.zeros((5000,5000))
```
On my system, I notice the following difference (using iPython)
```
%%timeit
bad_get(name_dict)
The slowest run took 7.75 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 247 ns per loop
```
Compared to
```
%%timeit
good_get(name_dict)
1000000 loops, best of 3: 188 ns per loop
```
This may seem like a small difference, but in for some arrays the difference appears to be even more dramatic. What causes this behavior, and is there some way I should alter my use of the `get()` function? | Python has to do more work for `dict.get()`:
* `get` is an attribute, so Python has to look this up, and then bind the descriptor found to the dictionary instance.
* `()` is a call, so the current frame has to be pushed on the stack, a call has to be made, then the frame has to be popped again from the stack to continue.
The `[...]` notation, used with a `dict`, doesn't require a separate attribute step or frame push and pop.
You can see the difference when you use the [Python bytecode disassembler `dis`](https://docs.python.org/2/library/dis.html):
```
>>> import dis
>>> dis.dis(compile('d[key]', '', 'eval'))
1 0 LOAD_NAME 0 (d)
3 LOAD_NAME 1 (key)
6 BINARY_SUBSCR
7 RETURN_VALUE
>>> dis.dis(compile('d.get(key)', '', 'eval'))
1 0 LOAD_NAME 0 (d)
3 LOAD_ATTR 1 (get)
6 LOAD_NAME 2 (key)
9 CALL_FUNCTION 1
12 RETURN_VALUE
```
so the `d[key]` expression only has to execute a `BINARY_SUBSCR` opcode, while `d.get(key)` adds a `LOAD_ATTR` opcode. `CALL_FUNCTION` is a lot more expensive than `BINARY_SUBSCR` on a built-in type (custom types with `__getitem__` methods still end up doing a function call).
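One practical consequence: when `dict.get` really is needed in a hot loop, the attribute lookup can be hoisted out by binding the method once. This is a common micro-optimization, not something from the question itself, and the names below are illustrative:

```python
mydict = {'name': 1}
get = mydict.get  # resolve the attribute a single time, outside the loop

values = [get(k) for k in ('name', 'missing')]
print(values)  # [1, None]
```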
If the *majority* of your keys exist in the dictionary, you could use `try...except KeyError` to handle missing keys:
```
try:
    return mydict['name']
except KeyError:
    return None
```
Exception handling is cheap if there are no exceptions. |
Is there a "wildcard method" in Python? | 36,574,812 | 2 | 2016-04-12T13:29:30Z | 36,574,885 | 9 | 2016-04-12T13:32:27Z | [
"python",
"class",
"methods"
] | I am looking for a way to use a method of a class which is not defined in that class, but handled dynamically. What I would like to achieve, to take an example, is to move from
```
class Hello:
    def aaa(self, msg=""):
        print("{msg} aaa".format(msg=msg))

    def bbb(self, msg=""):
        print("{msg} bbb".format(msg=msg))

if __name__ == "__main__":
    h = Hello()
    h.aaa("hello")
    h.bbb("hello")
# hello aaa
# hello bbb
```
to the possibility of using `aaa` and `bbb` (and others) within the class without the need to define them explicitly. For the example above that would be a construction which receives the name of the method used (`aaa` for instance) and format a message accordingly.
In other other words, a "wildcard method" which would itself handle its name and perform conditional actions depending on the name. In pseudocode (to replicate the example above)
```
def *wildcard*(self, msg=""):
    method = __name__which__was__used__to__call__me__
    print("{msg} {method}".format(msg=msg, method=method))
```
Is such a construction possible? | You could overload the class' [`__getattr__`](https://docs.python.org/3/reference/datamodel.html#object.__getattr__) method:
```
class Hello:
    def __getattr__(self, name):
        def f(msg=""):
            print("{} {}".format(msg, name))
        return f

if __name__ == "__main__":
    h = Hello()
    h.aaa("hello")
    h.bbb("hello")
```
Result:
```
hello aaa
hello bbb
``` |
Square root of complex numbers in python | 36,584,466 | 11 | 2016-04-12T21:32:51Z | 36,584,865 | 8 | 2016-04-12T22:03:22Z | [
"python",
"math",
"complex-numbers"
] | I have run across some confusing behaviour with square roots of complex numbers in python. Running this code:
```
from cmath import sqrt
a = 0.2
b = 0.2 + 0j
print(sqrt(a / (a - 1)))
print(sqrt(b / (b - 1)))
```
gives the output
```
0.5j
-0.5j
```
A similar thing happens with
```
print(sqrt(-1 * b))
print(sqrt(-b))
```
It appears these pairs of statements should give the same answer? | Both answers (`+0.5j` and `-0.5j`) are correct, since they are [complex conjugates](https://en.wikipedia.org/wiki/Complex_conjugate) -- i.e. the real part is identical, and the imaginary part is sign-flipped.
Looking at the [code](https://hg.python.org/cpython/file/tip/Modules/cmathmodule.c#l732) makes the behavior clear - the imaginary part of the result always has the *same sign* as the imaginary part of the input, as seen in lines 790 and 793:
```
r.imag = copysign(d, z.imag);
```
Since `a/(a-1)` is `-0.25`, which is implicitly `(-0.25+0j)` with a *positive* zero imaginary part, you get the positive root; `b/(b-1)` produces `(-0.25-0j)`, because the complex division multiplies the zero imaginary part by a negative real number (`0.0 * -0.8` is `-0.0` under IEEE 754 signed-zero rules), so your result is similarly negative.
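The dependence on the sign of the zero imaginary part can be checked directly:

```python
from cmath import sqrt

print(sqrt(complex(-0.25, 0.0)))   # 0.5j
print(sqrt(complex(-0.25, -0.0)))  # -0.5j: the sign of the zero picks the branch
```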
EDIT: [This question](http://stackoverflow.com/questions/13387782/format-of-complex-number-in-python) has some useful discussion on the same issue. |
Splitting a python list into a list of overlapping chunks | 36,586,897 | 3 | 2016-04-13T01:21:52Z | 36,586,925 | 7 | 2016-04-13T01:26:03Z | [
"python"
] | Hi, this question is similar to [Slicing a list into a list of sub-lists](http://stackoverflow.com/questions/2231663/slicing-a-list-into-a-list-of-sub-lists), but in my case I want to include the last element of each previous sub-list as the first element of the next sub-list. Also, the last sub-list must always have at least two elements.
eg:
```
list_ = ['a','b','c','d','e','f','g','h']
```
the result for size 3 sub-list:
```
resultant_list = [['a','b','c'],['c','d','e'],['e','f','g'],['g','h']]
``` | ```
>>> list_ = ['a','b','c','d','e','f','g','h']
>>> n = 3 # group size
>>> m = 1 # overlap size
>>> [list_[i:i+n-m+1] for i in xrange(0,len(list_), n-m)]
[['a', 'b', 'c'], ['c', 'd', 'e'], ['e', 'f', 'g'], ['g', 'h']]
``` |
What does x[x < 2] = 0 mean in Python? | 36,603,042 | 83 | 2016-04-13T15:27:15Z | 36,603,120 | 44 | 2016-04-13T15:31:00Z | [
"python",
"python-2.7",
"numpy"
] | I came across some code with a line similar to
```
x[x<2]=0
```
Playing around with variations, I am still stuck on what this syntax does.
Examples:
```
>>> x = [1,2,3,4,5]
>>> x[x<2]
1
>>> x[x<3]
1
>>> x[x>2]
2
>>> x[x<2]=0
>>> x
[0, 2, 3, 4, 5]
``` | ```
>>> x = [1,2,3,4,5]
>>> x<2
False
>>> x[False]
1
>>> x[True]
2
```
The bool is simply converted to an integer. The index is either 0 or 1. |
What does x[x < 2] = 0 mean in Python? | 36,603,042 | 83 | 2016-04-13T15:27:15Z | 36,603,274 | 113 | 2016-04-13T15:37:54Z | [
"python",
"python-2.7",
"numpy"
] | I came across some code with a line similar to
```
x[x<2]=0
```
Playing around with variations, I am still stuck on what this syntax does.
Examples:
```
>>> x = [1,2,3,4,5]
>>> x[x<2]
1
>>> x[x<3]
1
>>> x[x>2]
2
>>> x[x<2]=0
>>> x
[0, 2, 3, 4, 5]
``` | This only makes sense with **[NumPy](http://en.wikipedia.org/wiki/NumPy) arrays**. The behavior with lists is useless, and specific to Python 2 (not Python 3). You may want to double-check if the original object was indeed a NumPy array (see further below) and not a list.
But in your code here, x is a simple list.
Since
```
x < 2
```
is `False`, i.e. `0`,
`x[x<2]` is `x[0]`
`x[0]` gets changed.
Conversely, `x[x>2]` is `x[True]` or `x[1]`
So, `x[1]` gets changed.
**Why does this happen?**
The rules for comparison are:
1. When you order two strings or two numeric types the ordering is done in the expected way (lexicographic ordering for string, numeric ordering for integers).
2. When you order a numeric and a non-numeric type, the numeric type comes first.
3. When you order two incompatible types where neither is numeric, they are ordered by the alphabetical order of their typenames:
So, we have the following order
numeric < list < string < tuple
See the accepted answer for *[How does Python compare string and int?](http://stackoverflow.com/questions/3270680/how-does-python-compare-string-and-int)*.
**If x is a NumPy array**, then the syntax makes more sense because of **boolean array indexing**. In that case, `x < 2` isn't a boolean at all; it's an array of booleans representing whether each element of `x` was less than 2. `x[x < 2] = 0` then selects the elements of `x` that were less than 2 and sets those cells to 0. See *[Indexing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html)*.
```
>>> x = np.array([1., -1., -2., 3])
>>> x < 0
array([False, True, True, False], dtype=bool)
>>> x[x < 0] += 20 # All elements < 0 get increased by 20
>>> x
array([ 1., 19., 18., 3.]) # Only elements < 0 are affected
``` |
What does x[x < 2] = 0 mean in Python? | 36,603,042 | 83 | 2016-04-13T15:27:15Z | 36,606,250 | 13 | 2016-04-13T18:08:26Z | [
"python",
"python-2.7",
"numpy"
] | I came across some code with a line similar to
```
x[x<2]=0
```
Playing around with variations, I am still stuck on what this syntax does.
Examples:
```
>>> x = [1,2,3,4,5]
>>> x[x<2]
1
>>> x[x<3]
1
>>> x[x>2]
2
>>> x[x<2]=0
>>> x
[0, 2, 3, 4, 5]
``` | The original code in your question works only in Python 2. If `x` is a `list` in Python 2, the comparison `x < y` is `False` if `y` is an `int`eger. This is because it does not make sense to compare a list with an integer. However in Python 2, if the operands are not comparable, the comparison is based in CPython on the [**alphabetical ordering of the names of the types**](https://docs.python.org/2/library/stdtypes.html#comparisons); additionally **all numbers come first in mixed-type comparisons**. This is not even spelled out in the documentation of CPython 2, and different Python 2 implementations could give different results. That is `[1, 2, 3, 4, 5] < 2` evaluates to `False` because `2` is a number and thus "smaller" than a `list` in CPython. This mixed comparison was eventually [deemed to be too obscure a feature](http://stackoverflow.com/a/2384139/918959), and was removed in Python 3.0.
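In Python 3 the same comparison fails immediately instead:

```python
x = [1, 2, 3, 4, 5]
try:
    x < 2
except TypeError as e:
    print(e)  # e.g. "'<' not supported between instances of 'list' and 'int'"
```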
---
Now, the result of `<` is a `bool`; and [`bool` is a *subclass* of `int`](https://www.python.org/dev/peps/pep-0285/):
```
>>> isinstance(False, int)
True
>>> isinstance(True, int)
True
>>> False == 0
True
>>> True == 1
True
>>> False + 5
5
>>> True + 5
6
```
So basically you're taking the element 0 or 1 depending on whether the comparison is true or false.
---
If you try the code above in Python 3, you will get `TypeError: unorderable types: list() < int()` due to [a change in Python 3.0](https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons):
> **Ordering Comparisons**
>
> Python 3.0 has simplified the rules for ordering comparisons:
>
> The ordering comparison operators (`<`, `<=`, `>=`, `>`) raise a `TypeError` exception when the operands don't have a meaningful natural ordering. Thus, expressions like `1 < ''`, `0 > None` or `len <= len` are no longer valid, and e.g. `None < None` raises `TypeError` instead of returning `False`. A corollary is that sorting a heterogeneous list no longer makes sense: all the elements must be comparable to each other. Note that this does not apply to the `==` and `!=` operators: objects of different incomparable types always compare unequal to each other.
---
There are many datatypes that *overload* the comparison operators to do something *different* (dataframes from pandas, numpy's arrays). If the code that you were using did something else, it was because `x` was *not a `list`*, but an instance of some other class with operator `<` overridden to return a value that is not a `bool`; and this value was then handled specially by `x[]` (aka `__getitem__`/`__setitem__`) |
What does x[x < 2] = 0 mean in Python? | 36,603,042 | 83 | 2016-04-13T15:27:15Z | 36,619,440 | 8 | 2016-04-14T09:47:42Z | [
"python",
"python-2.7",
"numpy"
] | I came across some code with a line similar to
```
x[x<2]=0
```
Playing around with variations, I am still stuck on what this syntax does.
Examples:
```
>>> x = [1,2,3,4,5]
>>> x[x<2]
1
>>> x[x<3]
1
>>> x[x>2]
2
>>> x[x<2]=0
>>> x
[0, 2, 3, 4, 5]
``` | This has one more use: code golf. Code golf is the art of writing programs that solve some problem in as few source code bytes as possible.
```
return(a,b)[c<d]
```
is roughly equivalent to
```
if c < d:
    return b
else:
    return a
```
except that both a and b are evaluated in the first version, but not in the second version.
`c<d` evaluates to `True` or `False`.
`(a, b)` is a tuple.
Indexing on a tuple works like indexing on a list: `(3,5)[1]` == `5`.
`True` is equal to `1` and `False` is equal to `0`.
1. `(a,b)[c<d]`
2. `(a,b)[True]`
3. `(a,b)[1]`
4. `b`
or for `False`:
1. `(a,b)[c<d]`
2. `(a,b)[False]`
3. `(a,b)[0]`
4. `a`
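The same selection can be packaged as a runnable function (the names here are illustrative):

```python
def pick(a, b, c, d):
    return (a, b)[c < d]  # bool index: False -> 0 -> a, True -> 1 -> b

print(pick('no', 'yes', 1, 2))  # 'yes', since 1 < 2 is True
print(pick('no', 'yes', 2, 1))  # 'no', since 2 < 1 is False
```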
There's a good list on the stack exchange network of many nasty things you can do to python in order to save a few bytes. <http://codegolf.stackexchange.com/questions/54/tips-for-golfing-in-python>
Although in normal code this should never be used, and in your case it would mean that `x` acts both as something that can be compared to an integer and as a container that supports slicing, which is a very unusual combination. It's probably Numpy code, as others have pointed out. |
Efficiently merge lists into a list of dictionaries | 36,612,437 | 2 | 2016-04-14T01:46:51Z | 36,612,461 | 12 | 2016-04-14T01:49:34Z | [
"python",
"list",
"dictionary"
] | I have 2 lists and I want to merge them into a list of dictionaries.
The code I have:
```
import pprint
list1 = [1, 2, 3, 4]
list2 = [0, 1, 1, 2]
newlist = []
for i in range(0, len(list1)):
    newdict = {}
    newdict["original"] = list1[i]
    newdict["updated"] = list2[i]
    newlist.append(newdict)
pprint.pprint(newlist)
```
Output:
```
[{'original': 1, 'updated': 0},
{'original': 2, 'updated': 1},
{'original': 3, 'updated': 1},
{'original': 4, 'updated': 2}]
```
Is there a better or faster way to do this? | You can [zip](https://docs.python.org/3/library/functions.html#zip) your two lists and then use a list comprehension, where you create your dictionary as each item in the list:
```
list1=[1,2,3,4]
list2=[0,1,1,2]
new_list = [{'original': v1, 'updated': v2} for v1, v2 in zip(list1, list2)]
print(new_list)
```
Output:
```
[{'updated': 0, 'original': 1}, {'updated': 1, 'original': 2}, {'updated': 1, 'original': 3}, {'updated': 2, 'original': 4}]
``` |
List of lists in to list of tuples, reordered | 36,614,053 | 6 | 2016-04-14T04:43:50Z | 36,614,070 | 10 | 2016-04-14T04:46:16Z | [
"python",
"list",
"tuples"
] | How, Pythonically, do I turn this:
```
[[x1,y1], [x2,y2]]
```
Into:
```
[(x1,x2),(y1,y2)]
``` | Use [`zip`](https://docs.python.org/3/library/functions.html#zip) with the `*` argument-unpacking operator. (The session below is Python 2; in Python 3, `zip` returns an iterator, so wrap the result in `list(...)`.)
```
>>> l = [['x1','y1'], ['x2','y2']]
>>> zip(*l)
[('x1', 'x2'), ('y1', 'y2')]
``` |
ignoring backslash character in python | 36,623,916 | 2 | 2016-04-14T13:00:26Z | 36,624,033 | 9 | 2016-04-14T13:05:31Z | [
"python"
] | This one is a bit tricky I think.
If I have:
```
a = "fwd"
b = "\fwd"
```
how can I ignore the `"\"` so something like
```
print(a in b)
```
can evaluate to True? | You don't have `fwd` in `b`. You have `wd`, preceded by [ASCII codepoint 0C, the FORM FEED character](https://en.wikipedia.org/wiki/Page_break#Form_feed). That's the value Python puts there when you use a `\f` escape sequence in a regular string literal.
Double the backslash if you want to include a backslash or use a raw string literal:
```
b = '\\fwd'
b = r'\fwd'
```
Now `a in b` works:
```
>>> 'fwd' in '\\fwd'
True
>>> 'fwd' in r'\fwd'
True
```
See the [*String literals* documentation](https://docs.python.org/2/reference/lexical_analysis.html#string-literals):
> Unless an `'r'` or `'R'` prefix is present, escape sequences in strings are interpreted according to rules similar to those used by Standard C. The recognized escape sequences are:
>
> *[...]*
>
> `\f` ASCII Formfeed (FF) |
Compare two large dictionaries and create lists of values for keys they have in common | 36,628,586 | 18 | 2016-04-14T16:16:09Z | 36,628,623 | 27 | 2016-04-14T16:18:23Z | [
"python",
"dictionary"
] | I have two dictionaries like:
```
dict1 = { (1,2) : 2, (2,3): 3, (1,3): 3}
dict2 = { (1,2) : 1, (1,3): 2}
```
What I want as output is two lists of values for the keys that exist in both dictionaries:
```
[2,3]
[1,2]
```
What I am doing right now is something like this:
```
list1 = []
list2 = []
for key in dict1.keys():
    if key in dict2.keys():
        list1.append(dict1.get(key))
        list2.append(dict2.get(key))
```
This code takes too long to run, which is not acceptable. Is there a more efficient way of doing it? | ```
commons = set(dict1).intersection(set(dict2))
list1 = [dict1[k] for k in commons]
list2 = [dict2[k] for k in commons]
``` |
Compare two large dictionaries and create lists of values for keys they have in common | 36,628,586 | 18 | 2016-04-14T16:16:09Z | 36,628,627 | 14 | 2016-04-14T16:18:41Z | [
"python",
"dictionary"
] | I have two dictionaries like:
```
dict1 = { (1,2) : 2, (2,3): 3, (1,3): 3}
dict2 = { (1,2) : 1, (1,3): 2}
```
What I want as output is two lists of values for the keys that exist in both dictionaries:
```
[2,3]
[1,2]
```
What I am doing right now is something like this:
```
list1 = []
list2 = []
for key in dict1.keys():
    if key in dict2.keys():
        list1.append(dict1.get(key))
        list2.append(dict2.get(key))
```
This code takes too long to run, which is not acceptable. Is there a more efficient way of doing it? | Don't use `dict.keys`. On Python 2.x, it creates a new list every time it is called (an `O(N)` operation -- and `list.__contains__` is another `O(N)` operation on average). Just rely on the fact that dictionaries are directly iterable (with `O(1)` key lookup):
```
list1 = []
list2 = []
for key in dict1:
if key in dict2:
list1.append(dict1.get(key))
list2.append(dict2.get(key))
```
---
Note that on python2.7, you can use `viewkeys` to get the intersection directly:
```
>>> a = {'foo': 'bar', 'baz': 'qux'}
>>> b = {'foo': 'bar'}
>>> a.viewkeys() & b
set(['foo'])
```
(on python3.x, you *can* use `keys` here instead of `viewkeys`)
```
for key in dict1.viewkeys() & dict2:
    list1.append(dict1[key])
    list2.append(dict2[key])
``` |
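On Python 3 the same idea reads even more directly, since `dict.keys()` returns a set-like view (a small sketch using the question's data):

```python
dict1 = {(1, 2): 2, (2, 3): 3, (1, 3): 3}
dict2 = {(1, 2): 1, (1, 3): 2}

list1, list2 = [], []
# On Python 3, keys() views support set operations directly
for key in dict1.keys() & dict2.keys():
    list1.append(dict1[key])
    list2.append(dict2[key])

assert sorted(list1) == [2, 3]
assert sorted(list2) == [1, 2]
```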
How to perfectly convert one-element list to tuple in Python? | 36,637,564 | 2 | 2016-04-15T03:04:33Z | 36,637,630 | 7 | 2016-04-15T03:10:35Z | [
"python",
"python-3.x",
"tuples",
"python-3.5"
] | So I am trying to do this:
```
tuple([1])
```
The output I expect is :
```
(1)
```
However, I got this:
```
(1,)
```
But if I do this:
```
tuple([1,2])
```
It works perfectly! like this:
```
(1,2)
```
This is so weird that I don't know why the tuple function cause this result.
Please help me to fix it. | This is such a common question that the [Python Wiki has a page dedicated to it](https://wiki.python.org/moin/TupleSyntax):
> **One Element Tuples**
>
> One-element tuples look like:
>
> `1,`
>
> The essential element here is the trailing comma. As for any
> expression, parentheses are optional, so you may also write
> one-element tuples like
>
> `(1,)`
>
> but it is the comma, not the parentheses, that define the tuple. |
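The point about the comma can be checked directly (a minimal demonstration):

```python
# Parentheses alone do not create a tuple -- the comma does
assert (1) == 1                 # just the integer 1, parenthesized
assert type((1,)) is tuple      # the trailing comma makes a one-element tuple
assert tuple([1]) == (1,)       # so tuple([1]) is displayed as (1,)
assert tuple([1, 2]) == (1, 2)  # with two elements there is no ambiguity
```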
Exceptions catching performance in python | 36,642,633 | 7 | 2016-04-15T08:57:31Z | 36,738,324 | 8 | 2016-04-20T08:41:27Z | [
"python",
"performance",
"exception-handling"
] | I know exceptions in python are fast when it comes to the `try` but that it may be expensive when it comes to the catch.
Does this mean that:
```
try:
some code
except MyException:
pass
```
is faster than this?
```
try:
some code
except MyException as e:
pass
``` | In addition to Francesco's answer, it seems that one of the (relatively) expensive part of the catch is the exception matching:
```
>>> timeit.timeit('try:\n raise KeyError\nexcept KeyError:\n pass', number=1000000 )
1.1587663322268327
>>> timeit.timeit('try:\n raise KeyError\nexcept:\n pass', number=1000000 )
0.9180641582179874
```
Looking at the (CPython 2) disassembly:
```
>>> def f():
... try:
... raise KeyError
... except KeyError:
... pass
...
>>> def g():
... try:
... raise KeyError
... except:
... pass
...
>>> dis.dis(f)
2 0 SETUP_EXCEPT 10 (to 13)
3 3 LOAD_GLOBAL 0 (KeyError)
6 RAISE_VARARGS 1
9 POP_BLOCK
10 JUMP_FORWARD 17 (to 30)
4 >> 13 DUP_TOP
14 LOAD_GLOBAL 0 (KeyError)
17 COMPARE_OP 10 (exception match)
20 POP_JUMP_IF_FALSE 29
23 POP_TOP
24 POP_TOP
25 POP_TOP
5 26 JUMP_FORWARD 1 (to 30)
>> 29 END_FINALLY
>> 30 LOAD_CONST 0 (None)
33 RETURN_VALUE
>>> dis.dis(g)
2 0 SETUP_EXCEPT 10 (to 13)
3 3 LOAD_GLOBAL 0 (KeyError)
6 RAISE_VARARGS 1
9 POP_BLOCK
10 JUMP_FORWARD 7 (to 20)
4 >> 13 POP_TOP
14 POP_TOP
15 POP_TOP
5 16 JUMP_FORWARD 1 (to 20)
19 END_FINALLY
>> 20 LOAD_CONST 0 (None)
23 RETURN_VALUE
```
Note that the catch block loads the Exception anyway and matches it against a `KeyError`. Indeed, looking at the `except KeyError as ke` case:
```
>>> def f2():
... try:
... raise KeyError
... except KeyError as ke:
... pass
...
>>> dis.dis(f2)
2 0 SETUP_EXCEPT 10 (to 13)
3 3 LOAD_GLOBAL 0 (KeyError)
6 RAISE_VARARGS 1
9 POP_BLOCK
10 JUMP_FORWARD 19 (to 32)
4 >> 13 DUP_TOP
14 LOAD_GLOBAL 0 (KeyError)
17 COMPARE_OP 10 (exception match)
20 POP_JUMP_IF_FALSE 31
23 POP_TOP
24 STORE_FAST 0 (ke)
27 POP_TOP
5 28 JUMP_FORWARD 1 (to 32)
>> 31 END_FINALLY
>> 32 LOAD_CONST 0 (None)
35 RETURN_VALUE
```
The only difference is a single `STORE_FAST` to store the exception value (in case of a match). Similarly, having several exception matches:
```
>>> def f():
... try:
... raise ValueError
... except KeyError:
... pass
... except IOError:
... pass
... except SomeOtherError:
... pass
... except:
... pass
...
>>> dis.dis(f)
2 0 SETUP_EXCEPT 10 (to 13)
3 3 LOAD_GLOBAL 0 (ValueError)
6 RAISE_VARARGS 1
9 POP_BLOCK
10 JUMP_FORWARD 55 (to 68)
4 >> 13 DUP_TOP
14 LOAD_GLOBAL 1 (KeyError)
17 COMPARE_OP 10 (exception match)
20 POP_JUMP_IF_FALSE 29
23 POP_TOP
24 POP_TOP
25 POP_TOP
5 26 JUMP_FORWARD 39 (to 68)
6 >> 29 DUP_TOP
30 LOAD_GLOBAL 2 (IOError)
33 COMPARE_OP 10 (exception match)
36 POP_JUMP_IF_FALSE 45
39 POP_TOP
40 POP_TOP
41 POP_TOP
7 42 JUMP_FORWARD 23 (to 68)
8 >> 45 DUP_TOP
46 LOAD_GLOBAL 3 (SomeOtherError)
49 COMPARE_OP 10 (exception match)
52 POP_JUMP_IF_FALSE 61
55 POP_TOP
56 POP_TOP
57 POP_TOP
9 58 JUMP_FORWARD 7 (to 68)
10 >> 61 POP_TOP
62 POP_TOP
63 POP_TOP
11 64 JUMP_FORWARD 1 (to 68)
67 END_FINALLY
>> 68 LOAD_CONST 0 (None)
71 RETURN_VALUE
```
Will duplicate the exception and try to match it against every exception listed, one by one until it finds a match, which is (probably) what is being hinted at as 'poor catch performance'. |
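A self-contained version of the timing comparison from the top of this answer. Exact numbers vary by machine and interpreter version, so no expected values are shown:

```python
import timeit

n = 20000
bare = timeit.timeit(
    "try:\n    raise KeyError\nexcept:\n    pass", number=n)
matched = timeit.timeit(
    "try:\n    raise KeyError\nexcept KeyError:\n    pass", number=n)

# Both are small, but the matched variant pays for the extra
# exception-match comparison in the except clause.
print("bare except:     %.4fs" % bare)
print("except KeyError: %.4fs" % matched)
```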
difference between ways to generate index list in python | 36,647,439 | 4 | 2016-04-15T12:41:01Z | 36,647,879 | 8 | 2016-04-15T13:00:01Z | [
"python",
"indexing"
] | I am reading Joel Grus's data science from scratch book and found something a bit mysterious. Basically, in some sample code, he wrote
```
a = [1, 2 ,3 ,4]
xs = [i for i,_ in enumerate(a)]
```
Why would he prefer to do this way? Instead of
```
xs = range(len(a))
``` | I looked at [the code available on github](https://github.com/joelgrus/data-science-from-scratch) and frankly, I do not see any other reason for this except the personal preference of the author.
However, the result needs to be a `list` in places like [this](https://github.com/joelgrus/data-science-from-scratch/blob/2d4063dbcb19953bcd3f1d488cb148beb4911d14/code-python3/gradient_descent.py#L110):
```
indexes = [i for i, _ in enumerate(data)] # create a list of indexes
random.shuffle(indexes) # shuffle them
for i in indexes: # return the data in that order
yield data[i]
```
Using bare `range(len(data))` in that part on Python 3 would be wrong, because `random.shuffle()` requires *a mutable sequence* as the argument, and the `range` objects in Python 3 are immutable sequences.
---
I personally would use `list(range(len(data)))` on Python 3 in the case that I linked to, as it is guaranteed to be more efficient **and** would fail if a generator/iterator was passed in by accident, instead of a sequence. |
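The `random.shuffle()` point is easy to demonstrate (a small sketch; Python 3 assumed):

```python
import random

data = ['a', 'b', 'c', 'd']

indexes = list(range(len(data)))  # a real, mutable list of indices
random.shuffle(indexes)           # fine: lists support item assignment
assert sorted(indexes) == [0, 1, 2, 3]

# A bare range object, however, is an immutable sequence on Python 3
try:
    random.shuffle(range(len(data)))
except TypeError:
    pass  # 'range' object does not support item assignment
else:
    raise AssertionError("expected shuffle(range(...)) to fail")
```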
difference between ways to generate index list in python | 36,647,439 | 4 | 2016-04-15T12:41:01Z | 36,659,915 | 12 | 2016-04-16T03:53:56Z | [
"python",
"indexing"
] | I am reading Joel Grus's data science from scratch book and found something a bit mysterious. Basically, in some sample code, he wrote
```
a = [1, 2 ,3 ,4]
xs = [i for i,_ in enumerate(a)]
```
Why would he prefer to do this way? Instead of
```
xs = range(len(a))
``` | Answer: personal preference of the author. I find
`[i for i, _ in enumerate(xs)]`
clearer and more readable than
`list(range(len(xs)))`
which feels clunky to me. (I don't like reading the nested functions.) Your mileage may vary (and apparently does!).
That said, I am pretty sure I didn't say *not to* do the second, I just happen to prefer the first.
Source: I am the author.
P.S. If you're the commenter who had no intention of reading **anything** I write about Python, I apologize if you read this answer by accident. |
Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so | 36,659,453 | 4 | 2016-04-16T02:17:52Z | 36,967,410 | 11 | 2016-05-01T13:49:10Z | [
"python",
"anaconda",
"intel-mkl"
] | I am running a python script and I get this error:
```
Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
```
Both files are present in the anaconda2/lib directory. How can I fix this error? Thanks. | If you use conda, try these two commands:
conda install nomkl numpy scipy scikit-learn numexpr
conda remove mkl mkl-service
It should fix your problem. |
Multiple inputs from one input | 36,663,600 | 5 | 2016-04-16T11:34:06Z | 36,663,673 | 8 | 2016-04-16T11:40:02Z | [
"python",
"python-3.x"
] | I'm writing a function to append an input to a list. I want it so that when you input `280 2` the list becomes `['280', '280']` instead of `['280 2']`. | ```
>>> number, factor = input().split()
280 2
>>> [number]*int(factor)
['280', '280']
```
Remember that concatenating a list with itself with the \* operator can have [unexpected results](http://stackoverflow.com/questions/240178/python-list-of-lists-changes-reflected-across-sublists-unexpectedly) if your list contains mutable elements - but in your case it's fine.
edit:
Solution that can handle inputs without a factor:
```
>>> def multiply_input():
... *head, tail = input().split()
... return head*int(tail) if head else [tail]
...
>>> multiply_input()
280 3
['280', '280', '280']
>>> multiply_input()
280
['280']
```
Add error checking as needed (for example for empty inputs) depending on your use case. |
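The same logic is easier to test with the string passed in as a parameter instead of read from `input()` (the function name here is illustrative):

```python
def multiply_tokens(text):
    """Split the input; repeat the value by the trailing factor,
    or return it as-is when no factor is given."""
    *head, tail = text.split()
    return head * int(tail) if head else [tail]

assert multiply_tokens("280 2") == ["280", "280"]
assert multiply_tokens("280 3") == ["280", "280", "280"]
assert multiply_tokens("280") == ["280"]
```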
Python: what's the difference - abs and operator.abs | 36,665,110 | 3 | 2016-04-16T13:59:38Z | 36,665,125 | 7 | 2016-04-16T14:01:17Z | [
"python",
"operators"
] | In python what is the difference between :
`abs(a)` and `operator.abs(a)`
They are the very same and they work alike. If they are the very same, then why were two separate functions that do the same thing made?
If there is some specific functionality for any one of it - please do explain it. | There is no difference. The documentation even says so:
```
>>> import operator
>>> print(operator.abs.__doc__)
abs(a) -- Same as abs(a).
```
It is implemented as a wrapper just so the documentation can be updated:
```
from builtins import abs as _abs
# ...
def abs(a):
"Same as abs(a)."
return _abs(a)
```
(Note, the above Python implementation is only used if the [C module itself](https://hg.python.org/cpython/file/v3.5.1/Modules/_operator.c#l78) can't be loaded).
It is there *purely* to complement the other (mathematical) operators; e.g. if you wanted to do dynamic operator lookups on that module you don't have to special-case `abs()`. |
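That last point, dynamic lookup, is the practical reason the duplicate exists; for example, a tiny expression evaluator can fetch every operation, `abs` included, from the one module (a hedged sketch):

```python
import operator

# Every operation, unary abs included, comes from the same module,
# so no special case is needed for the abs() builtin.
unary_ops = {"abs": operator.abs, "neg": operator.neg}
binary_ops = {"add": operator.add, "mul": operator.mul}

assert unary_ops["abs"](-5) == abs(-5) == 5
assert unary_ops["neg"](3) == -3
assert binary_ops["mul"](4, 6) == 24
```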
Bigquery - Insert new data row into table by python | 36,673,456 | 3 | 2016-04-17T06:40:11Z | 36,849,400 | 7 | 2016-04-25T19:26:45Z | [
"python",
"google-bigquery"
] | I read many documents about google bigquery-python, but I can't understand how to manage BigQuery data with Python code.
At first, I make a new table as below.
```
credentials = GoogleCredentials.get_application_default()
service = build('bigquery', 'v2', credentials = credentials)
project_id = 'my_project'
dataset_id = 'my_dataset'
table_id = 'my_table'
project_ref = {'projectId': project_id}
dataset_ref = {'datasetId': dataset_id,
'projectId': project_id}
table_ref = {'tableId': table_id,
'datasetId': dataset_id,
'projectId': project_id}
dataset = {'datasetReference': dataset_ref}
table = {'tableReference': table_ref}
table['schema'] = {'fields': [
{'name': 'id', 'type': 'string'},
...
]}
table = service.tables().insert(body = table, **dataset_ref).execute()
```
And then I want to insert a data into this table, so I tried to do like below.
```
fetch_list = []
patch = {'key': 'value'}
fetch_list.append(patch)
table = service.tables().patch(body = fetch_list, **table_ref).execute()
```
But nothing happened.
How can I update new data into bigquery table?
Please show me some example codes. | There are a few different ways that you can use to insert data to BQ.
For a deeper understanding of how the python-api works, here's everything you'll need: [bq-python-api](https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/) (at first the docs are somewhat scary, but after you get the hang of it it's quite simple).
There are 2 main methods that I use to insert data to BQ. The first one is [data streaming](https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/bigquery_v2.tabledata.html#insertAll) and it's supposed to be used when you can insert row by row in a real time fashion. Code example:
```
import uuid
def stream_data(self, table, data, schema):
# first checks if table already exists. If it doesn't, then create it
r = self.service.tables().list(projectId=your_project_id,
datasetId=your_dataset_id).execute()
table_exists = [row['tableReference']['tableId'] for row in
r['tables'] if
row['tableReference']['tableId'] == table]
if not table_exists:
body = {
'tableReference': {
'tableId': table,
'projectId': your_project_id,
'datasetId': your_dataset_id
},
'schema': schema
}
self.service.tables().insert(projectId=your_project_id,
datasetId=your_dataset_id,
body=body).execute()
# with table created, now we can stream the data
# to do so we'll use the tabledata().insertall() function.
body = {
'rows': [
{
'json': data,
'insertId': str(uuid.uuid4())
}
]
}
        self.service.tabledata().insertAll(projectId=your_project_id,
datasetId=your_dataset_id,
tableId=table,
body=body).execute(num_retries=5)
```
Here my `self.service` corresponds to your `service` object.
An example of input `data` that we have in our project:
```
data = {u'days_validated': '20', u'days_trained': '80', u'navigated_score': '1', u'description': 'First trial of top seller alg. No filter nor any condition is applied. Skus not present in train count as rank=0.5', u'init_cv_date': '2016-03-06', u'metric': 'rank', u'unix_date': '1461610020241117', u'purchased_score': '10', u'result': '0.32677139316724546', u'date': '2016-04-25', u'carted_score': '3', u'end_cv_date': '2016-03-25'}
```
And its correspondent `schema`:
```
schema = {u'fields': [{u'type': u'STRING', u'name': u'date', u'mode': u'NULLABLE'}, {u'type': u'INTEGER', u'name': u'unix_date', u'mode': u'NULLABLE'}, {u'type': u'STRING', u'name': u'init_cv_date', u'mode': u'NULLABLE'}, {u'type': u'STRING', u'name': u'end_cv_date', u'mode': u'NULLABLE'}, {u'type': u'INTEGER', u'name': u'days_trained', u'mode': u'NULLABLE'}, {u'type': u'INTEGER', u'name': u'days_validated', u'mode': u'NULLABLE'}, {u'type': u'INTEGER', u'name': u'navigated_score', u'mode': u'NULLABLE'}, {u'type': u'INTEGER', u'name': u'carted_score', u'mode': u'NULLABLE'}, {u'type': u'INTEGER', u'name': u'purchased_score', u'mode': u'NULLABLE'}, {u'type': u'STRING', u'name': u'description', u'mode': u'NULLABLE'}, {u'type': u'STRING', u'name': u'metric', u'mode': u'NULLABLE'}, {u'type': u'FLOAT', u'name': u'result', u'mode': u'NULLABLE'}]}
```
The other way to insert data is to use the [job insert](https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/bigquery_v2.jobs.html#insert) function. As you can see in the documentation, it accepts several sources for your data. I have an example of how you can do so by loading the results of a query into another table:
```
def create_table_from_query(self,
query,
dest_table,
how):
body = {
'configuration': {
'query': {
'destinationTable': {
'projectId': your_project_id,
'tableId': dest_table,
'datasetId': your_dataset_id
},
'writeDisposition': how,
'query': query,
},
}
}
    response = self.service.jobs().insert(projectId=your_project_id,
                                          body=body).execute()
self.wait_job_completion(response['jobReference']['jobId'])
def wait_job_completion(self, job_id):
while True:
        response = self.service.jobs().get(projectId=your_project_id,
                                           jobId=job_id).execute()
if response['status']['state'] == 'DONE':
return
```
The `how` input accepts the available options for this field in the documentation (such as "WRITE\_TRUNCATE", or "WRITE\_APPEND").
You can load the data from a csv file for instance, in this case, the `configuration` variable would be defined something along the lines:
```
"configuration": {
"load": {
        "fieldDelimiter": "\t",
        "sourceFormat": "CSV",
"destinationTable": {
"projectId": your_project_id,
"tableId": table_id,
"datasetId": your_dataset_id
},
        "writeDisposition": "WRITE_TRUNCATE",
"schema": schema,
"sourceUris": file_location_in_google_cloud_storage
},
}
```
(Using as example a csv file delimited by tabs. It could be a json file as well, the documentation will walk you through the available options).
Running jobs() requires some time to complete (that's why we created the `wait_job_completion` method). It should be cheaper, though, compared to real-time streaming.
Any questions let us know, |
Decorator for a class method that caches return value after first access | 36,684,319 | 15 | 2016-04-18T01:55:49Z | 36,684,610 | 8 | 2016-04-18T02:35:31Z | [
"python",
"caching",
"decorator",
"python-decorators"
] | # My problem, and why
I'm trying to write a decorator for a class method, `@cachedproperty`. I want it to behave so that when the method is first called, the method is replaced with its return value. I also want it to behave like `@property` so that it doesn't need to be explicitly called. Basically, it should be indistinguishable from `@property` except that **it's faster, because it only calculates the value once and then stores it. My idea is that this would not slow down instantiation like defining it in `__init__` would.** That's why I want to do this.
# What I tried
First, I tried to override the `fget` method of the `property`, but it's read-only.
Next, I figured I'd try to implement a decorator that does needs to be called the first time but then caches the values. This isn't my final goal of a property-type decorator that never needs to be called, but I thought this would be a simpler problem to tackle first. In other words, this is a not-working solution to a slightly simpler problem.
I tried:
```
def cachedproperty(func):
""" Used on methods to convert them to methods that replace themselves
with their return value once they are called. """
def cache(*args):
self = args[0] # Reference to the class who owns the method
funcname = inspect.stack()[0][3] # Name of the function, so that it can be overridden.
setattr(self, funcname, func()) # Replace the function with its value
return func() # Return the result of the function
return cache
```
However, this doesn't seem work. I tested this with:
```
>>> class Test:
... @cachedproperty
... def test(self):
... print "Execute"
... return "Return"
...
>>> Test.test
<unbound method Test.cache>
>>> Test.test()
```
but I get an error about how the class didn't pass itself to the method:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method cache() must be called with Test instance as first argument (got nothing instead)
```
At this point, me and my limited knowledge of deep Python methods are very confused, and I have no idea where my code went wrong or how to fix it. (I've never tried to write a decorator before)
# The question
How can I write a decorator that will return the result of calling a class method the first time it's accessed (like `@property` does), and be replaced with a cached value for all subsequent queries?
I hope this question isn't too confusing, I tried to explain it as well as I could. | If you don't mind alternative solutions, I'd recommend [`lru_cache`](https://docs.python.org/3/library/functools.html#functools.lru_cache)
for example
```
from functools import lru_cache
class Test:
@property
@lru_cache(maxsize=None)
def calc(self):
print("Calculating")
return 1
```
Expected output
```
In [2]: t = Test()
In [3]: t.calc
Calculating
Out[3]: 1
In [4]: t.calc
Out[4]: 1
``` |
Let a class behave like it's a list in Python | 36,688,966 | 28 | 2016-04-18T08:31:31Z | 36,690,145 | 39 | 2016-04-18T09:30:14Z | [
"python",
"list",
"python-3.x"
] | I have a class which is essentially a collection/list of things. But I want to add some extra functions to this list. What I would like, is the following:
* I have an instance `li = MyFancyList()`. Variable `li` should behave as if it were a list whenever I use it as a list: `[e for e in li]`, `li.expand(...)`, `for e in li`.
* Plus it should have some special functions like `li.fancyPrint()`, `li.getAMetric()`, `li.getName()`.
I currently use the following approach:
```
class MyFancyList:
def __iter__(self):
return self.li
def fancyFunc(self):
# do something fancy
```
This is ok for usage as iterator like `[e for e in li]`, but I do not have the full list behavior like `li.expand(...)`.
A first guess is to inherit `list` into `MyFancyList`. But is that the recommended pythonic way to do? If yes, what is to consider? If no, what would be a better approach? | If you want only part of the list behavior, use composition (i.e. your instances hold a reference to an actual list) and implement only the methods necessary for the behavior you desire. These methods should delegate the work to the actual list any instance of your class holds a reference to, for example:
```
def __getitem__(self, item):
return self.li[item] # delegate to li.__getitem__
```
Implementing `__getitem__` alone will give you a surprising amount of features, for example iteration and slicing.
```
>>> class WrappedList:
... def __init__(self, lst):
... self._lst = lst
... def __getitem__(self, item):
... return self._lst[item]
...
>>> w = WrappedList([1, 2, 3])
>>> for x in w:
... x
...
1
2
3
>>> w[1:]
[2, 3]
```
If you want the *full* behavior of a list, inherit from [`collections.UserList`](https://docs.python.org/3.2/library/collections.html#collections.UserList). `UserList` is a full Python implementation of the list datatype.
*So why not inherit from `list` directly?*
One major problem with inheriting directly from `list` (or any other builtin written in C) is that the code of the builtins may or may not call special methods overridden in classes defined by the user. Here's a relevant excerpt from the [pypy docs](http://pypy.readthedocs.org/en/latest/cpython_differences.html#subclasses-of-built-in-types):
> Officially, CPython has no rule at all for when exactly overridden method of subclasses of built-in types get implicitly called or not. As an approximation, these methods are never called by other built-in methods of the same object. For example, an overridden `__getitem__` in a subclass of dict will not be called by e.g. the built-in `get` method.
Another quote, from Luciano Ramalho's [Fluent Python](http://shop.oreilly.com/product/0636920032519.do), page 351:
> Subclassing built-in types like dict or list or str directly is error-prone
> because the built-in methods mostly ignore user-defined
> overrides. Instead of subclassing the built-ins, derive your classes
> from UserDict , UserList and UserString from the collections
> module, which are designed to be easily extended.
... and more, page 370+:
> Misbehaving built-ins: bug or feature?
> The built-in dict, list and str types are essential building blocks of Python itself, so
> they must be fast; any performance issues in them would severely impact pretty much
> everything else. That's why CPython adopted the shortcuts that cause their built-in
> methods to misbehave by not cooperating with methods overridden by subclasses.
After playing around a bit, the issues with the `list` builtin seem to be less critical (I tried to break it in Python 3.4 for a while but did not find a really obvious unexpected behavior), but I still wanted to post a demonstration of what can happen in principle, so here's one with a `dict` and a `UserDict`:
```
>>> class MyDict(dict):
... def __setitem__(self, key, value):
... super().__setitem__(key, [value])
...
>>> d = MyDict(a=1)
>>> d
{'a': 1}
>>> class MyUserDict(UserDict):
... def __setitem__(self, key, value):
... super().__setitem__(key, [value])
...
>>> m = MyUserDict(a=1)
>>> m
{'a': [1]}
```
As you can see, the `__init__` method from `dict` ignored the overridden `__setitem__` method, while the `__init__` method from our `UserDict` did not. |
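A short `UserList`-based sketch of the `MyFancyList` from the question (the `fancy_print` body is made up for illustration):

```python
from collections import UserList

class MyFancyList(UserList):
    def fancy_print(self):
        # UserList stores the wrapped list in self.data
        return ", ".join(str(e) for e in self.data)

li = MyFancyList([1, 2, 3])
li.extend([4, 5])                         # full list behavior, inherited
assert [e for e in li] == [1, 2, 3, 4, 5] # iteration works
assert li[1:3] == [2, 3]                  # slicing works, compares to lists
assert li.fancy_print() == "1, 2, 3, 4, 5"
```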
Python - Program to reduce file size is increasing file size | 36,699,062 | 2 | 2016-04-18T16:05:23Z | 36,699,172 | 7 | 2016-04-18T16:10:49Z | [
"python"
] | For University, I'm doing research into compression techniques. One experiment I'm trying to perform is replacing certain Welsh language letters (which are digraphs) with a single character.
It would be my thought that replacing two characters with a single character would reduce the file size (however marginally) or at worst keep the file size the same. I have made a Python script to do this, however it is actually increasing the file size. The original file I tested this on was ~74,400KB, and the output program was ~74,700KB.
Here is my Python code:
```
replacements = {
'ch':'Æ',
'Ch':'â ',
'CH':'â¡',
'dd':'Å',
'Dd':'â¢',
'DD':'Å',
'ff':'¤',
'Ff':'¦',
'FF':'§',
'ng':'±',
'Ng':'µ',
'NG':'¶',
'll':'º',
'Ll':'¿',
'LL':'Ã',
'ph':'Ã',
'Ph':'Ã',
'PH':'Ã',
'rh':'Ã',
'Rh':'Ã',
'RH':'Ã',
'th':'æ',
'Th':'ç',
'TH':'ð',
}
print("Input file location: ")
inLoc = input("> ")
print("Output file location: ")
outLoc = input("> ")
with open(inLoc, "r",encoding="Latin-1") as infile, open(outLoc, "w", encoding="utf-8") as outfile:
for line in infile:
for src, target in replacements.items():
line = line.replace(src, target)
outfile.write(line)
```
When I tested it on a very small text file a few lines long, I looked at the output and it was as expected.
Input.txt:
```
Lle wyt ti heddiw?
Ddoe es i at gogledd Nghymru.
```
Output.txt:
```
¿e wyt ti heÅiw?
â¢oe es i at gogleеhymru.
```
Can anyone explain what is happening? | You're changing the encoding of the file. latin-1 is always 1-byte per character, but utf-8 isn't, so some of your special characters are being encoded with multiple bytes, resulting in the increase in size. |
Django REST Framework + Django REST Swagger + ImageField | 36,701,877 | 15 | 2016-04-18T18:42:05Z | 36,876,574 | 7 | 2016-04-26T22:01:25Z | [
"python",
"django",
"django-rest-framework",
"swagger",
"django-swagger"
] | I created a simple Model with an ImageField and I wanna make an api view with django-rest-framework + django-rest-swagger, that is documented and is able to upload the file.
Here is what I got:
**`models.py`**
```
from django.utils import timezone
from django.db import models
class MyModel(models.Model):
source = models.ImageField(upload_to=u'/photos')
is_active = models.BooleanField(default=False)
created_at = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return u"photo {0}".format(self.source.url)
```
**`serializer.py`**
```
from .models import MyModel
class MyModelSerializer(serializers.ModelSerializer):
class Meta:
model = MyModel
fields = [
'id',
'source',
'created_at',
]
```
**`views.py`**
```
from rest_framework import generics
from .serializer import MyModelSerializer
class MyModelView(generics.CreateAPIView):
serializer_class = MyModelSerializer
parser_classes = (FileUploadParser, )
def post(self, *args, **kwargs):
"""
Create a MyModel
---
parameters:
- name: source
description: file
required: True
type: file
responseMessages:
- code: 201
message: Created
"""
return super(MyModelView, self).post(self, *args, **kwargs)
```
**`urls.py`**
```
from weddings.api.views import MyModelView
urlpatterns = patterns(
'',
url(r'^/api/mymodel/$', MyModelView.as_view()),
)
```
For me this should be pretty simple. However, I can't make the upload work. I always get this error response:
[](http://i.stack.imgur.com/f0RFn.png)
I've read this part of the documentation from [django-rest-framework](http://www.django-rest-framework.org/api-guide/parsers/#fileuploadparser):
`If the view used with FileUploadParser is called with a filename URL keyword argument, then that argument will be used as the filename. If it is called without a filename URL keyword argument, then the client must set the filename in the Content-Disposition HTTP header. For example Content-Disposition: attachment; filename=upload.jpg.`
However the Header is being passed by django-rest-swagger in the Request Payload property (from chrome console).
If any more info is necessary, please let me know.
I'm using `Django==1.8.8`, `djangorestframework==3.3.2` and `django-rest-swagger==0.3.4`. | I got this working by making a couple of changes to your code.
First, in `models.py`, change the `ImageField` name to `file` and use a relative path for the upload folder. When you upload a file as a binary stream, it's available in the `request.data` dictionary under the `file` key (`request.data.get('file')`), so the cleanest option is to map it to the model field with the same name.
```
from django.utils import timezone
from django.db import models
class MyModel(models.Model):
file = models.ImageField(upload_to=u'photos')
is_active = models.BooleanField(default=False)
created_at = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return u"photo {0}".format(self.file.url)
```
In `serializer.py`, rename source field to file:
```
class MyModelSerializer(serializers.ModelSerializer):
class Meta:
model = MyModel
fields = ('id', 'file', 'created_at')
```
In views.py, don't call super, but call create():
```
from rest_framework import generics
from rest_framework.parsers import FileUploadParser
from .serializer import MyModelSerializer
class MyModelView(generics.CreateAPIView):
serializer_class = MyModelSerializer
parser_classes = (FileUploadParser,)
def post(self, request, *args, **kwargs):
"""
Create a MyModel
---
parameters:
- name: file
description: file
required: True
type: file
responseMessages:
- code: 201
message: Created
"""
return self.create(request, *args, **kwargs)
```
I've used Postman Chrome extension to test this. I've uploaded images as binaries and I've manually set two headers:
```
Content-Disposition: attachment; filename=upload.jpg
Content-Type: */*
``` |
Image processing issues with blood vessels | 36,711,627 | 11 | 2016-04-19T07:38:21Z | 36,821,625 | 8 | 2016-04-24T09:54:51Z | [
"python",
"image",
"opencv",
"image-processing"
] | I'm trying to extract the blood vessels from an image, and to do so, I'm first equalizing the image, applying CLAHE histogram to obtain the following result:
```
clahe = cv2.createCLAHE(clipLimit=100.0, tileGridSize=(100,100))
self.cl1 = clahe.apply(self.result_array)
self.cl1 = 255 - self.cl1
```
[](http://i.stack.imgur.com/9DYoA.jpg)
And then I'm using OTSU threshold to extract the blood vessels, but failing to do it well:
```
self.ret, self.thresh = cv2.threshold(self.cl1, 0,255,cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((1,1),np.float32)/1
self.thresh = cv2.erode(self.thresh, kernel, iterations=3)
self.thresh = cv2.dilate(self.thresh, kernel, iterations=3)
```
Here's the result:
[](http://i.stack.imgur.com/tiT9M.jpg)
Obviously there's a lot of noise. I've tried using Median blur, but it just clusters the noise and makes it into a blob, in some places. How do I go about removing the noise to get the blood vessels?
This is the original image from which I'm trying to extract the blood vessels:
[](http://i.stack.imgur.com/n67ut.jpg) | Getting really good results is a difficult problem (you'll probably have to somehow model the structure of the blood vessels and the noise) but you can probably still do better than filtering.
One technique for addressing this kind of problems, inspired by the Canny edge detector, is using two thresholds - `[hi,low]` and classifying a pixel `p` with response `r` as belonging to a blood vessel `V` if `r > hi` || (`r > lo` && one of `p`'s neighbors is in `V`).
Also, when it comes to filtering, both bilateral filtering and meanshift filtering are good for noisy images.
```
kernel3 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
kernel5 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
kernel7 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(7,7))
t_lo = 136
t_hi = 224
blured = cv2.pyrMeanShiftFiltering(img, 3, 9)
#blured = cv2.bilateralFilter(img, 9, 32, 72)
clahe = cv2.createCLAHE(clipLimit=128.0, tileGridSize=(64, 64))
cl1 = clahe.apply(blured)
cl1 = 255 - cl1
ret, thresh_hi = cv2.threshold(cl1, t_hi, 255, cv2.THRESH_TOZERO)
ret, thresh_lo = cv2.threshold(cl1, t_lo, 255, cv2.THRESH_TOZERO)
```
[](http://i.stack.imgur.com/vpcui.png)
Low threshold image
[](http://i.stack.imgur.com/54vrR.png)
Hi threshold image
Preparations and cleanup:
```
current = np.copy(thresh_hi)
prev = np.copy(current)
prev[:] = 0
current = cv2.morphologyEx(current, cv2.MORPH_OPEN, kernel5)
iter_num = 0
max_iter = 1000
```
Not the most efficient way to do that... but easy to implement:
```
while np.sum(current - prev) > 0 and iter_num < max_iter:
    iter_num = iter_num+1
    prev = np.copy(current)
    current = cv2.dilate(current, kernel3)
    tmp = np.copy(current)
    current[np.where(thresh_lo == 0)] = 0
```
[](http://i.stack.imgur.com/lxBK3.png)
Initial mask
Remove small blobs:
```
contours, hierarchy = cv2.findContours(current, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    area = cv2.contourArea(contour)
    if area < 256:
        cv2.drawContours( current, [contour], 0, [0,0,0], -1 )
```
[](http://i.stack.imgur.com/Z5r5Q.png)
After removing small blobs
Morphological cleanup:
```
opening = cv2.morphologyEx(current, cv2.MORPH_OPEN, kernel7)
cl1[np.where(opening == 0)] = 0
```
[](http://i.stack.imgur.com/gfhQL.png)
Result
This is by no means optimal, but I think it should provide you with enough tools to start. |
List comprehension- fill arbitrary value if list is empty | 36,743,765 | 2 | 2016-04-20T12:26:02Z | 36,743,816 | 9 | 2016-04-20T12:27:59Z | [
"python",
"list",
"list-comprehension"
] | I am using a list comprehension to assign values to an object. In short, I have two lists. One which contains a collection of values and another which contains a collection of indices (from that previous list)
```
values = [1.4,1.5,1.6,1.8]
indices = [0,1]
a.newvalues = [values[i] for i in indices]
```
This works fine when indices isn't empty. However, when indices is empty I would like an arbitrary value (-1) to be assigned to `newvalues`, instead of returning an empty list.
Can anyone think of a way? | Like this?
```
values = [1.4,1.5,1.6,1.8]
indices = [0,1]
a.newvalues = [values[i] for i in indices] if indices else [-1]
```
I'm not sure I follow. When indices is empty, `a.newvalues` should take the value -1, not a list of some length? |
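For what it's worth, a quick check of both cases (the names mirror the question's snippet, minus the `a` object, which is assumed here):

```python
values = [1.4, 1.5, 1.6, 1.8]

for indices in ([0, 1], []):
    # an empty list is falsy, so the conditional expression falls back to [-1]
    newvalues = [values[i] for i in indices] if indices else [-1]
    print(indices, newvalues)
```

This prints `[1.4, 1.5]` for the non-empty index list and `[-1]` for the empty one.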
Why do these two python functions return different results? | 36,745,436 | 3 | 2016-04-20T13:30:36Z | 36,745,510 | 8 | 2016-04-20T13:33:10Z | [
"python",
"python-2.7",
"python-3.x"
] | 1-
```
def fib1(n):
    a = 0
    b = 1
    while a < n:
        print b
        a = b
        b = a+b
```
2-
```
def fib2(n):
    a, b = 0,1
    while a < n:
        print b
        a,b = b, b+a
```
On execution:
`fib1(10)` I got the wrong answer: `0 1 2 4 8`
`fib2(10)` I got the right answer: `0 1 1 2 3 5 8` | In fib 1
`a = b`
overwrites the value of `a`,
which means `a` is no longer the right value for the statement
`b = a+b`
However, in your second example both assignments happen at the same time on the line `a,b = b, b+a`, which means `a` still holds its old value when `b+a` is evaluated, so the result is correct.
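A minimal demonstration of the difference (just the assignment behavior, not the fib code itself):

```python
# sequential assignments: the second statement sees the new a
a, b = 1, 2
a = b          # a becomes 2
b = a + b      # 2 + 2 = 4, not the intended 1 + 2

# tuple assignment: the whole right-hand side is evaluated first
c, d = 1, 2
c, d = d, d + c  # c becomes 2, d becomes 2 + 1 = 3

print(a, b)  # 2 4
print(c, d)  # 2 3
```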
Automatic type conversions of user defined classes | 36,745,595 | 8 | 2016-04-20T13:36:30Z | 36,745,772 | 12 | 2016-04-20T13:43:45Z | [
"python",
"python-3.x"
] | So what I want to do is create a class that wraps an int and allows some things not normally allowed with int types. I don't really care if its not pythonic or w/e I'm just looking for results. Here is my code:
```
class tInt(int):
    def __add__(self, other):
        if type(other) == str:
            return str(self) + str(other)
        elif type(other) == int:
            return int(self) + other
        elif type(other) == float:
            return float(self) + float(other)
        else:
            return self + other

a = tInt(2)
print (a + "5")
print ("5" + a)
```
The output was.
```
>> 25
Traceback (most recent call last):
File "C:\example.py", line 14, in <module>
print ("5" + a)
TypeError: Can't convert 'tInt' object to str implicitly
```
So, the first print statement ran nicely, and gave what I expected, but the second one gave an error. I think this is because the first one is using tInt's **add** function because a appeared before + "5" and the second one used the string "5"'s **add** function first because it appeared first. I know this but I don't really know how to either force a's **add** function or allow the tInt class to be represented as a string/int/etc.. when a normal type appears before it in an operation. | You need to implement an `__radd__` method to handle the case when an instance of your class is on the right hand side of the addition.
The [docs](https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types) say:
> These methods are called to implement the binary arithmetic operations
> (+, -, \*, @, /, //, %, divmod(), pow(), \*\*, <<, >>, &, ^, |) with
> reflected (swapped) operands. These functions are only called if the
> left operand does not support the corresponding operation and the
> operands are of different types. [2](https://docs.python.org/3/library/constants.html?highlight=notimplemented#NotImplemented) For instance, to evaluate the
> expression x - y, where y is an instance of a class that has an
> **rsub**() method, y.**rsub**(x) is called if x.**sub**(y) returns NotImplemented.
Example:
```
class tInt(int):
    def __add__(self, other):
        if isinstance(other, str):
            return str(self) + str(other)
        elif isinstance(other, int):
            return int(self) + other
        elif isinstance(other, float):
            return float(self) + float(other)
        else:
            return NotImplemented

    def __radd__(self, other):
        return self.__add__(other)

a = tInt(2)
for x in ["5", 5, 5.0]:
    print (a + x)
    print (x + a)
25
25
7
7
7.0
7.0
```
As @chepner pointed out in the comments, returning [NotImplemented](https://docs.python.org/3/library/constants.html?highlight=notimplemented#NotImplemented) for cases that your method doesn't handle will cause Python to try other ways of performing the operation, or raise a TypeError if there is no way to perform the requested operation. |
set() not removing duplicates | 36,750,621 | 2 | 2016-04-20T17:07:10Z | 36,750,749 | 11 | 2016-04-20T17:13:26Z | [
"python",
"regex",
"list",
"python-3.x",
"set"
] | I'm trying to find unique instances of IP addresses in a file using regex. I find them fine and try to append them to a list and later try to use `set()` on my list to remove duplicates. I'm finding each item okay and there are duplicates but I can't get the list to simplify. The output of printing my set is the same as printing ips as a list, nothing is removed.
```
ips = [] # make a list
count = 0
count1 = 0
for line in f: #loop through file line by line
    match = re.search("\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", line) #find IPs
    if match: #if there's a match, append and keep track of the total number of IPs
        ips.append(match) #append to list
        count = count + 1
ipset = set(ips)
print(ipset, count)
```
This string `<_sre.SRE_Match object; span=(0, 13), match='137.43.92.119'>` shows up 60+ times in the output before and after trying to `set()` the list | You are not storing the matched strings. You are storing the [*`re.Match` objects*](https://docs.python.org/3/library/re.html#match-objects). These don't compare equal even if they matched the same text, so they are all seen as unique by a `set` object:
```
>>> import re
>>> line = '137.43.92.119\n'
>>> match1 = re.search("\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", line)
>>> match1
<_sre.SRE_Match object; span=(0, 13), match='137.43.92.119'>
>>> match2 = re.search("\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", line)
>>> match2
<_sre.SRE_Match object; span=(0, 13), match='137.43.92.119'>
>>> match1 == match2
False
```
Extract the matched text instead:
```
ips.append(match.group()) #append to list
```
[`matchobj.group()`](https://docs.python.org/3/library/re.html#re.match.group) without arguments returns the part of the string that was matched (group 0):
```
>>> match1.group()
'137.43.92.119'
>>> match1.group() == match2.group()
True
``` |
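Putting that together, here is a sketch of the fixed loop over some made-up sample lines (`re.findall` is another option, since it returns the matched strings directly):

```python
import re

lines = ['137.43.92.119 first\n', '10.0.0.1 second\n', '137.43.92.119 again\n']

ips = []
for line in lines:
    match = re.search(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", line)
    if match:
        ips.append(match.group())  # store the matched text, not the Match object

print(len(ips), len(set(ips)))  # 3 matches, 2 unique addresses
```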
Finding ContiguousCount of items in list? | 36,762,673 | 3 | 2016-04-21T07:29:35Z | 36,762,749 | 8 | 2016-04-21T07:32:47Z | [
"python",
"list",
"counter",
"contiguous"
] | Given a list:
```
>>> l = ['x', 'x', 'y', 'y', 'x']
```
I could get the count of the list by using `collections.Counter`:
```
>>> from collections import Counter
>>> Counter(l)
Counter({'x': 3, 'y': 2})
```
**How can I count contiguous items instead of the global count of the elements in the list?** E.g.
```
>>> l = ['x', 'x', 'y', 'y', 'x']
>>> ContiguousCounter(l)
[('x',2), ('y',2), ('x', 1)]
>>> l = ['x', 'x', 'y', 'y', 'x', 'x', 'x', 'y']
>>> ContiguousCounter(l)
[('x',2), ('y',2), ('x', 3), ('y', 1)]
``` | You could use built-in [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) function:
```
In [3]: from itertools import groupby
In [4]: l = ['x', 'x', 'y', 'y', 'x']
In [5]: list(groupby(l))
Out[5]:
[('x', <itertools._grouper at 0x7fd94716f1d0>),
('y', <itertools._grouper at 0x7fd94716f208>),
('x', <itertools._grouper at 0x7fd94716f240>)]
In [6]: [(x, len(list(g))) for x, g in groupby(l)]
Out[6]: [('x', 2), ('y', 2), ('x', 1)]
``` |
Why are Python's arrays slow? | 36,778,568 | 129 | 2016-04-21T19:16:49Z | 36,778,655 | 176 | 2016-04-21T19:20:51Z | [
"python",
"arrays",
"performance",
"boxing",
"python-internals"
] | I expected `array.array` to be faster than lists, as arrays seem to be unboxed.
However, I get the following result:
```
In [1]: import array
In [2]: L = list(range(100000000))
In [3]: A = array.array('l', range(100000000))
In [4]: %timeit sum(L)
1 loop, best of 3: 667 ms per loop
In [5]: %timeit sum(A)
1 loop, best of 3: 1.41 s per loop
In [6]: %timeit sum(L)
1 loop, best of 3: 627 ms per loop
In [7]: %timeit sum(A)
1 loop, best of 3: 1.39 s per loop
```
What could be the cause of such a difference? | The *storage* is "unboxed", but every time you access an element Python has to "box" it (embed it in a regular Python object) in order to do anything with it. For example, your `sum(A)` iterates over the array, and boxes each integer, one at a time, in a regular Python `int` object. That costs time. In your `sum(L)`, all the boxing was done at the time the list was created.
So, in the end, an array is generally slower, but requires substantially less memory.
---
Here's the relevant code from a recent version of Python 3, but the same basic ideas apply to all CPython implementations since Python was first released.
Here's the code to access a list item:
```
PyObject *
PyList_GetItem(PyObject *op, Py_ssize_t i)
{
    /* error checking omitted */
    return ((PyListObject *)op) -> ob_item[i];
}
```
There's very little to it: `somelist[i]` just returns the `i`'th object in the list (and all Python objects in CPython are pointers to a struct whose initial segment conforms to the layout of a `struct PyObject`).
And here's the `__getitem__` implementation for an `array` with type code `l`:
```
static PyObject *
l_getitem(arrayobject *ap, Py_ssize_t i)
{
    return PyLong_FromLong(((long *)ap->ob_item)[i]);
}
```
The raw memory is treated as a vector of platform-native `C` `long` integers; the `i`'th `C long` is read up; and then `PyLong_FromLong()` is called to wrap ("box") the native `C long` in a Python `long` object (which, in Python 3, which eliminates Python 2's distinction between `int` and `long`, is actually shown as type `int`).
This boxing has to allocate new memory for a Python `int` object, and spray the native `C long`'s bits into it. In the context of the original example, this object's lifetime is very brief (just long enough for `sum()` to add the contents into a running total), and then more time is required to deallocate the new `int` object.
This is where the speed difference comes from, always has come from, and always will come from in the CPython implementation. |
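The flip side of the tradeoff, the memory savings, can be observed directly with `sys.getsizeof` (exact sizes vary by platform and Python version, so only the comparison is shown):

```python
import array
import sys

n = 100000
L = list(range(n))
A = array.array('l', range(n))

# the list holds n pointers plus n boxed int objects;
# the array holds n raw C longs inline
list_size = sys.getsizeof(L) + sum(sys.getsizeof(x) for x in L)
array_size = sys.getsizeof(A)

print(list_size > array_size)  # True: the boxed version needs far more memory
```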
Why are Python's arrays slow? | 36,778,568 | 129 | 2016-04-21T19:16:49Z | 36,781,207 | 76 | 2016-04-21T21:58:49Z | [
"python",
"arrays",
"performance",
"boxing",
"python-internals"
] | I expected `array.array` to be faster than lists, as arrays seem to be unboxed.
However, I get the following result:
```
In [1]: import array
In [2]: L = list(range(100000000))
In [3]: A = array.array('l', range(100000000))
In [4]: %timeit sum(L)
1 loop, best of 3: 667 ms per loop
In [5]: %timeit sum(A)
1 loop, best of 3: 1.41 s per loop
In [6]: %timeit sum(L)
1 loop, best of 3: 627 ms per loop
In [7]: %timeit sum(A)
1 loop, best of 3: 1.39 s per loop
```
What could be the cause of such a difference? | To add to Tim Peters' excellent answer, arrays implement the [buffer protocol](https://docs.python.org/3/c-api/buffer.html), while lists do not. This means that, *if you are writing a C extension* (or the moral equivalent, such as writing a [Cython](http://cython.org) module), then you can access and work with the elements of an array much faster than anything Python can do. This will give you considerable speed improvements, possibly well over an order of magnitude. However, it has a number of downsides:
1. You are now in the business of writing C instead of Python. Cython is one way to ameliorate this, but it does not eliminate many fundamental differences between the languages; you need to be familiar with C semantics and understand what it is doing.
2. PyPy's C API works [to some extent](http://morepypy.blogspot.ie/2010/04/using-cpython-extension-modules-with.html), but isn't very fast. If you are targeting PyPy, you should probably just write simple code with regular lists, and then let the JITter optimize it for you.
3. C extensions are harder to distribute than pure Python code because they need to be compiled. Compilation tends to be architecture and operating-system dependent, so you will need to ensure you are compiling for your target platform.
Going straight to C extensions may be using a sledgehammer to swat a fly, depending on your use case. You should first investigate [NumPy](http://docs.scipy.org/doc/numpy/about.html) and see if it is powerful enough to do whatever math you're trying to do. It will also be much faster than native Python, if used correctly. |
F# Equivalent of Python range | 36,780,574 | 3 | 2016-04-21T21:12:09Z | 36,780,819 | 8 | 2016-04-21T21:29:10Z | [
"python",
"f#"
] | I've started learning F#, and one thing I've run into is I don't know any way to express the equivalent of the `range` function in Python. I know `[1..12]` is the equivalent of range(1,13). But what I want to be able to do is `range(3, 20, 2)` (I know Haskell has `[3,5..19]`). How can I express this? | ```
seq { 3 .. 2 .. 20 }
```
results in
```
3 5 7 9 11 13 15 17 19
```
<https://msdn.microsoft.com/en-us/library/dd233209.aspx> |
how to properly overload the __add__ method in python | 36,785,417 | 9 | 2016-04-22T05:18:56Z | 36,785,681 | 9 | 2016-04-22T05:37:31Z | [
"python",
"class",
"overloading"
] | I am required to write a class involving dates. I am supposed to overload the + operator to allow days being added to dates. To explain how it works: A Date object is represented as (2016,4,15) in the format year,month, date. Adding integer 10 to this should yield (2016,4,25). The Date class has values self.year,self.month,self.day
My problem is that the code is supposed to work in the form (Date+10) as well as (10+Date). Also, Date - 1 should work in the sense of adding a negative number of days: Date(2016,4,25) - 1 returns Date(2016,4,24).
My code works perfectly in the form of (Date+10) but not in the form (10+D) or (D-1).
```
def __add__(self,value):
    if type(self) != int and type(self) != Date or (type(value) != int and type(value) != Date):
        raise TypeError
    if type(self) == Date:
        day = self.day
        month = self.month
        year = self.year
        value = value
    if type(value) != int:
        raise TypeError
    days_to_add = value
    while days_to_add > 0:
        day+=1
        if day == Date.days_in(year,month):
            month+=1
            if month > 12:
                day = 0
                month = 1
                year+=1
            day = 0
        days_to_add -=1
    return(Date(year,month,day))
```
These are the errors I get
TypeError: unsupported operand type(s) for +: 'int' and 'Date'
TypeError: unsupported operand type(s) for -: 'Date' and 'int' | [`__radd__`](http://www.python-course.eu/python3_magic_methods.php) handles right side addition so you need to implement that as well.
I am seeing some flaws in your implementation so I recommend you using [`datetime`](https://docs.python.org/2/library/datetime.html) module *(especially **datetime.timedelta** class)* to at least handle basic date arithmetic correctly:
```
import datetime
class Date(object):
    def __init__(self, year, month, day):
        self.year = year
        self.month = month
        self.day = day

    def as_date(self):
        return datetime.date(self.year, self.month, self.day)

    def __add__(self, other):
        if isinstance(other, int):
            date = self.as_date() + datetime.timedelta(days=other)
            return Date(date.year, date.month, date.day)
        else:
            raise ValueError("int value is required")

    def __radd__(self, other):
        return self.__add__(other)

    def __sub__(self, other):
        return self.__add__(-other)

    def __rsub__(self, other):
        raise RuntimeError("Doesn't make sense.")

    def __repr__(self):
        return str(self.as_date())
```
---
Demo:
```
>>> date = Date(2015, 10, 23)
>>> print date + 10 # __add__ is called
2015-11-02
>>> print 20 + date # __radd__ is called
2015-11-12
>>> print date - 25 # __sub__ is called
2015-09-28
>>> print 25 - date # __rsub__ is called
RuntimeError: Doesn't make sense
``` |
How to erase line from text file in Python? | 36,785,789 | 6 | 2016-04-22T05:45:00Z | 36,785,843 | 9 | 2016-04-22T05:48:16Z | [
"python",
"io"
] | I'm trying to write code to rewrite a specific line in a .txt file.
I can write to the line I want, but I can't erase the previous text on the line.
Here is my code:
(I'm trying a couple of things)
```
def writeline(file,n_line, text):
    f=open(file,'r+')
    count=0
    for line in f:
        count=count+1
        if count==n_line :
            f.write(line.replace(str(line),text))
            #f.write('\r'+text)
```
You can use this code to make a test file for testing:
```
with open('writetest.txt','w') as f:
    f.write('1 \n2 \n3 \n4 \n5')
writeline('writetest.txt',4,'This is the fourth line')
```
Edit: For some reason, if I use 'if count==5:' the code runs OK (even though it doesn't erase the previous text), but if I do 'if count==n\_line:', the file ends up with a lot of garbage.
The answers work, but I would like to know what the problems with my code are, and why I can't read and write. Thanks! | You are reading from the file and also writing to it. Don't do that. Instead, you should write to a [`NamedTemporaryFile`](https://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) and then [`rename`](https://docs.python.org/2/library/os.html#os.rename) it over the original file after you finish writing and close it.
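A sketch of that temp-file approach (the function and file names here are illustrative; `os.replace` is Python 3.3+ and performs the rename atomically on both POSIX and Windows):

```python
import os
import tempfile

def editline(path, n_line, text):
    # write the modified content to a temp file in the same directory,
    # then swap it over the original in one step
    dirname = os.path.dirname(os.path.abspath(path))
    with open(path) as infile, tempfile.NamedTemporaryFile(
            'w', dir=dirname, delete=False) as outfile:
        for i, line in enumerate(infile):
            outfile.write(text + '\n' if i == n_line else line)
        tmpname = outfile.name
    os.replace(tmpname, path)
```

Writing the temp file in the same directory matters: `os.replace` can fail across filesystems.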
Or if the size of the file is guaranteed to be small, you can use `readlines()` to read all of it, then close the file, modify the line you want, and write it back out:
```
def editline(file,n_line,text):
    with open(file) as infile:
        lines = infile.readlines()
    lines[n_line] = text+' \n'
    with open(file, 'w') as outfile:
        outfile.writelines(lines)
``` |
How to display full output in Jupyter, not only last result? | 36,786,722 | 4 | 2016-04-22T06:43:02Z | 36,835,741 | 8 | 2016-04-25T08:44:44Z | [
"python",
"ipython",
"jupyter"
] | I want Jupyter to print all the interactive output without resorting to print, not only the last result. How to do it?
Example :
```
a=3
a
a+1
```
I would like to display
> 3
> 4 | Thanks to Thomas, here is the solution I was looking for:
```
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
``` |
What exactly is __weakref__ in Python? | 36,787,603 | 22 | 2016-04-22T07:28:38Z | 36,788,031 | 7 | 2016-04-22T07:52:29Z | [
"python",
"python-3.x"
] | Surprisingly, there's no explicit documentation for `__weakref__`. Weak references are explained [here](https://docs.python.org/2/library/weakref.html). `__weakref__` is also shortly mentioned in the documentation of `__slots__`. But I could not find anything about `__weakref__` itself.
What exactly is `__weakref__`?
- Is it just a member acting as a flag: If present, the object may be weakly-referenced?
- Or is it a function/variable that can be overridden/assigned to get a desired behavior? How? | The `__weakref__` variable is an attribute that enables an object to support weak references; it holds the weak references to the object.
As you mentioned, the Python documentation explains `weakref` [here](https://docs.python.org/2/library/weakref.html):
> when the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else.
So the point of weak references is to let you refer to an object without preventing it from being garbage collected, regardless of its type and scope.
And about the `__slots__` the documentation explains them very well:
> By default, instances of classes have a dictionary for attribute storage. This wastes space for objects having very few instance variables. The space consumption can become acute when creating large numbers of instances.
>
> The default can be overridden by defining `__slots__` in a class definition. The `__slots__` declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because `__dict__` is not created for each instance.
So by using `__slots__` you control the storage required for your attributes, which prevents the automatic creation of `__dict__` and `__weakref__` for each instance; and `__weakref__` is the variable each object needs in order to support weak references.
As the documentation for `object.__slots__` class says:
> This class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. `__slots__` reserves space for the declared variables and prevents the automatic creation of `__dict__` and `__weakref__` for each instance.
**In a nutshell**: `__slots__` lets you manage storage allocation manually, and since `__weakref__` is what permits weak references to an object (a storage-related concern, because weak references tie into garbage collection), `__slots__` controls the creation of `__weakref__` just as it controls the `__dict__` attribute.
The documentation also shows how to make an object support weak references while using `__slots__`:
> Without a `__weakref__` variable for each instance, classes defining `__slots__` do not support weak references to its instances. If weak reference support is needed, then add `'__weakref__'` to the sequence of strings in the `__slots__` declaration.
Here is an example in python 3.X:
```
>>> class Test:
... __slots__ = ['a', 'b']
...
>>>
>>> import weakref
>>>
>>> t = Test()
>>>
>>> r = weakref.ref(t)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create weak reference to 'Test' object
>>>
>>> class Test:
... __slots__ = ['a', 'b', '__weakref__']
...
>>> t = Test()
>>> r = weakref.ref(t)
>>>
>>> t.__weakref__
<weakref at 0x7f735bc55d68; to 'Test' at 0x7f735bc51fc8>
```
But in Python 2.7, although the documentation says the same thing, creating a weak reference to an instance whose class doesn't include `__weakref__` in its `__slots__` doesn't raise a `TypeError` (note that `class Test:` is an old-style class there, and old-style classes ignore `__slots__` entirely):
```
>>> class Test:
... __slots__ = ['a', 'b']
...
>>> t = Test()
>>>
>>> r = weakref.ref(t)
>>>
>>> r
<weakref at 0x7fe49f4185d0; to 'instance' at 0x7fe4a3e75f80>
``` |
What exactly is __weakref__ in Python? | 36,787,603 | 22 | 2016-04-22T07:28:38Z | 36,788,452 | 9 | 2016-04-22T08:12:59Z | [
"python",
"python-3.x"
] | Surprisingly, there's no explicit documentation for `__weakref__`. Weak references are explained [here](https://docs.python.org/2/library/weakref.html). `__weakref__` is also shortly mentioned in the documentation of `__slots__`. But I could not find anything about `__weakref__` itself.
What exactly is `__weakref__`?
- Is it just a member acting as a flag: If present, the object may be weakly-referenced?
- Or is it a function/variable that can be overridden/assigned to get a desired behavior? How? | [Edit 1: Explain the linked list nature and when weakrefs are re-used]
Interestingly enough, the [official documentation](https://docs.python.org/3/reference/datamodel.html#slots) is somewhat non-enlightening on this topic:
> Without a `__weakref__` variable for each instance, classes defining `__slots__` do not support weak references to its instances. If weak reference support is needed, then add `__weakref__` to the sequence of strings in the `__slots__` declaration.
The [`type` object documentation](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_weaklistoffset) on the topic does not seem to help things along too much:
> When a typeâs `__slots__` declaration contains a slot named `__weakref__`, that slot becomes the weak reference list head for instances of the type, and the slotâs offset is stored in the typeâs `tp_weaklistoffset`.
Weak references form a linked list. The head of that list (the first weak reference to an object) is available via `__weakref__`. Weakrefs are re-used whenever possible, so the list (not a Python list!) typically is either empty or contains a single element.
**Example**:
When you first use `weakref.ref()`, you create a new weak reference chain for the target object. The head of this chain is the new weakref and gets stored in the target object's `__weakref__`:
```
>>> a = A()
>>> b = weakref.ref(a)
>>> c = weakref.ref(b)
>>> print(b is c is a.__weakref__)
True
```
As we can see, `b` is re-used. We can force python to create a new weakref, by e.g. adding a callback parameter:
```
>>> def callback():
>>> pass
>>> a = A()
>>> b = weakref.ref(a)
>>> c = weakref.ref(b, callback)
>>> print(b is c is a.__weakref__)
False
```
Now `b is a.__weakref__`, and `c` is the second reference in the chain. The reference chain is not directly accessible from Python code. We see only the head element of the chain (`b`), but not how the chain continues (`b` -> `c`).
So `__weakref__` is the head of the internal linked list of all the weak references to the object. I cannot find any piece of official documentation where this role of `__weakref__` is concisely explained, so one should probably not rely on this behavior, as it is an implementation detail. |
What exactly is __weakref__ in Python? | 36,787,603 | 22 | 2016-04-22T07:28:38Z | 36,789,779 | 9 | 2016-04-22T09:18:11Z | [
"python",
"python-3.x"
] | Surprisingly, there's no explicit documentation for `__weakref__`. Weak references are explained [here](https://docs.python.org/2/library/weakref.html). `__weakref__` is also shortly mentioned in the documentation of `__slots__`. But I could not find anything about `__weakref__` itself.
What exactly is `__weakref__`?
- Is it just a member acting as a flag: If present, the object may be weakly-referenced?
- Or is it a function/variable that can be overridden/assigned to get a desired behavior? How? | `__weakref__` is just an opaque object that references all the weak references to the current object. In actual fact it's an instance of `weakref` (or sometimes `weakproxy`) which is both a weak reference to the object and part of a doubly linked list to all weak references for that object.
It's just an implementation detail that allows the garbage collector to inform a weak reference that its referent has been collected, so that access to the underlying pointer is no longer allowed.
The weak reference can't rely on checking the reference count of the object it refers to. This is because that memory may have been reclaimed and may now be in use by another object. Best case, the VM will crash; worst case, the weak reference will allow access to an object it wasn't originally referring to. This is why the garbage collector must inform the weak reference that its referent is no longer valid.
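That notification is observable from Python: the optional callback passed to `weakref.ref` fires when the referent goes away, and the reference then dereferences to `None` (shown here on CPython, where dropping the last strong reference collects the object immediately):

```python
import weakref

class Thing:
    pass

events = []

t = Thing()
r = weakref.ref(t, lambda ref: events.append('collected'))

print(r() is t)   # True: the referent is still alive
del t             # drop the only strong reference
print(r())        # None: the weakref has been cleared
print(events)     # ['collected']: the callback was invoked
```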
See [weakrefobject.h](https://github.com/python-git/python/blob/715a6e5035bb21ac49382772076ec4c630d6e960/Include/weakrefobject.h) for the structure and C-API for this object. And the implementation detail is [here](https://github.com/python-git/python/blob/715a6e5035bb21ac49382772076ec4c630d6e960/Objects/weakrefobject.c) |
Boring Factorials in python | 36,792,027 | 7 | 2016-04-22T11:00:59Z | 36,792,665 | 7 | 2016-04-22T11:29:14Z | [
"python",
"algorithm"
] | I am trying to understand and solve the following problem :
> > Sameer and Arpit want to overcome their fear of Maths and so they have been recently practicing Maths problems a lot. Aman, their friend
> > has been helping them out. But as it goes, Sameer and Arpit have got
> > bored of problems involving factorials. Reason being, the factorials
> > are too easy to calculate in problems as they only require the residue
> > modulo some prime and that is easy to calculate in linear time. So to
> > make things interesting for them, Aman - The Mathemagician, gives them
> > an interesting task. He gives them a prime number P and an integer N
> > close to P, and asks them to find N! modulo P. He asks T such queries.
>
> **Input :**
>
> First line contains an integer T, the number of queries asked.
>
> Next T lines contains T queries of the form âN Pâ. (quotes for
> clarity)
>
> **Output:**
>
> Output exactly T lines, containing N! modulo P.
>
> ```
> Example
> Input:
> 3
>
> 2 5
>
> 5 11
>
> 21 71
>
> Output:
> 2
>
> 10
>
> 6
>
>
>
> Constraints:
>
> 1 <= T <= 1000
>
> 1 < P <= 2*10^9
>
> 1 <= N <= 2*10^9
>
>
> Abs(N-P) <= 1000
> ```
now to this I wrote a solution :
```
def factorial(c):
    n1=1
    n2=2
    num=1
    while num!=c:
        n1=(n1)*(n2)
        n2+=1
        num+=1
    return n1

for i in range(int(raw_input())):
    n,p=map(int,raw_input().split())
    print factorial(n)%p
```
but as you can see this is an inefficient solution, so I started searching for a better one. That is when I came to know that this can be solved using Wilson's and Fermat's theorems, but I am unable to understand what the author is trying to say.
He says:
In number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if
[](http://i.stack.imgur.com/ylJe3.png)
Now from this we can write:
```
(p-1)! ≡ -1 (mod p)
1*2*3*.........*(n-1)*(n)*..............*(p-1) ≡ -1 (mod p)
n!*(n+1)*...........*(p-1) ≡ -1 (mod p)
n! ≡ -1*[(n+1)*...............(p-2)*(p-1)]^-1 (mod p)
let a=[(n+1)*...............(p-2)*(p-1)]
so
n! ≡ -1*a^-1 (mod p)
From Fermat's Theorem:
a^(p-1) ≡ 1 (mod p)
multiply both sides by a^-1
a^(p-2) ≡ a^-1 (mod p)
now simply we have to find a^(p-2) mod p
```
so I implemented this:
```
def factorial1(n,p): # to calculate a=[(n+1)*...............(p-2)*(p-1)]
    n0=n+1
    n1=n0+1
    while n1<=(p-1):
        n0=n1*n0
        n1+=1
    return n0

# print nf(2,5)
for i in range(10):
    n,p=map(int,raw_input().split())
    if n>p:
        print 0
    elif n==p-1:
        print p-1
    else:
        print (factorial1(n,p)**(p-2))%p #a^(p-2) mod p
```
But from the output which I am getting, I think I misunderstood what he wrote. Can someone tell me what he is telling me to calculate, and how do I write the code for what he is saying? | This is not a straightforward application of Wilson's theorem. Along with it, use the following facts:
* if `n >= p` then `n! = 0 (mod p)`
* if `n < p` then `n! = (p-1)!/[(n+1)(n+2)..(p-1)]`. Now use the fact that `(p-1)! = -1 (mod p)`. All that is left is to find the [modular multiplicative inverse](https://en.wikipedia.org/wiki/Modular_multiplicative_inverse) (using the [extended Euclidean algorithm](https://en.wikipedia.org/wiki/Modular_multiplicative_inverse#Extended_Euclidean_algorithm) for example) of the numbers `n+1, n+2, ... , p-1`, of which there are at most `1000` thanks to the fact that `abs(n-p) <= 1000`. Multiply `(p-1)! = -1 (mod p)` by all the modular multiplicative inverses of the numbers `n+1, n+2, ... , p-1` and you get the answer. (As John Coleman pointed out, you could also take the inverse of the whole product instead of the product of the inverses, as an optimization.)
In your case `n=2, p=5` (just to see how it works)
```
n! = 2! = 4!/(3*4) = (-1)*2*4 = 2 (mod 5)
# 2 is modular inverse of 3 since 2*3 = 1 (mod 5)
# 4 is modular inverse of 4 since 4*4 = 1 (mod 5)
``` |
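A sketch that puts those two facts together (using Python's three-argument `pow` for the modular inverse, which works here because `p` is prime, by Fermat's little theorem):

```python
def factorial_mod(n, p):
    # p is assumed prime and abs(n - p) small, per the problem constraints
    if n >= p:
        return 0
    # Wilson's theorem: (p-1)! = -1 = p-1 (mod p)
    result = p - 1
    # divide out the factors (n+1)(n+2)...(p-1) via modular inverses
    for k in range(n + 1, p):
        result = result * pow(k, p - 2, p) % p
    return result

print(factorial_mod(2, 5))    # 2
print(factorial_mod(5, 11))   # 10
print(factorial_mod(21, 71))  # 6
```

The three sample outputs match the problem statement's expected results.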
Python linebreak '\n' is not working when I include something at end | 36,792,582 | 2 | 2016-04-22T11:25:05Z | 36,792,708 | 7 | 2016-04-22T11:31:15Z | [
"python",
"line-breaks"
] | The Python linebreak command \n doesn't work for me on Python 2.7 when I include something in the statement, like an int or a numpy array. Is there a way to do this? Here are some examples:
```
print("These \n linebreaks don't work:\n", 1)
"These \n linebreaks don't work:\n", 1
print("These \n work fine\n")
These
work fine
``` | If you want to use `print` like a function, import the one from Python3.
```
>>> from __future__ import print_function
>>> print("These \n linebreaks don't work:\n", 1)
These
linebreaks don't work:
1
```
Now they actually do and you won't have to change anything. |
PEP8 E226 recommendation | 36,794,533 | 5 | 2016-04-22T12:58:57Z | 36,794,875 | 7 | 2016-04-22T13:14:34Z | [
"python",
"pep8"
] | The [E226](http://pep8.readthedocs.org/en/latest/intro.html#error-codes) error code is about *"missing whitespace around arithmetic operator"*.
I use [Anaconda](http://damnwidget.github.io/anaconda/)'s package in Sublime, which will flag, for example, this line as a PEP8 E226 violation:
```
hypot2 = x*x + y*y
```
But in [Guido's PEP8 style guide](https://www.python.org/dev/peps/pep-0008/) that line is actually shown as an example of [recommended use](https://www.python.org/dev/peps/pep-0008/#other-recommendations) of spaces within operators.
Question: which is the correct guideline? Always spaces around operators or just in some cases (as Guido's recommendation shows)?
Also: who decides what goes into PEP8? I would've thought Guido's recommendation would pretty much determine how that works. | The maintainers of the PEP8 tool decide what goes into it.
As you noticed, these do not always match the PEP8 style guide exactly. In this particular case, I don't know whether it's an oversight by the maintainers, or a deliberate decision. You'd have to ask them to find out, or you might find the answer in the commit history.
Guido recently asked the maintainers of pep8 and pep257 tools to rename them, to avoid this confusion. [See this issue for example](https://github.com/PyCQA/pycodestyle/issues/466). As a result, the tools are getting renamed to pycodestyle and pydocstyle, respectively. |
On what CPU cores are my Python processes running? | 36,795,086 | 19 | 2016-04-22T13:23:32Z | 36,799,994 | 9 | 2016-04-22T17:29:38Z | [
"python",
"multithreading",
"python-3.x",
"multiprocessing"
] | **The setup**
I have written a pretty complex piece of software in Python (on a Windows PC). My software starts basically two Python interpreter shells. The first shell starts up (I suppose) when you double click the `main.py` file. Within that shell, other threads are started in the following way:
```
# Start TCP_thread
TCP_thread = threading.Thread(name = 'TCP_loop', target = TCP_loop, args = (TCPsock,))
TCP_thread.start()
# Start UDP_thread
UDP_thread = threading.Thread(name = 'UDP_loop', target = UDP_loop, args = (UDPsock,))
UDP_thread.start()
```
The `Main_thread` starts a `TCP_thread` and a `UDP_thread`. Although these are separate threads, they all run within one single Python shell.
The `Main_thread` also starts a subprocess. This is done in the following way:
```
p = subprocess.Popen(['python', mySubprocessPath], shell=True)
```
From the Python documentation, I understand that this subprocess is running *simultaneously (!)* in a separate Python interpreter session/shell. The `Main_thread` in this subprocess is completely dedicated to my GUI. The GUI starts a `TCP_thread` for all its communications.
I know that things get a bit complicated. Therefore I have summarized the whole setup in this figure:
[](http://i.stack.imgur.com/XFN5o.png)
---
I have several questions concerning this setup. I will list them down here:
**Question 1** [*Solved*]
Is it true that a Python interpreter uses only one CPU core at a time to run all the threads? In other words, will the `Python interpreter session 1` (from the figure) run all 3 threads (`Main_thread`, `TCP_thread` and `UDP_thread`) on one CPU core?
*Answer: yes, this is true. The GIL (Global Interpreter Lock) ensures that all threads run on one CPU core at a time.*
**Question 2** [*Not yet solved*]
Do I have a way to track which CPU core it is?
**Question 3** [*Partly solved*]
For this question we forget about *threads*, but we focus on the *subprocess* mechanism in Python. Starting a new subprocess implies starting up a new Python interpreter **instance**. Is this correct?
*Answer: Yes this is correct. At first there was some confusion about whether the following code would create a new Python interpreter instance:*
```
p = subprocess.Popen(['python', mySubprocessPath], shell = True)
```
*The issue has been clarified. This code indeed starts a new Python interpreter instance.*
Will Python be smart enough to make that separate Python interpreter instance run on a different CPU core? Is there a way to track which one, perhaps with some sporadic print statements as well?
**Question 4** [*New question*]
The community discussion raised a new question. There are apparently two approaches when spawning a new process (within a new Python interpreter instance):
```
# Approach 1(a)
p = subprocess.Popen(['python', mySubprocessPath], shell = True)
# Approach 1(b) (J.F. Sebastian)
p = subprocess.Popen([sys.executable, mySubprocessPath])
# Approach 2
p = multiprocessing.Process(target=foo, args=(q,))
```
The second approach has the obvious downside that it targets just a function - whereas I need to open up a new Python script. Anyway, are both approaches similar in what they achieve? | > **Q:** Is it true that a Python interpreter uses only one CPU core at a time to run all the threads?
No. GIL and CPU affinity are unrelated concepts. The GIL can be released during blocking I/O operations and during long CPU-intensive computations inside a C extension.
If a thread is blocked on the GIL, it is probably not on any CPU core, and therefore it is fair to say that pure Python multithreading code may use only one CPU core at a time on the CPython implementation.
> **Q:** In other words, will the Python interpreter session 1 (from the figure) run all 3 threads (Main\_thread, TCP\_thread and UDP\_thread) on one CPU core?
I don't think CPython manages CPU affinity implicitly. It likely relies on the OS scheduler to choose where to run a thread. Python threads are implemented on top of real OS threads.
> **Q:** Or is the Python interpreter able to spread them over multiple cores?
To find out the number of usable CPUs:
```
>>> import os
>>> len(os.sched_getaffinity(0))
16
```
Again, whether or not threads are scheduled on different CPUs does not depend on Python interpreter.
> **Q:** Suppose that the answer to Question 1 is 'multiple cores', do I have a way to track on which core each thread is running, perhaps with some sporadic print statements? If the answer to Question 1 is 'only one core', do I have a way to track which one it is?
I imagine, a specific CPU may change from one time-slot to another. You could [look at something like `/proc/<pid>/task/<tid>/status` on old Linux kernels](http://stackoverflow.com/q/8032372/4279). On my machine, [`task_cpu` can be read from `/proc/<pid>/stat` or `/proc/<pid>/task/<tid>/stat`](https://www.kernel.org/doc/Documentation/filesystems/proc.txt):
```
>>> open("/proc/{pid}/stat".format(pid=os.getpid()), 'rb').read().split()[-14]
'4'
```
For a current portable solution, see whether [`psutil`](https://pythonhosted.org/psutil/) exposes such info.
You could restrict the current process to a set of CPUs:
```
os.sched_setaffinity(0, {0}) # current process on 0-th core
```
> **Q:** For this question we forget about threads, but we focus on the subprocess mechanism in Python. Starting a new subprocess implies starting up a new Python interpreter session/shell. Is this correct?
Yes. The `subprocess` module creates new OS processes. If you run the `python` executable then it starts a new Python interpreter. If you run a bash script then no new Python interpreter is created, i.e., running the `bash` executable does not start a new Python interpreter/session/etc.
> **Q:** Supposing that it is correct, will Python be smart enough to make that separate interpreter session run on a different CPU core? Is there a way to track this, perhaps with some sporadic print statements as well?
See above (i.e., OS decides where to run your thread and there could be OS API that exposes where the thread is run).
> `multiprocessing.Process(target=foo, args=(q,)).start()`
`multiprocessing.Process` also creates a new OS process (that runs a new Python interpreter).
> In reality, my subprocess is another file. So this example won't work for me.
Python uses modules to organize the code. If your code is in `another_file.py` then `import another_file` in your main module and pass `another_file.foo` to `multiprocessing.Process`.
> Nevertheless, how would you compare it to p = subprocess.Popen(..)? Does it matter if I start the new process (or should I say 'python interpreter instance') with subprocess.Popen(..) versus multiprocessing.Process(..)?
`multiprocessing.Process()` is likely implemented on top of `subprocess.Popen()`. `multiprocessing` provides API that is similar to `threading` API and it abstracts away details of communication between python processes (how Python objects are serialized to be sent between processes).
If there are no CPU intensive tasks then you could run your GUI and I/O threads in a single process. If you have a series of CPU intensive tasks then to utilize multiple CPUs at once, either use multiple threads with C extensions such as `lxml`, `regex`, `numpy` (or your own one created using [Cython](http://cython.org)) that can release GIL during long computations or offload them into separate processes (a simple way is to use a process pool such as provided by [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html)).
> **Q:** The community discussion raised a new question. There are apparently two approaches when spawning a new process (within a new Python interpreter instance):
>
> ```
> # Approach 1(a)
> p = subprocess.Popen(['python', mySubprocessPath], shell = True)
>
> # Approach 1(b) (J.F. Sebastian)
> p = subprocess.Popen([sys.executable, mySubprocessPath])
>
> # Approach 2
> p = multiprocessing.Process(target=foo, args=(q,))
> ```
*"Approach 1(a)"* is wrong on POSIX (though it may work on Windows). For portability, use *"Approach 1(b)"* unless you know you need `cmd.exe` (pass a string in this case, to make sure that the correct command-line escaping is used).
> The second approach has the obvious downside that it targets just a function - whereas I need to open up a new Python script. Anyway, are both approaches similar in what they achieve?
`subprocess` creates new processes, *any* processes e.g., you could run a bash script. `multprocessing` is used to run Python code in another process. It is more flexible to *import* a Python module and run its function than to run it as a script. See [Call python script with input with in a python script using subprocess](http://stackoverflow.com/q/30076185/4279). |
Upgraded to Ubuntu 16.04 now MySQL-python dependencies are broken | 36,796,167 | 8 | 2016-04-22T14:13:37Z | 36,835,229 | 10 | 2016-04-25T08:17:17Z | [
"python",
"mysql",
"ubuntu",
"pip",
"ubuntu-16.04"
] | I just upgraded my Ubuntu install to 16.04 and this seems to have broken my mysql dependencies in the MySQL-python package.
Here is my error message:
```
  File "/opt/monitorenv/local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 386, in create_engine
    return strategy.create(*args, **kwargs)
  File "/opt/monitorenv/local/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 75, in create
    dbapi = dialect_cls.dbapi(**dbapi_args)
  File "/opt/monitorenv/local/lib/python2.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 92, in dbapi
    return __import__('MySQLdb')
  File "/opt/monitorenv/local/lib/python2.7/site-packages/MySQLdb/__init__.py", line 19, in <module>
    import _mysql
ImportError: libmysqlclient.so.18: cannot open shared object file: No such file or directory
```
So basically `import _mysql` is looking for an `.so` file that doesn't exist, because in Ubuntu 16.04 I have `libmysqlclient20` installed.
And libmysqlclient18 is not available.
As far as I am aware (or at least I believe) my python libraries are up to date with the latest versions.
(I tried running `pip install --upgrade mysql-python` which indicated it was up to date).
Do you guys have any suggestions?
as stated in this thread : [Python's MySQLdb canât find libmysqlclient.dylib with Homebrewed MySQL](http://stackoverflow.com/questions/34536914/pythons-mysqldb-can-t-find-libmysqlclient-dylib-with-homebrewed-mysql) |
How to remove an extension to a blob caused by morphology | 36,800,444 | 8 | 2016-04-22T17:57:37Z | 36,801,788 | 8 | 2016-04-22T19:21:47Z | [
"python",
"opencv",
"image-processing",
"scipy"
] | I have an image that I'm eroding and dilating like so:
```
kernel = np.ones((5,5),np.float32)/1
eroded_img = cv2.erode(self.inpainted_adjusted_image, kernel, iterations=10)
dilated_img = cv2.dilate(eroded_img, kernel, iterations=10)
```
Here's the result of the erosion and dilation:
[](http://i.stack.imgur.com/HXSGM.png)
and then I'm taking a threshold of it like so:
```
self.thresh = cv2.threshold(dilated_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
```
But the threshold gives me an unwanted extension that I've marked in the image below (The region above the red line is the unwanted region):
[](http://i.stack.imgur.com/gJX7i.png)
How do I get rid of this unwanted region? Is there a better way to do what I'm doing? | Working with a different type of threshold (adaptive threshold, which takes local brigthness into account) will already get rid of your problem: The adaptive threshold result is what you are looking for.
[](http://i.stack.imgur.com/QWbWD.png)
[EDIT: I have taken the liberty of adding some code on Hough circles. I admit that I have played with the parameters for this single image to get a nice looking result, though I do not know what type of accuracy you are needing for such a type of problem]
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('image.png',0)
thresh = cv2.threshold(img, 210, 255, cv2.ADAPTIVE_THRESH_MEAN_C)[1]
canny = cv2.Canny(thresh,50,150)
cimg = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(canny,cv2.HOUGH_GRADIENT,1,20, param1=50,param2=23,minRadius=0,maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
    # draw the outer circle
    cv2.circle(cimg,(i[0],i[1]),i[2],(255,0,0),3)
    # draw the center of the circle
    cv2.circle(cimg,(i[0],i[1]),2,(0,0,255),3)
titles = ['Original Image', 'Adaptive Thresholding', "Canny", "Hough Circle"]
images = [img, thresh, canny, cimg]
for i in xrange(4):
    plt.subplot(2,2,i+1),plt.imshow(images[i],'gray')
    plt.title(titles[i])
    plt.xticks([]),plt.yticks([])
plt.show()
```
Let us know if this is not yet sufficient. |
Dictionary comprehension with lambda functions gives wrong results | 36,805,071 | 15 | 2016-04-23T00:09:09Z | 36,805,118 | 12 | 2016-04-23T00:16:40Z | [
"python",
"dictionary",
"lambda"
] | I tried the following code in Python 3.5.1:
```
>>> f = {x: (lambda y: x) for x in range(10)}
>>> f[5](3)
9
```
It's obvious that this should return `5`. I don't understand where the other value comes from, and I wasn't able to find anything.
It seems like it's something related to reference - it always returns the answer of `f[9]`, which is the last function assigned.
What's the error here, and how should this be done so that it works properly? | **Python scoping is lexical**. A closure will refer to the name and scope of the variable, not the actual object/value of the variable.
What happens is that each lambda is capturing the variable `x` *not* the value of `x`.
At the end of the loop the variable `x` is bound to 9, therefore **every lambda will refer to this `x` whose value is 9.**
Why @ChrisP's answer works:
> `make_func` forces the value of `x` to be evaluated (as it is passed
> into a function). Thus, the lambda is made with value of `x` currently
> and we avoid the above scoping issue.
```
def make_func(x):
    return lambda y: x

f = {x: make_func(x) for x in range(10)}
``` |
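An alternative workaround not shown above (a common idiom, offered here as a sketch) is to bind the current value of `x` as a default argument, since default values are evaluated when the lambda is created, not when it is called:

```python
# each lambda carries its own snapshot of x via the default argument
f = {x: (lambda y, x=x: x) for x in range(10)}

print(f[5](3))  # 5
```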
Python Recursive Search of Dict with Nested Keys | 36,808,260 | 6 | 2016-04-23T08:03:19Z | 36,808,400 | 7 | 2016-04-23T08:18:44Z | [
"python",
"list",
"dictionary",
"recursion",
"global"
] | I recently had to solve a problem in a real data system with a nested dict/list combination. I worked on this for quite a while and came up with a solution, but I am very unsatisfied. I had to resort to using `globals()` and a named temporary global parameter.
I do not like to use globals. That's just asking for an injection vulnerability. I feel that there must be a better way to perform this task without resorting to globals.
Problem Dataset:
```
d = {
    "k": 1,
    "stuff": "s1",
    "l": {"m": [
        {
            "k": 2,
            "stuff": "s2",
            "l": None
        },
        {
            "k": 3,
            "stuff": "s3",
            "l": {"m": [
                {
                    "k": 4,
                    "stuff": "s4",
                    "l": None
                },
                {
                    "k": 5,
                    "stuff": "s5",
                    "l": {"m": [
                        {
                            "k": 6,
                            "stuff": "s6",
                            "l": None
                        },
                    ]}
                },
            ]}
        },
    ]}
}
```
Desired Output:
```
[{'k': 1, 'stuff': 's1'},
{'k': 2, 'stuff': 's2'},
{'k': 3, 'stuff': 's3'},
{'k': 4, 'stuff': 's4'},
{'k': 5, 'stuff': 's5'},
{'k': 6, 'stuff': 's6'}]
```
My Solution:
```
def _get_recursive_results(d, iter_key, get_keys):
    if not 'h' in globals():
        global h
        h = []
    h.append({k:d.get(k) for k in get_keys})
    d2 = d.copy()
    for k in iter_key:
        if not d2:
            continue
        d2 = d2.get(k)
    for td in d2:
        d3 = td.copy()
        for k in iter_key:
            if not d3:
                continue
            d3 = d3.get(k)
        if d3:
            return _get_recursive_results(td, iter_key, get_keys)
        h.append({k:td.get(k) for k in get_keys})
    else:
        l = [k for k in h]
        del globals()['h']
        return l
```
Calling my function as follows returns the desired result:
```
_get_recursive_results(d, ['l','m'], ['k','stuff'])
```
How would I build a better solution? | This is a slightly modified version without using globals. Set `h` to `None`
as default and create a new list for the first call to `_get_recursive_results()`. Later provide `h` as an argument in the recursive calls to `_get_recursive_results()`:
```
def _get_recursive_results(d, iter_key, get_keys, h=None):
    if h is None:
        h = []
    h.append({k:d.get(k) for k in get_keys})
    d2 = d.copy()
    for k in iter_key:
        if not d2:
            continue
        d2 = d2.get(k)
    for td in d2:
        d3 = td.copy()
        for k in iter_key:
            if not d3:
                continue
            d3 = d3.get(k)
        if d3:
            return _get_recursive_results(td, iter_key, get_keys, h)
        h.append({k:td.get(k) for k in get_keys})
    else:
        l = [k for k in h]
        return l
```
Now:
```
>>> _get_recursive_results(d, ['l','m'], ['k','stuff'])
[{'k': 1, 'stuff': 's1'},
{'k': 2, 'stuff': 's2'},
{'k': 3, 'stuff': 's3'},
{'k': 4, 'stuff': 's4'},
{'k': 5, 'stuff': 's5'},
{'k': 6, 'stuff': 's6'}]
```
There is no need for the copying of intermediate dicts. This is a further modified version without copying:
```
def _get_recursive_results(d, iter_key, get_keys, h=None):
    if h is None:
        h = []
    h.append({k: d.get(k) for k in get_keys})
    for k in iter_key:
        if not d:
            continue
        d = d.get(k)
    for td in d:
        d3 = td
        for k in iter_key:
            if not d3:
                continue
            d3 = d3.get(k)
        if d3:
            return _get_recursive_results(td, iter_key, get_keys, h)
        h.append({k: td.get(k) for k in get_keys})
    else:
        return h
``` |
Calculate average of every x rows in a table and create new table | 36,810,595 | 5 | 2016-04-23T12:06:50Z | 36,810,658 | 7 | 2016-04-23T12:13:12Z | [
"python",
"python-3.x",
"pandas",
"numpy"
] | I have a long table of data (~200 rows by 50 columns) and I need to create a code that can calculate the mean values of every two rows and for each column in the table with the final output being a new table of the mean values. This is obviously crazy to do in Excel! I use python3 and I am aware of some similar questions:[here](http://stackoverflow.com/questions/30379311/fast-way-to-take-average-of-every-n-rows-in-a-npy-array), [here](http://stackoverflow.com/questions/22463609/calculating-an-average-for-every-x-number-of-lines) and [here](http://stackoverflow.com/questions/15956309/averaging-over-every-n-elements-of-a-numpy-array). But none of these helps as I need some elegant code to work with multiple columns and produces an organised data table. By the way my original datatable has been imported using pandas and is defined as a dataframe but could not find an easy way to do this in pandas. Help is much appreciated.
An example of the table (short version) is:
```
a b c d
2 50 25 26
4 11 38 44
6 33 16 25
8 37 27 25
10 28 48 32
12 47 35 45
14 8 16 7
16 12 16 30
18 22 39 29
20 9 15 47
```
Expected mean table:
```
a b c d
3 30.5 31.5 35
7 35 21.5 25
11 37.5 41.5 38.5
15 10 16 18.5
19 15.5 27 38
``` | You can create an artificial group using `df.index//2` (or as @DSM pointed out, using `np.arange(len(df))//2` - so that it works for all indices) and then use groupby:
```
df.groupby(np.arange(len(df))//2).mean()
Out[13]:
a b c d
0 3.0 30.5 31.5 35.0
1 7.0 35.0 21.5 25.0
2 11.0 37.5 41.5 38.5
3 15.0 10.0 16.0 18.5
4 19.0 15.5 27.0 38.0
``` |
Reading Very Large One Liner Text File | 36,820,605 | 5 | 2016-04-24T07:52:49Z | 36,820,782 | 10 | 2016-04-24T08:15:18Z | [
"python",
"python-3.x",
"numbers",
"text-files",
"large-files"
] | I have a 30MB .txt file, with ***one*** line of data *(30 Million Digit Number)*
Unfortunately, every method I've tried (`mmap.read()`, `readline()`, allocating 1GB of RAM, for loops) takes 45+ minutes to completely read the file.
Every method I found on the internet seems to work on the fact that each line is small, therefore the memory consumption is only as big as the biggest line in the file. Here's the code I've been using.
```
start = time.clock()
z = open('Number.txt','r+')
m = mmap.mmap(z.fileno(), 0)
global a
a = int(m.read())
z.close()
end = time.clock()
secs = (end - start)
print("Number read in","%s" % (secs),"seconds.", file=f)
print("Number read in","%s" % (secs),"seconds.")
f.flush()
del end,start,secs,z,m
```
Other than splitting the number from one line to various lines; which I'd rather not do, is there a cleaner method which won't require the better part of an hour?
By the way, I don't necessarily have to use text files.
I have: Windows 8.1 64-Bit, 16GB RAM, Python 3.5.1 | The file read is quick (<1s):
```
with open('number.txt') as f:
    data = f.read()
```
Converting a 30-million-digit string to an integer, that's slow:
```
z=int(data) # still waiting...
```
If you store the number as raw big- or little-endian binary data, then `int.from_bytes(data,'big')` is much quicker.
If I did my math right (Note `_` means "last line's answer" in Python's interactive interpreter):
```
>>> import math
>>> math.log(10)/math.log(2) # Number of bits to represent a base 10 digit.
3.3219280948873626
>>> 30000000*_ # Number of bits to represent 30M-digit #.
99657842.84662087
>>> _/8 # Number of bytes to represent 30M-digit #.
12457230.35582761 # Only ~12MB so file will be smaller :^)
>>> import os
>>> data=os.urandom(12457231) # Generate some random bytes
>>> z=int.from_bytes(data,'big') # Convert to integer (<1s)
99657848
>>> math.log10(z) # number of base-10 digits in number.
30000001.50818886
```
**EDIT**: FYI, my math wasn't right, but I fixed it. Thanks for 10 upvotes without noticing :^) |
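To make the byte-storage idea concrete, here is a small round-trip sketch (the helper names and the temp-file path are mine, just for illustration; it assumes a positive integer):

```python
import os
import tempfile

def write_int(path, n):
    # store the integer as big-endian raw bytes (much smaller than decimal text)
    with open(path, 'wb') as f:
        f.write(n.to_bytes((n.bit_length() + 7) // 8, 'big'))

def read_int(path):
    # converting raw bytes back to int is fast, unlike int(decimal_string)
    with open(path, 'rb') as f:
        return int.from_bytes(f.read(), 'big')

path = os.path.join(tempfile.gettempdir(), 'number.bin')
write_int(path, 10**100 + 7)
restored = read_int(path)
```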
Mapping two list without looping | 36,822,478 | 5 | 2016-04-24T11:25:42Z | 36,822,504 | 9 | 2016-04-24T11:28:40Z | [
"python"
] | I have two lists of equal length. The first list `l1` contains data.
```
l1 = [2, 3, 5, 7, 8, 10, ... , 23]
```
The second list `l2` contains the category the data in `l1` belongs to:
```
l2 = [1, 1, 2, 1, 3, 4, ... , 3]
```
How can I partition the first list based on the positions defined by numbers such as `1, 2, 3, 4` in the second list, using a *list comprehension* or *lambda function*. For example, `2, 3, 7` from the first list belongs to the same partition as they have corresponding values in the second list.
The number of partitions is known at the beginning. | You can use a dictionary:
```
>>> l1 = [2, 3, 5, 7, 8, 10, 23]
>>> l2 = [1, 1, 2, 1, 3, 4, 3]
>>> d = {}
>>> for i, j in zip(l1, l2):
... d.setdefault(j, []).append(i)
...
>>>
>>> d
{1: [2, 3, 7], 2: [5], 3: [8, 23], 4: [10]}
``` |
Mapping two list without looping | 36,822,478 | 5 | 2016-04-24T11:25:42Z | 36,822,526 | 8 | 2016-04-24T11:31:10Z | [
"python"
] | I have two lists of equal length. The first list `l1` contains data.
```
l1 = [2, 3, 5, 7, 8, 10, ... , 23]
```
The second list `l2` contains the category the data in `l1` belongs to:
```
l2 = [1, 1, 2, 1, 3, 4, ... , 3]
```
How can I partition the first list based on the positions defined by numbers such as `1, 2, 3, 4` in the second list, using a *list comprehension* or *lambda function*. For example, `2, 3, 7` from the first list belongs to the same partition as they have corresponding values in the second list.
The number of partitions is known at the beginning. | If a `dict` is fine, I suggest using a [`defaultdict`](https://docs.python.org/2/library/collections.html#collections.defaultdict):
```
>>> from collections import defaultdict
>>> d = defaultdict(list)
>>> for number, category in zip(l1, l2):
... d[category].append(number)
...
>>> d
defaultdict(<type 'list'>, {1: [2, 3, 7], 2: [5], 3: [8, 23], 4: [10]})
```
Consider using [`itertools.izip`](https://docs.python.org/2/library/itertools.html#itertools.izip) for memory efficiency if you are using Python 2.
This is basically the same solution as Kasramvd's, but I think the `defaultdict` makes it a little easier to read. |
PEP8 â import not at top of file with sys.path | 36,827,962 | 5 | 2016-04-24T19:35:39Z | 38,338,146 | 7 | 2016-07-12T20:16:57Z | [
"python",
"python-3.x",
"pep8"
] | # Problem
PEP8 has a rule about putting imports at the top of a file:
> Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However, in certain cases, I might want to do something like:
```
import sys
sys.path.insert(0, "..")
import my_module
```
In this case, the `pep8` command line utility flags my code:
> E402 module level import not at top of file
What is the best way to achieve PEP8 compliance with `sys.path` modifications?
# Why
I have this code because I'm following [the project structure](https://github.com/kennethreitz/samplemod) given in [The Hitchhiker's Guide to Python](http://docs.python-guide.org/en/latest/writing/structure/#structure-of-the-repository).
That guide suggests that I have a `my_module` folder, separate from a `tests` folder, both of which are in the same directory. If I want to access `my_module` from `tests`, I think I need to add `..` to the `sys.path` | If there are just a few imports, you can just ignore PEP8 on those `import` lines:
```
import sys
sys.path.insert(0, "..")
import my_module # noqa
``` |
Modifying ancestor nested Meta class in descendant | 36,835,177 | 3 | 2016-04-25T08:14:44Z | 36,835,198 | 7 | 2016-04-25T08:15:42Z | [
"python",
"django"
] | Suppose I have:
```
class A(object):
    class Meta:
        a = "a parameter"

class B(A):
    class Meta:
        a = "a parameter"
        b = "b parameter"
```
How can I avoid having to rewrite the whole Meta class, when I only want to append `b = "b parameter"` to it? | You could subclass `A.Meta`:
```
class B(A):
    class Meta(A.Meta):
        b = "b parameter"
```
Now `B.Meta` inherits all attributes from `A.Meta`, and all you have to do is declare overrides or new attributes. |
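Putting the pieces together, a quick sanity check that the inherited attribute is visible alongside the new one:

```python
class A(object):
    class Meta:
        a = "a parameter"

class B(A):
    # subclass A.Meta instead of rewriting it
    class Meta(A.Meta):
        b = "b parameter"

print(B.Meta.a)  # "a parameter" (inherited from A.Meta)
print(B.Meta.b)  # "b parameter"
```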
Lowercasing script in Python vs Perl | 36,840,612 | 11 | 2016-04-25T12:29:25Z | 36,987,626 | 7 | 2016-05-02T16:55:39Z | [
"python",
"string",
"perl",
"file",
"lowercase"
] | In Perl, to lowercase a textfile, I could do the following [`lowercase.perl`](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/lowercase.perl):
```
#!/usr/bin/env perl
use warnings;
use strict;
binmode(STDIN, ":utf8");
binmode(STDOUT, ":utf8");
while (<STDIN>) {
    print lc($_);
}
```
And on the command line: `perl lowercase.perl < infile.txt > lowered.txt`
In `Python`, I could do with `lowercase.py`:
```
#!/usr/bin/env python
import io
import sys
with io.open(sys.argv[1], 'r', encoding='utf8') as fin:
    with io.open(sys.argv[2], 'w', encoding='utf8') as fout:
        fout.write(fin.read().lower())
```
And on the command line: `python lowercase.py infile.txt lowered.txt`
**Is the Perl `lowercase.perl` different from the Python `lowercase.py`?**
**Does it stream the input and lowercase it as it outputs? Or does it read the whole file like the Python's `lowercase.py`?**
**Instead of reading in a whole file, is there a way to stream the input into Python and output the lowered case byte by byte or char by char?**
Is there a way to control the command-line syntax such that it follows the Perl STDIN and STDOUT? E.g. `python lowercase.py < infile.txt > lowered.txt`? | Python 3.x equivalent for your Perl code may look as follows:
```
#!/usr/bin/env python3.4
import sys
for line in sys.stdin:
    print(line.rstrip('\n').lower())
```
It reads stdin line by line and can be used in a shell pipeline, just like the Perl version: `python lowercase.py < infile.txt > lowered.txt`
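If you also want bounded memory without relying on line boundaries, a chunked sketch (the helper name and chunk size are arbitrary choices of mine; chunked `.lower()` assumes simple per-character case mapping is acceptable):

```python
import io

def lowercase_stream(fin, fout, chunk_size=8192):
    # read fixed-size chunks so memory use stays bounded
    while True:
        chunk = fin.read(chunk_size)
        if not chunk:
            break
        fout.write(chunk.lower())

# works with any text streams, e.g. sys.stdin/sys.stdout or file objects
src = io.StringIO("HeLLo\nWorLD\n")
dst = io.StringIO()
lowercase_stream(src, dst)
```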
Split the result of 'counter' | 36,850,550 | 7 | 2016-04-25T20:34:17Z | 36,850,601 | 8 | 2016-04-25T20:37:06Z | [
"python",
"count"
] | I count the occurrences of items in a list using
```
timesCrime = Counter(districts)
```
Which gives me this:
```
Counter({3: 1575, 2: 1462, 6: 1359, 4: 1161, 5: 1159, 1: 868})
```
I want to separate the parts of the list items (3 and 1575 for example) and store them in a list of lists.
How do I do this? | `Counter` is a `dict`, so you have the usual `dict` methods available:
```
>>> from collections import Counter
>>> counter = Counter({3: 1575, 2: 1462, 6: 1359, 4: 1161, 5: 1159, 1: 868})
>>> counter.items()
[(1, 868), (2, 1462), (3, 1575), (4, 1161), (5, 1159), (6, 1359)]
```
If you wanted them stored column major, just use some `zip` magic:
```
>>> zip(*counter.items())
[(1, 2, 3, 4, 5, 6), (868, 1462, 1575, 1161, 1159, 1359)]
``` |
Is it possible to know if two python functions are functionally equivalent? | 36,852,912 | 5 | 2016-04-25T23:33:32Z | 36,852,937 | 11 | 2016-04-25T23:36:04Z | [
"python",
"function"
] | Let's say I have two python functions `f` and `g`:
```
def f(x):
y = x**2 + 1
return y
def g(x):
a = x**2
b = a + 1
return b
```
These two functions are clearly functionally equivalent (both return `x**2 + 1`).
My definition of functionally equivalent is as follows:
**If two functions `f` and `g` always produce the same output given the same input, then `f` and `g` are functionally equivalent.**
Further, let's say no global variables are involved in `f` and `g`.
Is it possible to automatically determine (without human inspection) if python functions `f` and `g` are functionally equivalent? | By [Rice's Theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem), no. If you could do this, you could solve the [halting problem](https://en.wikipedia.org/wiki/Halting_problem). (This is true even if `f` and `g` are always guaranteed to halt.) |
Is it possible to run a command that is in a list? | 36,868,392 | 5 | 2016-04-26T14:47:35Z | 36,868,461 | 10 | 2016-04-26T14:50:31Z | [
"python"
] | I am trying to make a program that will pick a random number, and run a corresponding command to that number. I put multiple commands in a list as seen below
```
list = [cmd1(), cmd2(), cmd3(), cmd4()]
x = randint(0, len(list) - 1)
list[x]
```
Is there any way to run a command this way?
(I am using python 3.5) | Yes, functions and methods are first class objects, you can assign them, pass them as arguments, etc...:
```
commands = [cmd1, cmd2, cmd3, cmd4] # omit the parenthesis (call)
current_command = random.choice(commands)
current_command()
``` |
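A self-contained sketch of the whole pattern (the command functions here are hypothetical stand-ins):

```python
import random

def cmd1():
    return "ran cmd1"

def cmd2():
    return "ran cmd2"

def cmd3():
    return "ran cmd3"

commands = [cmd1, cmd2, cmd3]  # store the functions, don't call them yet
chosen = random.choice(commands)  # pick one at random
result = chosen()  # call it now
```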
Appropriate Deep Learning Structure for multi-class classification | 36,885,474 | 13 | 2016-04-27T09:19:02Z | 36,887,827 | 12 | 2016-04-27T10:58:34Z | [
"python",
"machine-learning",
"scikit-learn",
"tensorflow",
"deep-learning"
] | I have the following data
```
feat_1 feat_2 ... feat_n label
gene_1 100.33 10.2 ... 90.23 great
gene_2 13.32 87.9 ... 77.18 soso
....
gene_m 213.32 63.2 ... 12.23 quitegood
```
The size of `M` is large ~30K rows, and `N` is much smaller ~10 columns.
My question is what is the appropriate Deep Learning structure to learn
and test the data like above.
At the end of the day, the user will give a vector of genes with expression.
```
gene_1 989.00
gene_2 77.10
...
gene_N 100.10
```
And the system will label which label does each gene apply e.g. great or soso, etc...
By structure I mean one of these:
* Convolutional Neural Network (CNN)
* Autoencoder
* Deep Belief Network (DBN)
* Restricted Boltzman Machine | To expand a little on @sung-kim 's comment:
* CNN's are used primarily for problems in computer imaging, such as
classifying images. They are modelled on the animal visual cortex; they
basically have a connection network such that there are tiles of
features which have some overlap. Typically they require a lot of
data, more than 30k examples.
* Autoencoders are used for feature generation and dimensionality reduction. They start with lots of neurons on each layer; this number is then reduced, and then increased again, with the network trained to reproduce its own input. This results in the middle layers (low number of neurons) providing a meaningful projection of the feature space in a low dimension.
* While I don't know much about DBN's they appear to be a supervised extension of the Autoencoder. Lots of parameters to train.
* Again I don't know much about Boltzmann machines, but they aren't widely used for this sort of problem (to my knowledge)
As with all modelling problems though, I would suggest starting from the most basic model to look for signal. Perhaps a good place to start is [Logistic Regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression) before you worry about deep learning.
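As a sketch of that baseline (random stand-in data rather than the gene matrix, and assuming scikit-learn is installed):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(300, 10)                                  # stand-in for ~10 expression features
y = rng.choice(["great", "soso", "quitegood"], 300)    # stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))   # roughly chance level on random data
```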
If you have got to the point where you want to try deep learning, for whatever reason, then for this type of data a basic feed-forward network is the best place to start. In terms of deep learning, 30k data points is not a large number, so it is always best to start out with a small network (1-3 hidden layers, 5-10 neurons) and then get bigger. Make sure you have a decent validation set when performing parameter optimisation, though. If you're a fan of the `scikit-learn` API, I suggest that [Keras](http://keras.io/) is a good place to start
One further comment, you will want to use a [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder) on your class labels before you do any training.
**EDIT**
I see from the bounty and the comments that you want to see a bit more about how these networks work. Please see the example of how to build a feed-forward model and do some simple parameter optimisation
```
import numpy as np
from sklearn import preprocessing
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
# Create some random data
np.random.seed(42)
X = np.random.random((10, 50))
# Similar labels
labels = ['good', 'bad', 'soso', 'amazeballs', 'good']
labels += labels
labels = np.array(labels)
np.random.shuffle(labels)
# Change the labels to the required format
numericalLabels = preprocessing.LabelEncoder().fit_transform(labels)
numericalLabels = numericalLabels.reshape(-1, 1)
y = preprocessing.OneHotEncoder(sparse=False).fit_transform(numericalLabels)
# Simple Keras model builder
def buildModel(nFeatures, nClasses, nLayers=3, nNeurons=10, dropout=0.2):
model = Sequential()
model.add(Dense(nNeurons, input_dim=nFeatures))
model.add(Activation('sigmoid'))
model.add(Dropout(dropout))
for i in xrange(nLayers-1):
model.add(Dense(nNeurons))
model.add(Activation('sigmoid'))
model.add(Dropout(dropout))
model.add(Dense(nClasses))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd')
return model
# Do an exhaustive search over a given parameter space
for nLayers in xrange(2, 4):
for nNeurons in xrange(5, 8):
model = buildModel(X.shape[1], y.shape[1], nLayers, nNeurons)
modelHist = model.fit(X, y, batch_size=32, nb_epoch=10,
validation_split=0.3, shuffle=True, verbose=0)
minLoss = min(modelHist.history['val_loss'])
epochNum = modelHist.history['val_loss'].index(minLoss)
print '{0} layers, {1} neurons best validation at'.format(nLayers, nNeurons),
print 'epoch {0} loss = {1:.2f}'.format(epochNum, minLoss)
```
Which outputs
```
2 layers, 5 neurons best validation at epoch 0 loss = 1.18
2 layers, 6 neurons best validation at epoch 0 loss = 1.21
2 layers, 7 neurons best validation at epoch 8 loss = 1.49
3 layers, 5 neurons best validation at epoch 9 loss = 1.83
3 layers, 6 neurons best validation at epoch 9 loss = 1.91
3 layers, 7 neurons best validation at epoch 9 loss = 1.65
``` |
How do I identify sequences of values in a boolean array? | 36,894,822 | 13 | 2016-04-27T15:51:29Z | 36,894,977 | 9 | 2016-04-27T15:58:14Z | [
"python",
"list",
"python-3.x",
"boolean"
] | I have a long boolean array:
```
bool_array = [ True, True, True, True, True, False, False, False, False, False, True, True, True, False, False, True, True, True, True, False, False, False, False, False, False, False ]
```
I need to figure out where the values flips, i.e., the addresses where sequences of `True` and `False` begin. In this particular case, I would want to get
```
index = [0, 5, 10, 13, 15, 19, 26]
```
Is there an easy way to do without manually looping to check every ith element with the (i+1)th? | This will tell you where:
```
>>> import numpy as np
>>> np.argwhere(np.diff(bool_array)).squeeze()
array([ 4, 9, 12, 14, 18])
```
---
`np.diff` calculates the difference between each element and the next. For booleans, it essentially interprets the values as integers (0: False, non-zero: True), so differences appear as +1 or -1 values, which then get mapped back to booleans (True when there is a change).
The `np.argwhere` function then tells you where the values are True --- which are now the changes. |
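If you want the exact `index` list from the question (each run's start position plus the final length), you can shift these change positions by one and pad both ends. Casting to integers first avoids the boolean-subtraction ambiguity of `np.diff` across NumPy versions:

```python
import numpy as np

bool_array = [True]*5 + [False]*5 + [True]*3 + [False]*2 + [True]*4 + [False]*7

a = np.asarray(bool_array).astype(np.int8)     # ints: diff is unambiguous
changes = np.flatnonzero(np.diff(a))           # positions just before each flip
index = np.concatenate(([0], changes + 1, [len(a)]))
print(index.tolist())                          # [0, 5, 10, 13, 15, 19, 26]
```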
How do I identify sequences of values in a boolean array? | 36,894,822 | 13 | 2016-04-27T15:51:29Z | 36,895,305 | 14 | 2016-04-27T16:11:21Z | [
"python",
"list",
"python-3.x",
"boolean"
] | I have a long boolean array:
```
bool_array = [ True, True, True, True, True, False, False, False, False, False, True, True, True, False, False, True, True, True, True, False, False, False, False, False, False, False ]
```
I need to figure out where the values flips, i.e., the addresses where sequences of `True` and `False` begin. In this particular case, I would want to get
```
index = [0, 5, 10, 13, 15, 19, 26]
```
Is there an easy way to do without manually looping to check every ith element with the (i+1)th? | As a more efficient approach for large datasets, in python 3.X you can use [`accumulate`](https://docs.python.org/3.5/library/itertools.html#itertools.accumulate) and [`groupby`](https://docs.python.org/3.5/library/itertools.html#itertools.groupby) function from `itertools` module.
```
>>> from itertools import accumulate, groupby
>>> [0] + list(accumulate(sum(1 for _ in g) for _,g in groupby(bool_array)))
[0, 5, 10, 13, 15, 19, 26]
```
---
The logic behind the code:
This code categorizes the sequential duplicate items using the `groupby()` function, then loops over the iterator returned by `groupby()`, which yields pairs of a key (discarded here into the `_` throw-away variable) and the corresponding group iterator.
```
>>> [list(g) for _, g in groupby(bool_array)]
[[True, True, True, True, True], [False, False, False, False, False], [True, True, True], [False, False], [True, True, True, True], [False, False, False, False, False, False, False]]
```
So all we need is to calculate the length of each group and add it to the running total of the previous lengths, which gives the index of the first item of each group -- exactly where the value changes. That is precisely what `accumulate()` is for. |
Why is this valid Python? | 36,897,247 | 4 | 2016-04-27T17:48:54Z | 36,897,278 | 9 | 2016-04-27T17:50:29Z | [
"python"
] | This code:
```
bounding_box = (
-122.43687629699707, 37.743774801147126
-122.3822021484375, 37.80123932755579
)
```
produces the following value:
```
(-122.43687629699707, -84.63842734729037, 37.80123932755579)
```
There are three values because I forgot a trailing comma on the first line. Surprisingly, Python accepts this and adds the second and third numbers together!
Is this something like [string literal concatenation](https://docs.python.org/2.0/ref/string-catenation.html) but for numbers? Why would this ever be the desired behavior? | What happens is simple. In the following assignment
```
bounding_box = (
-122.43687629699707, 37.743774801147126
-122.3822021484375, 37.80123932755579
)
```
Is equivalent to
```
bounding_box = (-122.43687629699707, 37.743774801147126-122.3822021484375, 37.80123932755579)
```
So the two middle values are joined into a single expression and subtracted (Python continues the line implicitly inside the parentheses), and hence the result is a 3-tuple. |
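A small illustration of the same trap (with made-up numbers), showing how implicit line joining inside parentheses turns a missing comma into a subtraction:

```python
a = (1.5, 2.5
     - 0.5, 3.5)        # missing comma: 2.5 - 0.5 becomes one element
print(a)                # (1.5, 2.0, 3.5)

b = (1.5, 2.5,
     -0.5, 3.5)         # with the comma, each value stays separate
print(b)                # (1.5, 2.5, -0.5, 3.5)
```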
In Python, when are two objects the same? | 36,898,917 | 36 | 2016-04-27T19:14:23Z | 36,899,010 | 19 | 2016-04-27T19:18:40Z | [
"python",
"oop",
"python-3.x",
"object",
"reference"
] | It seems that `2 is 2` and `3 is 3` will always be true in python, and in general, any reference to an integer is the same as any other reference to the same integer. The same happens to `None` (i.e., `None is None`). I know that this does *not* happen to user-defined types, or mutable types. But it sometimes fails on immutable types too:
```
>>> () is ()
True
>>> (2,) is (2,)
False
```
That is: two independent constructions of the empty tuple yield references to the same object in memory, but two independent constructions of identical one-(immutable-)element tuples end up creating two identical objects. I tested, and `frozenset`s work in a manner similar to tuples.
What determines if an object will be duplicated in memory or will have a single instance with lots of references? Does it depend on whether the object is "atomic" in some sense? Does it vary according to implementation? | It varies according to implementation.
CPython caches some immutable objects in memory. This is true of "small" integers like 1 and 2 (specifically, -5 to 255). CPython does this for performance reasons; small integers are commonly used in most programs, so it saves memory to only have one copy created (and is safe because integers are immutable).
This is also true of "singleton" objects like `None`; there is only ever one `None` in existence at any given time.
Other objects (such as the empty tuple, `()`) may be implemented as singletons, or they may not be.
In general, you shouldn't necessarily *assume* that immutable objects will be implemented this way. CPython does so for performance reasons, but other implementations may not, and CPython may even stop doing it at some point in the future. (The only exception might be `None`, as `x is None` is a common Python idiom and is likely to be implemented across different interpreters and versions.)
Usually you want to use `==` instead of `is`. Python's `is` operator isn't used often, except when checking to see if a variable is `None`. |
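A quick demonstration of that caching behaviour (a CPython implementation detail -- other interpreters or versions may differ; `int(...)` is used here so the compiler can't share constants within one script):

```python
x = int("100")          # small int: CPython hands back the cached object
y = int("100")
print(x is y)           # True on CPython

p = int("1000")         # outside the cache: two distinct objects
q = int("1000")
print(p is q)           # False on CPython
print(p == q)           # True -- equality is what you usually want
```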
In Python, when are two objects the same? | 36,898,917 | 36 | 2016-04-27T19:14:23Z | 36,899,294 | 35 | 2016-04-27T19:33:37Z | [
"python",
"oop",
"python-3.x",
"object",
"reference"
] | It seems that `2 is 2` and `3 is 3` will always be true in python, and in general, any reference to an integer is the same as any other reference to the same integer. The same happens to `None` (i.e., `None is None`). I know that this does *not* happen to user-defined types, or mutable types. But it sometimes fails on immutable types too:
```
>>> () is ()
True
>>> (2,) is (2,)
False
```
That is: two independent constructions of the empty tuple yield references to the same object in memory, but two independent constructions of identical one-(immutable-)element tuples end up creating two identical objects. I tested, and `frozenset`s work in a manner similar to tuples.
What determines if an object will be duplicated in memory or will have a single instance with lots of references? Does it depend on whether the object is "atomic" in some sense? Does it vary according to implementation? | Python has some types that it guarantees will only have one instance. Examples of these instances are `None`, `NotImplemented`, and `Ellipsis`. These are (by definition) singletons and so things like `None is None` are guaranteed to return `True` because there is no way to create a new instance of `NoneType`.
It also supplies a few doubletons¹: `True` and `False`² -- all references to `True` point to the same object. Again, this is because there is no way to create a new instance of `bool`.
The above things are all guaranteed by the python language. However, as you have noticed, there are some types (all immutable) that store some instances for reuse. This is allowed by the language, but different implementations may choose to use this allowance or not -- depending on their optimization strategies. Some examples that fall into this category are small integers (-5 -> 255), the empty `tuple` and empty `frozenset`.
Finally, Cpython `intern`s certain immutable objects during parsing...
e.g. if you run the following script with Cpython, you'll see that it returns `True`:
```
def foo():
return (2,)
if __name__ == '__main__':
print foo() is foo()
```
This seems *really* odd. The trick that Cpython is playing is that whenever it constructs the function `foo`, it sees a tuple-literal that contains other simple (immutable) literals. Rather than create this tuple (or it's equivalents) over and over, python just creates it once. There's no danger of that object being changed since the whole deal is immutable. This can be a big win for performance where the same tight loop is called over and over. Small strings are interned as well. The real win here is in dictionary lookups. Python can do a (blazingly fast) pointer compare and then fall back on slower string comparisons when checking hash collisions. Since so much of python is built on dictionary lookups, this can be a big optimization for the language as a whole.
---
¹I might have just made up that word ... but hopefully you get the idea...
²Under normal circumstances, you don't *need* to check whether the object is a reference to `True` -- usually you just care if the object is "truthy" -- e.g. whether `if some_instance: ...` will execute the branch. But I put that in here just for completeness.
---
Note that `is` can be used to compare things that aren't singletons. One common use is to create a sentinel value:
```
sentinel = object()
item = next(iterable, sentinel)
if item is sentinel:
# iterable exhausted.
```
Or:
```
_sentinel = object()
def function(a, b, none_is_ok_value_here=_sentinel):
    if none_is_ok_value_here is _sentinel:
# Treat the function as if `none_is_ok_value_here` was not provided.
```
**The moral of this story is to always say what you mean.** If you want to check if a value *is* another value, then use the `is` operator. If you want to check if a value *is equal to* another value (but possibly distinct), then use `==`. For more details on the difference between `is` and `==` (and when to use which), consult one of the following posts:
* [Is there a difference between `==` and `is` in Python?](http://stackoverflow.com/questions/132988/is-there-a-difference-between-and-is-in-python)
* [Python None comparison: should I use "is" or ==?](http://stackoverflow.com/questions/14247373/python-none-comparison-should-i-use-is-or/14247383#14247383)
---
## Addendum
We've talked about these CPython implementation details and we've claimed that they're optimizations. It'd be nice to try to measure just what we get from all this optimizing (other than a little added confusion when working with the `is` operator).
### String "interning" and dictionary lookups.
Here's a small script that you can run to see how much faster dictionary lookups are if you use the same string to look up the value instead of a different string. Note, I use the term "interned" in the variable names -- These values aren't necessarily interned (though they could be). I'm just using that to indicate that the "interned" string *is* the string in the dictionary.
```
import timeit
interned = 'foo'
not_interned = (interned + ' ').strip()
assert interned is not not_interned
d = {interned: 'bar'}
print('Timings for short strings')
number = 100000000
print(timeit.timeit(
'd[interned]',
setup='from __main__ import interned, d',
number=number))
print(timeit.timeit(
'd[not_interned]',
setup='from __main__ import not_interned, d',
number=number))
####################################################
interned_long = interned * 100
not_interned_long = (interned_long + ' ').strip()
d[interned_long] = 'baz'
assert interned_long is not not_interned_long
print('Timings for long strings')
print(timeit.timeit(
'd[interned_long]',
setup='from __main__ import interned_long, d',
number=number))
print(timeit.timeit(
'd[not_interned_long]',
setup='from __main__ import not_interned_long, d',
number=number))
```
The exact values here shouldn't matter too much, but on my computer, the short strings show about 1 part in 7 faster. The *long* strings are almost 2x faster (because the string comparison takes longer if the string has more characters to compare). The differences aren't quite as striking on python3.x, but they're still definitely there.
### Tuple "interning"
Here's a small script you can play around with:
```
import timeit
def foo_tuple():
return (2, 3, 4)
def foo_list():
return [2, 3, 4]
assert foo_tuple() is foo_tuple()
number = 10000000
t_interned_tuple = timeit.timeit('foo_tuple()', setup='from __main__ import foo_tuple', number=number)
t_list = (timeit.timeit('foo_list()', setup='from __main__ import foo_list', number=number))
print(t_interned_tuple)
print(t_list)
print(t_interned_tuple / t_list)
print('*' * 80)
def tuple_creation(x):
return (x,)
def list_creation(x):
return [x]
t_create_tuple = timeit.timeit('tuple_creation(2)', setup='from __main__ import tuple_creation', number=number)
t_create_list = timeit.timeit('list_creation(2)', setup='from __main__ import list_creation', number=number)
print(t_create_tuple)
print(t_create_list)
print(t_create_tuple / t_create_list)
```
This one is a bit trickier to time (and I'm happy to take any better ideas how to time it in comments). The gist of this is that on average (and on my computer), a tuple takes about 60% as long to create as a list does. However, `foo_tuple()` takes on average about 40% the time that `foo_list()` takes. That shows that we really do gain a little bit of a speedup from these interns. The time savings seem to increase as the tuple gets larger (creating a longer list takes longer -- The tuple "creation" takes constant time since it was already created).
Also note that I've called this "interning". It actually isn't (at least not in the same sense the strings are interned). We can see the difference in this simple script:
```
def foo_tuple():
return (2,)
def bar_tuple():
return (2,)
def foo_string():
return 'foo'
def bar_string():
return 'foo'
print(foo_tuple() is foo_tuple()) # True
print(foo_tuple() is bar_tuple()) # False
print(foo_string() is bar_string()) # True
```
We see that the strings are really "interned" -- Different invocations using the same literal notation return the same object. The tuple "interning" seems to be specific to a single line. |
What does Python mean by printing "[...]" for an object reference? | 36,904,351 | 49 | 2016-04-28T02:46:00Z | 36,904,367 | 48 | 2016-04-28T02:48:58Z | [
"python",
"recursion",
"ellipsis",
"recursive-datastructures"
] | I'm printing a value of a what I thought was a list, but the output that I get is:
```
[...]
```
What does this represent? How do I test for it? I've tried:
```
myVar.__repr__() != '[...]'
```
and
```
myVar.__repr_() != Ellipsis
```
but no dice...
Here's a cutdown of the code that's giving the issue:
```
def buildPaths(graph, start, end, path=[], totalPaths=[]):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [start]
if start == end:
return path
for nextNode in graph.childrenOf(start):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != []: # test
totalPaths.append(newPath)
return totalPaths
```
totalPaths contains a LOT of [...] supposedly recursive lists, but I can't see why. I've altered the test at #test to prevent this.
I've also tried:
```
def buildPaths(graph, thisNode, end, path=[], totalPaths=None):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [thisNode]
if thisNode == end:
return path
for nextNode in graph.childrenOf(thisNode):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != None:
if totalPaths == None:
totalPaths = [newPath]
else:
totalPaths.append(newPath)
return totalPaths
```
in order to explicitly return `None` for empty paths. | It represents a cycle (a self-reference) within the structure. An example:
```
In [1]: l = [1, 2]
In [2]: l[0] = l
In [3]: l
Out[3]: [[...], 2]
```
`l`'s first item is itself. It's a recursive reference, and so python can't reasonably display its contents. Instead it shows `[...]` |
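To answer the "how do I test for it" part: a one-level self-reference can be detected by identity, for example:

```python
l = [1, 2]
l[0] = l                          # make the list contain itself

contains_self = any(item is l for item in l)
print(contains_self)              # True
print(l in l)                     # also True (containment checks identity first)
```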
What does Python mean by printing "[...]" for an object reference? | 36,904,351 | 49 | 2016-04-28T02:46:00Z | 36,904,370 | 11 | 2016-04-28T02:49:13Z | [
"python",
"recursion",
"ellipsis",
"recursive-datastructures"
] | I'm printing a value of a what I thought was a list, but the output that I get is:
```
[...]
```
What does this represent? How do I test for it? I've tried:
```
myVar.__repr__() != '[...]'
```
and
```
myVar.__repr_() != Ellipsis
```
but no dice...
Here's a cutdown of the code that's giving the issue:
```
def buildPaths(graph, start, end, path=[], totalPaths=[]):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [start]
if start == end:
return path
for nextNode in graph.childrenOf(start):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != []: # test
totalPaths.append(newPath)
return totalPaths
```
totalPaths contains a LOT of [...] supposedly recursive lists, but I can't see why. I've altered the test at #test to prevent this.
I've also tried:
```
def buildPaths(graph, thisNode, end, path=[], totalPaths=None):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [thisNode]
if thisNode == end:
return path
for nextNode in graph.childrenOf(thisNode):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != None:
if totalPaths == None:
totalPaths = [newPath]
else:
totalPaths.append(newPath)
return totalPaths
```
in order to explicitly return `None` for empty paths. | It's a recursive reference as your list contains itself. Python doesn't try to recursively print this which would lead to an infinite loop.
`repr` detects this. So, if you looked at the internal representation of your list object, you would see (where the ellipsis occurs) a reference to the same list object at address \*, where \* is the address of the original list object in memory. Hence, the infinite loop. |
What does Python mean by printing "[...]" for an object reference? | 36,904,351 | 49 | 2016-04-28T02:46:00Z | 36,904,384 | 20 | 2016-04-28T02:50:34Z | [
"python",
"recursion",
"ellipsis",
"recursive-datastructures"
] | I'm printing a value of a what I thought was a list, but the output that I get is:
```
[...]
```
What does this represent? How do I test for it? I've tried:
```
myVar.__repr__() != '[...]'
```
and
```
myVar.__repr_() != Ellipsis
```
but no dice...
Here's a cutdown of the code that's giving the issue:
```
def buildPaths(graph, start, end, path=[], totalPaths=[]):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [start]
if start == end:
return path
for nextNode in graph.childrenOf(start):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != []: # test
totalPaths.append(newPath)
return totalPaths
```
totalPaths contains a LOT of [...] supposedly recursive lists, but I can't see why. I've altered the test at #test to prevent this.
I've also tried:
```
def buildPaths(graph, thisNode, end, path=[], totalPaths=None):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [thisNode]
if thisNode == end:
return path
for nextNode in graph.childrenOf(thisNode):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != None:
if totalPaths == None:
totalPaths = [newPath]
else:
totalPaths.append(newPath)
return totalPaths
```
in order to explicitly return `None` for empty paths. | If your list contains self references Python will display that as `[...]` rather than trying to recursively print it out, which would lead to an infinte loop:
```
>>> l = [1, 2, 3]
>>> print(l)
[1, 2, 3]
>>> l.append(l)
>>> print(l)
[1, 2, 3, [...]]
>>> print(l[-1]) # print the last item of list l
[1, 2, 3, [...]]
>>> print(l[-1][-1]) # print the last item of the last item of list l
[1, 2, 3, [...]]
```
ad infinitum.
A similar situation arises with dictionaries:
```
>>> d = {}
>>> d['key'] = d
>>> print(d)
{'key': {...}}
>>> d['key']
{'key': {...}}
>>> d['key']['key']
{'key': {...}}
``` |
What does Python mean by printing "[...]" for an object reference? | 36,904,351 | 49 | 2016-04-28T02:46:00Z | 36,904,517 | 32 | 2016-04-28T03:03:46Z | [
"python",
"recursion",
"ellipsis",
"recursive-datastructures"
] | I'm printing a value of a what I thought was a list, but the output that I get is:
```
[...]
```
What does this represent? How do I test for it? I've tried:
```
myVar.__repr__() != '[...]'
```
and
```
myVar.__repr_() != Ellipsis
```
but no dice...
Here's a cutdown of the code that's giving the issue:
```
def buildPaths(graph, start, end, path=[], totalPaths=[]):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [start]
if start == end:
return path
for nextNode in graph.childrenOf(start):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != []: # test
totalPaths.append(newPath)
return totalPaths
```
totalPaths contains a LOT of [...] supposedly recursive lists, but I can't see why. I've altered the test at #test to prevent this.
I've also tried:
```
def buildPaths(graph, thisNode, end, path=[], totalPaths=None):
"""
returns list of all possible paths from start node to the end node
"""
path = path + [thisNode]
if thisNode == end:
return path
for nextNode in graph.childrenOf(thisNode):
if nextNode not in path:
newPath = buildPaths(graph, nextNode, end, path, totalPaths)
if newPath != None:
if totalPaths == None:
totalPaths = [newPath]
else:
totalPaths.append(newPath)
return totalPaths
```
in order to explicitly return `None` for empty paths. | Depending on the context here it could different things:
# indexing/slicing with `Ellipsis`
I think it's not implemented for any built-in Python class, but it *should* represent an arbitrary number of data structure nestings (*as many as needed*).
So for example: `a[..., 1]` should return all the second elements of the innermost nested structure:
```
>>> import numpy as np
>>> a = np.arange(27).reshape(3,3,3) # 3dimensional array
>>> a[..., 1] # this returns a slice through the array in the third dimension
array([[ 1, 4, 7],
[10, 13, 16],
[19, 22, 25]])
>>> a[0, ...] # This returns a slice through the first dimension
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
```
and to check for this `...` you compare it to `Ellipsis` (this is a singleton, so using `is` is recommended):
```
>>> ... is Ellipsis
True
>>> Ellipsis in [...]
True
# Another (more or less) equivalent alternative to the previous line:
>>> any(i is Ellipsis for i in [1, ..., 2])
True
```
# Recursive Datastructures
The other case in which you see an `[...]` in **your output** is if the sequence contains itself. Here it stands for an *infinitely* deeply nested sequence (which cannot be printed in full). For example:
```
>>> alist = ['a', 'b', 'c']
>>> alist[0] = alist
>>> alist
[[...], 'b', 'c']
# Infinite deeply nested so you can use as many leading [0] as you want
>>> alist[0][1]
'b'
>>> alist[0][0][0][0][0][1]
'b'
>>> alist[0][0][0][0][0][0][0][0][0][0][0][0][0][0][0][1]
'b'
```
You can even replace it several times:
```
>>> alist[2] = alist
>>> alist
[[...], 'b', [...]]
>>> alist[1] = alist
>>> alist
[[...], [...], [...]]
```
To test if you have any such recursion in your output you need to check if the data-structure itself is also one of the elements:
```
>>> alist in alist
True
>>> any(i is alist for i in alist)
True
```
Another way to get a more meaningful output is using `pprint.pprint`:
```
>>> import pprint
>>> pprint.pprint(alist) # Assuming you only replaced the first element:
[<Recursion on list with id=1628861250120>, 'b', 'c']
``` |
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,917,162 | 48 | 2016-04-28T14:07:53Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | I would use a [`deque`](https://docs.python.org/3/library/collections.html#collections.deque) with [`zip`](https://docs.python.org/3/library/functions.html#zip) to achieve this.
```
>>> from collections import deque
>>>
>>> l = [1,2,3]
>>> d = deque(l)
>>> d.rotate(-1)
>>> zip(l, d)
[(1, 2), (2, 3), (3, 1)]
``` |
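For reference, the same snippet in script form (on Python 3, `zip` returns an iterator, so wrap it in `list()` to see the pairs):

```python
from collections import deque

l = [1, 2, 3]
d = deque(l)
d.rotate(-1)                 # deque([2, 3, 1])
pairs = list(zip(l, d))
print(pairs)                 # [(1, 2), (2, 3), (3, 1)]
```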
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,917,173 | 27 | 2016-04-28T14:08:09Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | There are more efficient ways (that don't build temporary lists), but I think this is the most concise:
```
> l = [1,2,3]
> zip(l, (l+l)[1:])
[(1, 2), (2, 3), (3, 1)]
``` |
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,917,441 | 21 | 2016-04-28T14:19:06Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | I would use a list comprehension, and take advantage of the fact that `l[-1]` is the last element.
```
>>> l = [1,2,3]
>>> [(l[i-1],l[i]) for i in range(len(l))]
[(3, 1), (1, 2), (2, 3)]
```
You don't need a temporary list that way. |
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,917,579 | 106 | 2016-04-28T14:24:38Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | A Pythonic way to access a list pairwise is: `zip(L, L[1:])`. To connect the last item to the first one:
```
>>> L = [1, 2, 3]
>>> zip(L, L[1:] + L[:1])
[(1, 2), (2, 3), (3, 1)]
``` |
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,917,655 | 39 | 2016-04-28T14:28:11Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | I'd use a slight modification to the `pairwise` recipe from the [`itertools` documentation](https://docs.python.org/3/library/itertools.html#itertools-recipes):
```
import itertools

def pairwise_circle(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ... (s<last>,s0)"
    a, b = itertools.tee(iterable)
    first_value = next(b, None)
    return itertools.zip_longest(a, b, fillvalue=first_value)
```
This will simply keep a reference to the first value and when the second iterator is exhausted, `zip_longest` will fill the last place with the first value.
(Also note that it works with iterators like generators as well as iterables like lists/tuples.)
Note that [@Barry's solution](http://stackoverflow.com/a/36927946/5827215) is very similar to this but a bit easier to understand in my opinion. |
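A quick self-contained check of the recipe (a sketch restating it with its import, assuming Python 3 — on Python 2 substitute `izip_longest`):

```python
import itertools

def pairwise_circle(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ... (s<last>,s0)"
    a, b = itertools.tee(iterable)
    first_value = next(b, None)
    return itertools.zip_longest(a, b, fillvalue=first_value)

# Works for lists and for one-shot iterators alike:
print(list(pairwise_circle([1, 2, 3])))    # [(1, 2), (2, 3), (3, 1)]
print(list(pairwise_circle(iter('abc'))))  # [('a', 'b'), ('b', 'c'), ('c', 'a')]
```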
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,918,720 | 19 | 2016-04-28T15:11:06Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | > # Pairwise circular Python 'for' loop
If you like the accepted answer,
```
zip(L, L[1:] + L[:1])
```
you can go much more memory light with semantically the same code using `itertools`:
```
from itertools import islice, chain #, izip as zip # uncomment if Python 2
```
And this barely materializes anything in memory beyond the original list (assuming the list is relatively large):
```
zip(l, chain(islice(l, 1, None), islice(l, None, 1)))
```
To use, just consume (for example, with a list):
```
>>> list(zip(l, chain(islice(l, 1, None), islice(l, None, 1))))
[(1, 2), (2, 3), (3, 1)]
```
This can be made extensible to any width:
```
def cyclical_window(l, width=2):
return zip(*[chain(islice(l, i, None), islice(l, None, i)) for i in range(width)])
```
and usage:
```
>>> l = [1, 2, 3, 4, 5]
>>> cyclical_window(l)
<itertools.izip object at 0x112E7D28>
>>> list(cyclical_window(l))
[(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
>>> list(cyclical_window(l, 4))
[(1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 1), (4, 5, 1, 2), (5, 1, 2, 3)]
```
## Unlimited generation with `itertools.tee` with `cycle`
You can also use `tee` to avoid making a redundant cycle object:
```
from itertools import cycle, tee
ic1, ic2 = tee(cycle(l))
next(ic2) # must still queue up the next item
```
and now:
```
>>> [(next(ic1), next(ic2)) for _ in range(10)]
[(1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2)]
```
This is incredibly efficient, an expected usage of `iter` with `next`, and elegant usage of `cycle`, `tee`, and `zip`.
Don't pass `cycle` directly to `list` unless you have saved your work and have time for your computer to creep to a halt as you max out its memory - if you're lucky, after a while your OS will kill the process before it crashes your computer.
## Pure Python Builtin Functions
Finally, no standard lib imports, but this only works for up to the length of original list (IndexError otherwise.)
```
>>> [(l[i], l[i - len(l) + 1]) for i in range(len(l))]
[(1, 2), (2, 3), (3, 1)]
```
You can continue this with modulo:
```
>>> len_l = len(l)
>>> [(l[i % len_l], l[(i + 1) % len_l]) for i in range(10)]
[(1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2)]
``` |
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,918,890 | 37 | 2016-04-28T15:18:36Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | I would pair [`itertools.cycle`](https://docs.python.org/3.5/library/itertools.html#itertools.cycle) with `zip`:
```
import itertools
def circular_pairwise(l):
second = itertools.cycle(l)
next(second)
return zip(l, second)
```
`cycle` returns an iterable that yields the values of its argument in order, looping from the last value to the first.
We skip the first value, so it starts at position `1` (rather than `0`).
Next, we `zip` it with the original, unmutated list. `zip` is good, because it stops when any of its argument iterables are exhausted.
Doing it this way avoids the creation of any intermediate lists: `cycle` holds a reference to the original, but doesn't copy it. `zip` operates in the same way.
It's important to note that this will break if the input is an `iterator`, such as a `file`, (or a `map` or `zip` in [python-3](/questions/tagged/python-3 "show questions tagged 'python-3'")), as advancing in one place (through `next(second)`) will automatically advance the iterator in all the others. This is easily solved using [`itertools.tee`](https://docs.python.org/3.5/library/itertools.html#itertools.tee), which produces two independently operating iterators over the original iterable:
```
def circular_pairwise(it):
first, snd = itertools.tee(it)
second = itertools.cycle(snd)
next(second)
return zip(first, second)
```
`tee` *can* use large amounts of additional storage, for example, if one of the returned iterators is used up before the other is touched, but as we only ever have one step difference, the additional storage is minimal. |
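For instance (a self-contained sketch of the `tee`-based variant above, restated with its import), feeding it a one-shot generator:

```python
import itertools

def circular_pairwise(it):
    first, snd = itertools.tee(it)
    second = itertools.cycle(snd)
    next(second)
    return zip(first, second)

# tee decouples the two passes, so a generator works as input:
gen = (n * n for n in [1, 2, 3])
print(list(circular_pairwise(gen)))  # [(1, 4), (4, 9), (9, 1)]
```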
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,920,630 | 7 | 2016-04-28T16:38:33Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | This one will work even if the list `l` has consumed most of the system's memory. (If something guarantees this case to be impossible, then zip as posted by chepner is fine)
```
l.append( l[0] )
for i in range( len(l)-1):
pair = l[i],l[i+1]
# stuff involving pair
del l[-1]
```
or more generalizably (works for any offset `n` i.e. `l[ (i+n)%len(l) ]` )
```
for i in range( len(l)):
pair = l[i], l[ (i+1)%len(l) ]
# stuff
```
provided you are on a system with decently fast modulo division (i.e. not some pea-brained embedded system).
There seems to be an often-held belief that indexing a list with an integer subscript is un-pythonic and best avoided. Why? |
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,924,406 | 10 | 2016-04-28T20:00:03Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | I like a solution that does not modify the original list and does not copy the list to temporary storage:
```
def circular(a_list):
for index in range(len(a_list) - 1):
yield a_list[index], a_list[index + 1]
yield a_list[-1], a_list[0]
for x in circular([1, 2, 3]):
print x
```
Output:
```
(1, 2)
(2, 3)
(3, 1)
```
I can imagine this being used on some very large in-memory data. |
Pairwise circular Python 'for' loop | 36,917,042 | 74 | 2016-04-28T14:02:56Z | 36,927,946 | 18 | 2016-04-29T01:08:22Z | [
"python"
] | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
* 1 - 2
* 2 - 3
* 3 - 1 | Amazing how many different ways there are to solve this problem.
Here's one more. You can use the `pairwise` recipe but instead of zipping with `b`, `chain` it with the first element that you already popped off. Don't need to `cycle` when we just need a single extra value:
```
from itertools import chain, izip, tee
def pairwise_circle(iterable):
a, b = tee(iterable)
first = next(b, None)
return izip(a, chain(b, (first,)))
``` |
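The same idea ported to Python 3 (a sketch — `izip` is gone there, and the built-in `zip` is already lazy):

```python
from itertools import chain, tee

def pairwise_circle(iterable):
    a, b = tee(iterable)
    first = next(b, None)              # pop the first element off the second copy
    return zip(a, chain(b, (first,)))  # ...and append it at the end

print(list(pairwise_circle([1, 2, 3])))  # [(1, 2), (2, 3), (3, 1)]
```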
How does an int() object use the "==" operator without an __eq__() method in Python 2? | 36,921,558 | 13 | 2016-04-28T17:25:56Z | 36,922,306 | 7 | 2016-04-28T18:04:47Z | [
"python",
"python-2.7",
"python-internals"
] | After I read the "Fluent Python" book, I understand how the `==` operator works with Python objects, using the `__eq__()` method. But how does it work with `int` instances in Python 2?
```
>>> a = 1
>>> b = 1
>>> a == b
True
>>> a.__eq__(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute '__eq__'
```
In Python 3, `a.__eq__(b)` works and returns `True`. | Python [prefers to use rich comparison functions](https://docs.python.org/2/reference/datamodel.html#object.__ge__) (`__eq__`, `__lt__`, `__ne__`, etc.), but if those don't exist, it falls back to using a single comparison function (`__cmp__`, removed in Python 3):
> These are the so-called "rich comparison" methods, and are called for comparison operators in preference to `__cmp__()` below.
The [Python 2 integer type](https://github.com/python/cpython/blob/2.7/Objects/intobject.c#L1414-L1455) doesn't implement a rich comparison function:
```
PyTypeObject PyInt_Type = {
...
(cmpfunc)int_compare, /* tp_compare */
...
0, /* tp_richcompare */
```
In Python 3, the [integer type](https://github.com/python/cpython/blob/3.6/Objects/longobject.c) (now a long) implements only a rich comparison function, since Python 3 dropped support for `__cmp__`:
```
PyTypeObject PyLong_Type = {
...
long_richcompare, /* tp_richcompare */
```
This is why `(123).__eq__` doesn't exist. Instead, Python 2 falls back to `(123).__cmp__` when testing the equality of two integers:
```
>>> (1).__eq__(2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute '__eq__'
>>> (1).__cmp__(2)
-1
``` |
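The flip side is easy to observe from Python 3 itself (a quick sketch, assuming CPython 3):

```python
# Python 3 ints expose only the rich-comparison protocol; __cmp__ is gone.
print(hasattr(1, '__eq__'))   # True
print(hasattr(1, '__cmp__'))  # False
print((1).__eq__(2))          # False
print((1).__eq__(1))          # True
```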
Can't seem to iterate over a sorted dictionary where the keys are number strings. How do you sort a dictionary to iterate? | 36,946,649 | 2 | 2016-04-29T20:08:21Z | 36,946,676 | 8 | 2016-04-29T20:10:12Z | [
"python",
"sorting",
"dictionary"
] | I have this dictionary (dic) where the keys are strings, but the strings are actually just numbers.
I can't find a way to iterate over the keys in sorted numeric order (since sorting the dictionary's string keys will not sort them numerically).
```
for j in sorted([int(k) for k in dic.iteritems()]):
print dic[str(j)] #converting the integer back into a string for the key
```
it gives me
```
KeyError
```
Intuitively this should work, but I just don't get why it doesn't. | `dict.iteritems()` returns 2-tuples, which cannot be converted into ints.
```
for j in sorted(dic, key=int):
print dic[j]
``` |
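For example (a small sketch; the dict contents here are made up):

```python
dic = {'10': 'ten', '2': 'two', '1': 'one'}

# key=int sorts the string keys numerically without converting them,
# so each key can still be used to index the dict directly.
for j in sorted(dic, key=int):
    print(j, dic[j])
# 1 one
# 2 two
# 10 ten
```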
Removing white space from txt with python | 36,957,908 | 9 | 2016-04-30T17:20:32Z | 36,958,049 | 7 | 2016-04-30T17:33:16Z | [
"python",
"regex",
"python-2.7",
"whitespace",
"shlex"
] | I have a .txt file (scraped as pre-formatted text from a website) where the data looks like this:
```
B, NICKOLAS CT144531X D1026 JUDGE ANNIE WHITE JOHNSON
ANDREWS VS BALL JA-15-0050 D0015 JUDGE EDWARD A ROBERTS
```
I'd like to remove all extra spaces (they're actually different number of spaces, not tabs) in between the columns. I'd also then like to replace it with some delimiter (tab or pipe since there's commas within the data), like so:
```
ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS
```
Looked around and found that the best options are using regex or shlex to split. Two similar scenarios:
* [Python Regular expression must strip whitespace except between quotes](http://stackoverflow.com/questions/3609596/python-regular-expression-must-strip-whitespace-except-between-quotes),
* [Remove white spaces from dict : Python](http://stackoverflow.com/questions/13152585/remove-white-spaces-from-dict-python). | You can apply the regex `'\s{2,}'` (two or more whitespace characters) to each line and substitute the matches with a single `'|'` character.
```
>>> import re
>>> line = 'ANDREWS VS BALL JA-15-0050 D0015 JUDGE EDWARD A ROBERTS '
>>> re.sub('\s{2,}', '|', line.strip())
'ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS'
```
Stripping any leading and trailing whitespace from the line before applying `re.sub` ensures that you won't get `'|'` characters at the start and end of the line.
Your actual code should look similar to this:
```
import re
with open(filename) as f:
for line in f:
subbed = re.sub('\s{2,}', '|', line.strip())
# do something here
``` |
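Applied to lines shaped like the question's sample (a sketch — the exact run lengths of spaces here are approximated):

```python
import re

lines = [
    'B, NICKOLAS            CT144531X  D1026    JUDGE ANNIE WHITE JOHNSON',
    'ANDREWS VS BALL        JA-15-0050  D0015   JUDGE EDWARD A ROBERTS   ',
]
for line in lines:
    print(re.sub(r'\s{2,}', '|', line.strip()))
# B, NICKOLAS|CT144531X|D1026|JUDGE ANNIE WHITE JOHNSON
# ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS
```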
Ansible roles/packages - Ansible Galaxy - error on instalation MAC OSX | 36,958,125 | 13 | 2016-04-30T17:41:49Z | 36,987,168 | 59 | 2016-05-02T16:29:29Z | [
"python",
"ansible",
"ansible-galaxy"
] | Im trying to install ansible-galaxy roles on Mac OS X El Capitan via CLI
```
$ ansible-galaxy install -r requirements.yml
```
I am getting this error:
```
ERROR! Unexpected Exception: (setuptools 1.1.6 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('setuptools>=11.3'))
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 73, in <module>
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
File "/Library/Python/2.7/site-packages/ansible/cli/galaxy.py", line 38, in <module>
from ansible.galaxy.role import GalaxyRole
File "/Library/Python/2.7/site-packages/ansible/galaxy/role.py", line 35, in <module>
from ansible.playbook.role.requirement import RoleRequirement
File "/Library/Python/2.7/site-packages/ansible/playbook/__init__.py", line 25, in <module>
from ansible.playbook.play import Play
File "/Library/Python/2.7/site-packages/ansible/playbook/play.py", line 27, in <module>
from ansible.playbook.base import Base
File "/Library/Python/2.7/site-packages/ansible/playbook/base.py", line 35, in <module>
from ansible.parsing.dataloader import DataLoader
File "/Library/Python/2.7/site-packages/ansible/parsing/dataloader.py", line 32, in <module>
from ansible.parsing.vault import VaultLib
File "/Library/Python/2.7/site-packages/ansible/parsing/vault/__init__.py", line 67, in <module>
from cryptography.hazmat.primitives.hashes import SHA256 as c_SHA256
File "/Library/Python/2.7/site-packages/cryptography/hazmat/primitives/hashes.py", line 15, in <module>
from cryptography.hazmat.backends.interfaces import HashBackend
File "/Library/Python/2.7/site-packages/cryptography/hazmat/backends/__init__.py", line 7, in <module>
import pkg_resources
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2797, in <module>
parse_requirements(__requires__), Environment()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 580, in resolve
raise VersionConflict(dist,req) # XXX put more info here
VersionConflict: (setuptools 1.1.6 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('setuptools>=11.3'))
```
Any thoughts? | Run the following to upgrade setuptools under the `python` user:
```
pip install --upgrade setuptools --user python
```
For some reason, the way things are installed inside OS X (and in my case, under CentOS 7 inside a Docker container), the setuptools package doesn't get installed correctly under the right user. |
Zip list of tuples with flat list | 36,961,738 | 4 | 2016-05-01T00:32:43Z | 36,961,763 | 7 | 2016-05-01T00:35:38Z | [
"python",
"python-3.x"
] | I'm wondering if there's an easy way to do the following in Python 3.x. Say I have two lists structured as follows:
```
list_a = [(1,2), (1,2), (1,2), ...]
list_b = [3, 3, 3, ...]
```
What's the simplest way to produce a generator (here represented by calling a function `funky_zip`) that would let me iterate through these two lists like so:
```
>>> for a, b, c, in funky_zip(list_a, list_b):
>>> print(a, b, c)
...
1 2 3
1 2 3
1 2 3
# and so on
```
I could just do
```
for aa, b in zip(list_a, list_b):
print(aa[0], aa[1], b)
```
but I'm wondering if there's a nice way to do this without having to unpack the tuples. Thanks! | You just need parentheses:
```
list_a = [(1,2), (1,2), (1,2)]
list_b = [3, 3, 3]
for (a, b), c in zip(list_a, list_b):
print(a, b, c)
```
Result:
```
1 2 3
1 2 3
1 2 3
``` |
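If you do want an actual `funky_zip` callable as named in the question, a minimal generator (a sketch) is:

```python
def funky_zip(list_a, list_b):
    # Unpack each tuple from list_a and pair it with the matching
    # element of list_b, yielding flat (a, b, c) triples.
    for (a, b), c in zip(list_a, list_b):
        yield a, b, c

for a, b, c in funky_zip([(1, 2), (1, 2), (1, 2)], [3, 3, 3]):
    print(a, b, c)  # 1 2 3 (three times)
```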
Theano CNN on CPU: AbstractConv2d Theano optimization failed | 36,965,010 | 8 | 2016-05-01T09:15:23Z | 37,475,827 | 9 | 2016-05-27T05:56:39Z | [
"python",
"neural-network",
"cpu",
"theano",
"blas"
] | I'm trying to train a CNN for object detection on images with the CIFAR10 dataset for a seminar at my university but I get the following Error:
> AssertionError: AbstractConv2d Theano optimization failed: there is no
> implementation available supporting the requested options. Did you
> exclude both "conv\_dnn" and "conv\_gemm" from the optimizer? If on GPU,
> is cuDNN available and does the GPU support it? If on CPU, do you have
> a BLAS library installed Theano can link against?
I am running Anaconda 2.7 within a Jupyter notebook (CNN training on CPU) from a Windows 10 machine. As I already have updated to the newest theano version using git clone I tried the following things:
* exclude dnn and gemm directly from within the code `THEANO_FLAGS='optimizer_excluding=conv_dnn, optimizer_excluding=conv_gemm'`
* exclude dnn and gemm directly from cmd by typing `THEANO_FLAGS='...' python <myscript>.py`, which unsurprisingly gives an "unknown command" error.
* exclude dnn and gemm from a .theanorc.txt which I put into C:/user/myusername
Unfortunately, I still get the same error, and when I call `print(theano.config)` the terms "conv\_dnn" and "conv\_gemm" do not appear.
* Furthermore, I tried to find out what BLAS my numpy package is using (which generally works well) and whether that package is static, using a tool from dependencywalker.com, but I failed miserably.
So here's my question: How on earth can I set the theano flags properly and how can I check if I suceeded in doing so? If that doesn't help, how can I check what BLAS I am building? Which one should I use and how can I change the dependency for theano?
As you might have guessed, I am not an expert when it comes to all this package, dependency, build and other fancy computer science stuff, and the documentation I find is just not noob-proof, so I would be most grateful if you guys could help me out!
Best
Jonas | Add one line to .theanorc file
```
optimizer = None
```
as a global configuration. |
Appending to list of tuples | 36,966,839 | 3 | 2016-05-01T12:49:00Z | 36,966,881 | 8 | 2016-05-01T12:53:08Z | [
"python"
] | I have a list of tuple which looks like this:
```
my_list = [(["$"], 1.5)]
```
And I also have these valuables stored as variables:
```
val1 = "#"
val2 = 3.0
```
I want to be able to append val1 to the list within the tuple, and multiply val2 with the second element in the tuple. It should look like this:
```
[(["$", "#"], 4.5)]
```
so far I have this:
```
for item in my_list:
for i in item:
i[0].append(val1)
i[1] = i[1] * val2
```
But so far this is not working, Is there a another way I can do this? | Tuples are immutable. Therefore, you must create a new one:
```
for i, item in enumerate(my_list):
item[0].append("#")
my_list[i] = item[0], item[1] * 3
``` |
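Putting it together with the variables from the question (a sketch):

```python
my_list = [(["$"], 1.5)]
val1 = "#"
val2 = 3.0

for i, item in enumerate(my_list):
    item[0].append(val1)                  # the inner list is mutable in place
    my_list[i] = item[0], item[1] * val2  # the tuple itself must be rebuilt

print(my_list)  # [(['$', '#'], 4.5)]
```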
Python basic Calculator program doesn't return answer | 36,972,518 | 4 | 2016-05-01T22:07:06Z | 36,972,552 | 8 | 2016-05-01T22:10:57Z | [
"python",
"calculator"
] | So I am trying to figure out how to make a calculator with the things that I have learned in python, but I just can't make it give me an answer.
This is the code I have so far:
```
def add(x, y):
return x + y
def subtract (x, y):
return x - y
def divide (x, y):
return x / y
def multiply (x, y):
return x / y
print("What calculation would you like to make?")
print("Add")
print("Subtract")
print("Divide")
print("Multiply")
choice = input("Enter choice (add/subtract/divide/multiply)\n")
num1 = float(input("Enter first number: "))
num2 = float(input("Enter second number: "))
if choice == ("add, Add"):
print(add(num1,num2))
elif choice == ("subtract, Subtract"):
print(subtract(num1,num2))
elif choice == ("divide, Divide"):
print(divide(num1,num2))
elif choice == ("multiply, Multiply"):
    print(multiply(num1,num2))
``` | Instead of:
```
choice == ("add, Add")
```
You want:
```
choice in ["add", "Add"]
```
Or more likely:
```
choice.lower() == "add"
```
Why? In your code you're comparing the input to the single string "add, Add" (the parentheses don't make it a tuple), which is not what you want. You instead want to check that the input is in the list ["add", "Add"]. Alternatively, the better way to handle this input is to lowercase it and compare it to the string you want. |
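One compact way to wire that up (a sketch, one of several reasonable fixes — the `operations` dict is my own naming) is to normalize the input once and dispatch through a dict:

```python
def add(x, y):      return x + y
def subtract(x, y): return x - y
def divide(x, y):   return x / y
def multiply(x, y): return x * y   # note: the question's multiply divides by mistake

operations = {'add': add, 'subtract': subtract,
              'divide': divide, 'multiply': multiply}

choice = 'Multiply'        # stands in for input(...)
num1, num2 = 6.0, 7.0
op = operations.get(choice.lower())
print(op(num1, num2) if op else 'Unknown choice')  # 42.0
```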
How to crop biggest rectangle out of an image | 36,982,736 | 32 | 2016-05-02T12:41:51Z | 36,988,763 | 10 | 2016-05-02T18:04:50Z | [
"python",
"opencv",
"image-processing"
] | I have a few images of pages on a table. I would like to crop the pages out of the image. Generally, the page will be the biggest rectangle in the image, however, all four sides of the rectangle might not be visible in some cases.
I am doing the following but not getting desired results:
```
import cv2
import numpy as np
im = cv2.imread('images/img5.jpg')
gray=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,127,255,0)
_,contours,_ = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
cnt=contours[max_index]
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)
cv2.imshow("Show",im)
cv2.imwrite("images/img5_rect.jpg", im)
cv2.waitKey(0)
```
Below are a few examples:
**1st Example**: I can find the rectangle in this image, however, I would like the remaining part of the wood to be cropped out as well.
[](http://i.stack.imgur.com/OBF2r.jpg)
[](http://i.stack.imgur.com/hx8vL.jpg)
**2nd Example**: Not finding the correct dimensions of the rectangle in this image.
[](http://i.stack.imgur.com/z8TQm.jpg)
[](http://i.stack.imgur.com/uWx6n.jpg)
**3rd Example**: Not able to find the correct dimensions in this image either.
[](http://i.stack.imgur.com/ofWJG.jpg)
[](http://i.stack.imgur.com/AaJ5r.jpg)
**4th Example**: Same with this as well.
[](http://i.stack.imgur.com/dSIqG.jpg)
[](http://i.stack.imgur.com/VginS.jpg) | That's a pretty complicated task which cannot be solved by simply searching contours. The Economist cover for example only shows 1 edge of the magazine which splits the image in half. How should your computer know which one is the magazine and which one is the table? So you have to add much more intelligence to your program.
You might look for lines in your image. Hough transform for example. Then find sets of more or less parallel or orthogonal lines, lines of a certain length...
Find prints by checking for typical print colours or colours that you usually don't find on a table. Search for high contrast frequencies as created by printed texts...
Imagine how you as a human recognize a printed paper...
All in all this is a too broad question for StackOverflow. Try to break it down into smaller sub-problems, try to solve them and if you hit a wall, come back here. |
How to crop biggest rectangle out of an image | 36,982,736 | 32 | 2016-05-02T12:41:51Z | 37,176,835 | 15 | 2016-05-12T03:38:58Z | [
"python",
"opencv",
"image-processing"
] | I have a few images of pages on a table. I would like to crop the pages out of the image. Generally, the page will be the biggest rectangle in the image, however, all four sides of the rectangle might not be visible in some cases.
I am doing the following but not getting desired results:
```
import cv2
import numpy as np
im = cv2.imread('images/img5.jpg')
gray=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,127,255,0)
_,contours,_ = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
cnt=contours[max_index]
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)
cv2.imshow("Show",im)
cv2.imwrite("images/img5_rect.jpg", im)
cv2.waitKey(0)
```
Below are a few examples:
**1st Example**: I can find the rectangle in this image, however, I would like the remaining part of the wood to be cropped out as well.
[](http://i.stack.imgur.com/OBF2r.jpg)
[](http://i.stack.imgur.com/hx8vL.jpg)
**2nd Example**: Not finding the correct dimensions of the rectangle in this image.
[](http://i.stack.imgur.com/z8TQm.jpg)
[](http://i.stack.imgur.com/uWx6n.jpg)
**3rd Example**: Not able to find the correct dimensions in this image either.
[](http://i.stack.imgur.com/ofWJG.jpg)
[](http://i.stack.imgur.com/AaJ5r.jpg)
**4th Example**: Same with this as well.
[](http://i.stack.imgur.com/dSIqG.jpg)
[](http://i.stack.imgur.com/VginS.jpg) | As I have previously done something similar, I have experienced with hough transforms, but they were much harder to get right for my case than using contours. I have the following suggestions to help you get started:
1. Generally paper (edges, at least) is white, so you may have better luck by going to a colorspace like YUV which better separates luminosity:
```
image_yuv = cv2.cvtColor(image,cv2.COLOR_BGR2YUV)
image_y = np.zeros(image_yuv.shape[0:2],np.uint8)
image_y[:,:] = image_yuv[:,:,0]
```
2. The text on the paper is a problem. Use a blurring effect, to (hopefully) remove these high frequency noises. You may also use morphological operations like dilation as well.
```
image_blurred = cv2.GaussianBlur(image_y,(3,3),0)
```
3. You may try to apply a canny edge-detector, rather than a simple threshold. Not necessarily, but may help you:
```
edges = cv2.Canny(image_blurred,100,300,apertureSize = 3)
```
4. Then find the contours. In my case I only used the extreme outer contours. You may use CHAIN\_APPROX\_SIMPLE flag to compress the contour
```
contours,hierarchy = cv2.findContours(edges,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
```
5. Now you should have a bunch of contours. Time to find the right ones. For each contour `cnt`, first find the convex hull, then use `approxPolyDP` to simplify the contour as much as possible.
```
hull = cv2.convexHull(cnt)
simplified_cnt = cv2.approxPolyDP(hull,0.001*cv2.arcLength(hull,True),True)
```
6. Now we should use this simplified contour to find the enclosing quadrilateral. You may experiment with lots of rules you come up with. The simplest method is picking the four longest segments of the contour, and then creating the enclosing quadrilateral by intersecting these four lines. Based on your case, you can find these lines based on the contrast the line makes, the angle they make, and similar things.
7. Now you have a bunch of quadrilaterals. You can now perform a two step method to find your required quadrilateral. First you remove those ones that are probably wrong. For example one angle of the quadrilateral is more than 175 degrees. Then you can pick the one with the biggest area as the final result. You can see the orange contour as one of the results I got at this point:
[](http://i.stack.imgur.com/xLGJw.jpg)
8. The final step after finding (hopefully) the right quadrilateral, is transforming back to a rectangle. For this you can use `findHomography` to come up with a transformation matrix.
```
(H,mask) = cv2.findHomography(cnt.astype('single'),np.array([[[0., 0.]],[[2150., 0.]],[[2150., 2800.]],[[0.,2800.]]],dtype=np.single))
```
The numbers assume projecting to letter paper. You may come up with better and more clever numbers to use. You also need to reorder the contour points to match the order of coordinates of the letter paper. Then you call `warpPerspective` to create the final image:
```
final_image = cv2.warpPerspective(image,H,(2150, 2800))
```
This warping should result in something like the following (from my results before):
[](http://i.stack.imgur.com/cU57Y.jpg)
I hope this helps you to find an appropriate approach in your case. |
How to eliminate all strings from a list | 37,004,138 | 5 | 2016-05-03T12:28:05Z | 37,004,241 | 9 | 2016-05-03T12:31:58Z | [
"python",
"python-3.x"
] | my question is how to eliminate all strings from a list, for example if I have `list=['hello',1,2,3,4,'goodbye','help']` and the outcome to be `list=[1,2,3,4]` | You need to use [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance) to filter out those elements that are string. Also don't name your variable `list` it will shadow the built in `list`
```
>>> from numbers import Real
>>> lst = ['hello', 1, 2, 3, 4, 'goodbye', 'help']
>>> [element for element in lst if isinstance(element, Real)]
[1, 2, 3, 4]
```
or
```
>>> [element for element in lst if isinstance(element, int)]
[1, 2, 3, 4]
```
or
```
>>> [element for element in lst if not isinstance(element, str)]
[1, 2, 3, 4]
``` |
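One caveat worth knowing: `bool` is a subclass of `int`, so `isinstance(element, int)` also keeps `True`/`False`. If that matters for your data, exclude them explicitly (a sketch):

```python
lst = ['hello', 1, True, 2, 'bye']
print([e for e in lst if isinstance(e, int)])
# [1, True, 2]  -- True sneaks through
print([e for e in lst if isinstance(e, int) and not isinstance(e, bool)])
# [1, 2]
```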
What is the fastest way to upload a big csv file in notebook to work with python pandas? | 37,010,212 | 7 | 2016-05-03T17:06:48Z | 37,012,035 | 12 | 2016-05-03T18:45:37Z | [
"python",
"csv",
"pandas",
"dataframe"
] | I'm trying to upload a csv file, which is 250MB. Basically 4 million rows and 6 columns of time series data (1min). The usual procedure is:
```
location = r'C:\Users\Name\Folder_1\Folder_2\file.csv'
df = pd.read_csv(location)
```
This procedure takes about 20 minutes!!! Very preliminarily, I have explored the following options:
* [Upload in chunks and then put the chunks together.](http://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas/14268804#14268804)
* [HDF5](http://docs.h5py.org/en/latest/)
* ['feather'](https://blog.cloudera.com/blog/2016/03/feather-a-fast-on-disk-format-for-data-frames-for-r-and-python-powered-by-apache-arrow/)
* ['pickle'](https://docs.python.org/2/library/pickle.html)
I wonder if anybody has compared these options (or more) and whether there's a clear winner. If nobody answers, I will post my results in the future; I just don't have time right now. | Here are results of my read and write comparison for the DF (shape: 4000000 x 6, size in memory 183.1 MB, size of uncompressed CSV - 492 MB).
Comparison for the following storage formats: (`CSV`, `CSV.gzip`, `Pickle`, `HDF5` [various compression]):
```
read_s write_s size_ratio_to_CSV
storage
CSV 17.900 69.00 1.000
CSV.gzip 18.900 186.00 0.047
Pickle 0.173 1.77 0.374
HDF_fixed 0.196 2.03 0.435
HDF_tab 0.230 2.60 0.437
HDF_tab_zlib_c5 0.845 5.44 0.035
HDF_tab_zlib_c9 0.860 5.95 0.035
HDF_tab_bzip2_c5 2.500 36.50 0.011
HDF_tab_bzip2_c9 2.500 36.50 0.011
```
reading
[](http://i.stack.imgur.com/f7liH.png)
writing/saving
[](http://i.stack.imgur.com/yM1NB.png)
file size ratio in relation to uncompressed CSV file
[](http://i.stack.imgur.com/2DDyv.png)
**RAW DATA:**
CSV:
```
In [68]: %timeit df.to_csv(fcsv)
1 loop, best of 3: 1min 9s per loop
In [74]: %timeit pd.read_csv(fcsv)
1 loop, best of 3: 17.9 s per loop
```
CSV.gzip:
```
In [70]: %timeit df.to_csv(fcsv_gz, compression='gzip')
1 loop, best of 3: 3min 6s per loop
In [75]: %timeit pd.read_csv(fcsv_gz)
1 loop, best of 3: 18.9 s per loop
```
Pickle:
```
In [66]: %timeit df.to_pickle(fpckl)
1 loop, best of 3: 1.77 s per loop
In [72]: %timeit pd.read_pickle(fpckl)
10 loops, best of 3: 173 ms per loop
```
HDF (`format='fixed'`) [Default]:
```
In [67]: %timeit df.to_hdf(fh5, 'df')
1 loop, best of 3: 2.03 s per loop
In [73]: %timeit pd.read_hdf(fh5, 'df')
10 loops, best of 3: 196 ms per loop
```
HDF (`format='table'`):
```
In [37]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab.h5', 'df', format='t')
1 loop, best of 3: 2.6 s per loop
In [38]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab.h5', 'df')
1 loop, best of 3: 230 ms per loop
```
HDF (`format='table', complib='zlib', complevel=5`):
```
In [40]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib5.h5', 'df', format='t', complevel=5, complib='zlib')
1 loop, best of 3: 5.44 s per loop
In [41]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib5.h5', 'df')
1 loop, best of 3: 854 ms per loop
```
HDF (`format='table', complib='zlib', complevel=9`):
```
In [36]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib9.h5', 'df', format='t', complevel=9, complib='zlib')
1 loop, best of 3: 5.95 s per loop
In [39]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib9.h5', 'df')
1 loop, best of 3: 860 ms per loop
```
HDF (`format='table', complib='bzip2', complevel=5`):
```
In [42]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l5.h5', 'df', format='t', complevel=5, complib='bzip2')
1 loop, best of 3: 36.5 s per loop
In [43]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l5.h5', 'df')
1 loop, best of 3: 2.5 s per loop
```
HDF (`format='table', complib='bzip2', complevel=9`):
```
In [42]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l9.h5', 'df', format='t', complevel=9, complib='bzip2')
1 loop, best of 3: 36.5 s per loop
In [43]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l9.h5', 'df')
1 loop, best of 3: 2.5 s per loop
```
PS: I can't test `feather` on my *Windows* notebook
DF info:
```
In [49]: df.shape
Out[49]: (4000000, 6)
In [50]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4000000 entries, 0 to 3999999
Data columns (total 6 columns):
a datetime64[ns]
b datetime64[ns]
c datetime64[ns]
d datetime64[ns]
e datetime64[ns]
f datetime64[ns]
dtypes: datetime64[ns](6)
memory usage: 183.1 MB
In [41]: df.head()
Out[41]:
a b c \
0 1970-01-01 00:00:00 1970-01-01 00:01:00 1970-01-01 00:02:00
1 1970-01-01 00:01:00 1970-01-01 00:02:00 1970-01-01 00:03:00
2 1970-01-01 00:02:00 1970-01-01 00:03:00 1970-01-01 00:04:00
3 1970-01-01 00:03:00 1970-01-01 00:04:00 1970-01-01 00:05:00
4 1970-01-01 00:04:00 1970-01-01 00:05:00 1970-01-01 00:06:00
d e f
0 1970-01-01 00:03:00 1970-01-01 00:04:00 1970-01-01 00:05:00
1 1970-01-01 00:04:00 1970-01-01 00:05:00 1970-01-01 00:06:00
2 1970-01-01 00:05:00 1970-01-01 00:06:00 1970-01-01 00:07:00
3 1970-01-01 00:06:00 1970-01-01 00:07:00 1970-01-01 00:08:00
4 1970-01-01 00:07:00 1970-01-01 00:08:00 1970-01-01 00:09:00
```
File sizes:
```
{ .data } » ls -lh 37010212.* /d/temp/.data
-rw-r--r-- 1 Max None 492M May 3 22:21 37010212.csv
-rw-r--r-- 1 Max None 23M May 3 22:19 37010212.csv.gz
-rw-r--r-- 1 Max None 214M May 3 22:02 37010212.h5
-rw-r--r-- 1 Max None 184M May 3 22:02 37010212.pickle
-rw-r--r-- 1 Max None 215M May 4 10:39 37010212_tab.h5
-rw-r--r-- 1 Max None 5.4M May 4 10:46 37010212_tab_compress_bzip2_l5.h5
-rw-r--r-- 1 Max None 5.4M May 4 10:51 37010212_tab_compress_bzip2_l9.h5
-rw-r--r-- 1 Max None 17M May 4 10:42 37010212_tab_compress_zlib5.h5
-rw-r--r-- 1 Max None 17M May 4 10:36 37010212_tab_compress_zlib9.h5
```
**Conclusion:**
`Pickle` and `HDF5` are much faster, but `HDF5` is more convenient - you can store multiple tables/frames inside, you can read your data conditionally (look at the `where` parameter in [read\_hdf()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_hdf.html)), and you can also store your data compressed (`zlib` is faster, `bzip2` provides a better compression ratio), etc.
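The size side of the zlib-vs-bzip2 trade-off shows up even with the stdlib codecs alone. A toy sketch on synthetic, repetitive CSV-like bytes (not the benchmark frame above, so the absolute numbers will differ):

```python
import bz2
import zlib

# ~3 MB of repetitive, CSV-like bytes -- a stand-in for the timestamp frame.
data = b"1970-01-01 00:00:00,1970-01-01 00:01:00\n" * 75_000

z = zlib.compress(data, 5)  # fast, moderate ratio
b = bz2.compress(data, 5)   # slower, but a better ratio on repetitive data

print("raw: ", len(data))
print("zlib:", len(z))
print("bz2: ", len(b))
```

On this kind of highly repetitive input, `bz2` produces the smaller output, mirroring the 5.4M-vs-17M HDF file sizes above.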
PS: if you can build/use `feather-format`, it should be even faster than `HDF5` and `Pickle`
**PPS:** don't use Pickle for big data frames, as you may end up with a [SystemError: error return without exception set](http://stackoverflow.com/questions/37553397/python-applymap-taking-time-to-run) error message. It's also described [here](https://github.com/pydata/pandas/issues/12712) and [here](https://github.com/pydata/pandas/issues/3699).
How to tell if a single line of python is syntactically valid? | 37,012,947 | 15 | 2016-05-03T19:37:17Z | 37,014,121 | 13 | 2016-05-03T20:50:31Z | [
"python",
"validation"
] | It is very similar to this:
[How to tell if a string contains valid Python code](http://stackoverflow.com/questions/11854745/how-to-tell-if-a-string-contains-valid-python-code)
The only difference being instead of the entire program being given altogether, I am interested in a single line of code at a time.
Formally, we say a line of python is "syntactically valid" if there exists any syntactically valid python program that uses that particular line.
For instance, I would like to identify these as syntactically valid lines:
```
for i in range(10):
x = 1
```
Because one can use these lines in some syntactically valid python programs.
I would like to identify these lines as syntactically invalid lines:
```
for j in range(10 in range(10(
x =++-+ 1+-
```
Because no syntactically correct python program could ever use these lines.
The check does not need to be too strict, it just needs to be good enough to filter out obviously bogus statements (like the ones shown above). The line is given as a string, of course. | This uses [`codeop.compile_command`](https://docs.python.org/3/library/codeop.html#codeop.compile_command) to attempt to compile the code. This is the same logic that the [`code`](https://docs.python.org/3/library/code.html) module [uses](https://github.com/python/cpython/blob/master/Lib/code.py) to determine whether to ask for another line or immediately fail with a syntax error.
```
import codeop
def is_valid_code(line):
try:
codeop.compile_command(line)
except SyntaxError:
return False
else:
return True
```
It can be used as follows:
```
>>> is_valid_code('for i in range(10):')
True
>>> is_valid_code('')
True
>>> is_valid_code('x = 1')
True
>>> is_valid_code('for j in range(10 in range(10(')
True
>>> is_valid_code('x = ++-+ 1+-')
False
```
I'm sure at this point, you're saying "what gives? `for j in range(10 in range(10(` was supposed to be *invalid!*" The problem with this line is that `10()` is technically *syntactically* valid, at least according to the Python interpreter. In the REPL, you get this:
```
>>> 10()
Traceback (most recent call last):
File "<pyshell#22>", line 1, in <module>
10()
TypeError: 'int' object is not callable
```
Notice how this is a `TypeError`, *not* a `SyntaxError`. [`ast.parse`](https://docs.python.org/3/library/ast.html#ast.parse) says it is valid as well, and just treats it as a call with the function being an `ast.Num`.
These kinds of things can't easily be caught until they actually run.
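You can confirm that the parser is happy with `10()` by inspecting the AST (stdlib only; on Python 3.8+ the literal appears as `ast.Constant` rather than `ast.Num`):

```python
import ast

# The grammar allows any expression in call position, so "10()" parses
# into a Call node whose func is the integer literal itself.
tree = ast.parse("10()", mode="eval")
print(type(tree.body).__name__)  # Call
print(ast.dump(tree.body.func))  # the literal 10, as a Constant/Num node
```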
If some kind of monster managed to modify the cached `10` object (which would technically be possible), you might be able to do `10()`. It's still allowed by the syntax.
What about the unbalanced parentheses? This fits the same bill as `for i in range(10):`. This line is not a complete statement on its own, but may be the first line in a multi-line expression. For example, see the following:
```
>>> is_valid_code('if x ==')
False
>>> is_valid_code('if (x ==')
True
```
The second line is `True` because the expression could continue like this:
```
if (x ==
3):
print('x is 3!')
```
and the expression would be complete. In fact, [`codeop.compile_command`](https://docs.python.org/3/library/codeop.html#codeop.compile_command) distinguishes between these different situations by returning a code object if it's a valid self-contained line, `None` if the line is expected to continue for a full expression, and throwing a `SyntaxError` on an invalid line.
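The three outcomes can be seen directly (a small sketch using the same stdlib function):

```python
import codeop

# Complete statement -> a code object is returned.
print(codeop.compile_command("x = 1") is not None)   # True

# Could continue on the next line -> None.
print(codeop.compile_command("if (x ==") is None)    # True

# Definitely broken -> SyntaxError.
try:
    codeop.compile_command("x = ++-+ 1+-")
except SyntaxError:
    print("SyntaxError")
```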
However, you can also get into a much more complicated problem than initially stated. For example, consider the line `)`. If it's the start of the module, or the previous line is `{`, then it's invalid. However, if the previous line is `(1,2,`, it's completely valid.
The solution given here will work if you only work forward, and append previous lines as context, which is what the [code](https://docs.python.org/3/library/code.html) module [does](https://github.com/python/cpython/blob/master/Lib/code.py) for an interactive session. Creating something that can always accurately identify whether a single line could *possibly* exist in a Python file without considering surrounding lines is going to be extremely difficult, as the Python grammar interacts with newlines in non-trivial ways. This answer responds with whether a given line could be at the beginning of a module and continue on to the next line without failing.
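Appending previous lines as context before the check can be sketched in a few lines (a hypothetical helper, not part of `codeop` itself):

```python
import codeop

def valid_with_context(previous_lines, line):
    """Judge `line` against the lines that came before it."""
    source = "\n".join(list(previous_lines) + [line])
    try:
        codeop.compile_command(source)
    except SyntaxError:
        return False
    return True

print(valid_with_context([], ")"))          # False: nothing to close
print(valid_with_context(["(1, 2,"], ")"))  # True: completes the tuple
```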
It would be better to identify what the purpose of recognizing single lines is and solve that problem in a different way than trying to solve this for every case. |
Python dictionary doesn't have all the keys assigned, or items | 37,018,085 | 26 | 2016-05-04T03:50:06Z | 37,018,119 | 28 | 2016-05-04T03:53:55Z | [
"python",
"dictionary"
] | I created the following dictionary
```
exDict = {True: 0, False: 1, 1: 'a', 2: 'b'}
```
and when I print `exDict.keys()`, well, it gives me a generator. Ok, so I coerce it to a list, and it gives me
```
[False, True, 2]
```
Why isn't 1 there? When I print `exDict.items()` it gives me
```
[(False, 1), (True, 'a'), (2, 'b')]
```
Anyone have a guess about what's going on here? I'm stumped. | This happens because `True == 1` (and `False == 0`, but you didn't have `0` as a key). You'll have to refactor your code or data somehow, because a `dict` considers keys to be the same if they are "equal" (rather than `is`). |
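Because `True == 1` and `hash(True) == hash(1)`, the two collide as dictionary keys -- the second assignment overwrites the value but keeps the original key object:

```python
# True and 1 are equal and hash identically, so they are the same dict key.
print(True == 1, hash(True) == hash(1))  # True True

d = {True: 0}
d[1] = "a"      # overwrites the value stored under True
print(d)        # {True: 'a'}
print(list(d))  # [True] -- the first-inserted key object is kept
```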
Python dictionary doesn't have all the keys assigned, or items | 37,018,085 | 26 | 2016-05-04T03:50:06Z | 37,018,123 | 12 | 2016-05-04T03:54:18Z | [
"python",
"dictionary"
] | I created the following dictionary
```
exDict = {True: 0, False: 1, 1: 'a', 2: 'b'}
```
and when I print `exDict.keys()`, well, it gives me a generator. Ok, so I coerce it to a list, and it gives me
```
[False, True, 2]
```
Why isn't 1 there? When I print `exDict.items()` it gives me
```
[(False, 1), (True, 'a'), (2, 'b')]
```
Anyone have a guess about what's going on here? I'm stumped. | What you are seeing is Python treating the key `1` as equal to the key `True`.
You'll see that the dictionary you print is:
```
False 1
True a
2 b
```
The value `a` was meant to be assigned to the key `1`, but instead the value for `True` got reassigned to `a`.
According to the [Python 3 Documentation](https://docs.python.org/release/3.0.1/reference/datamodel.html#the-standard-type-hierarchy):
> The Boolean type is a subtype of the integer type, and **Boolean values behave like the values 0 and 1**, respectively, in almost all contexts, the exception being that when converted to a string, the strings "False" or "True" are returned, respectively.
Emphasis mine.
Note: In python 2.X `True` and `False` can be re-assigned, so this behavior cannot be guaranteed. |
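The subtype relationship is easy to verify (Python 3, where `True` and `False` are keywords and cannot be reassigned):

```python
print(isinstance(True, int))  # True: bool is a subclass of int
print(True + True)            # 2: Booleans behave like 1 and 0 in arithmetic
print(str(True))              # 'True': the string-conversion exception
```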
Lambda use case confusion | 37,046,966 | 9 | 2016-05-05T09:26:23Z | 37,047,177 | 8 | 2016-05-05T09:35:58Z | [
"python"
] | I've been playing around with Celery / Django. In their example celery.py file there is the following line
```
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS, force=True)
```
Where `lambda:settings.INSTALLED_APPS` is the actual parameter for the formal parameter `packages` in `autodiscover_tasks()`. And `settings.INSTALLED_APPS` is a tuple.
`autodiscover_tasks()` then either calls the function it was passed or directly assigns the variable it was given in one of its first lines...
```
packages = packages() if callable(packages) else packages
```
So my question is: I just don't get why this was done this way. It seems very redundant. Why not just pass `settings.INSTALLED_APPS` as the tuple god wanted it to be? Why pass an anonymous function that calls it instead? What am I missing here? | Since Celery is asynchronous, there is no guarantee that `settings.INSTALLED_APPS` will not change while other computations are being performed, so wrapping it inside a `lambda` defers the lookup of its value until the lambda gets called.
**EDIT (adding an example as commented):**
```
settings.INSTALLED_APPS = 10
app.autodiscover_tasks(settings.INSTALLED_APPS, force=True)  # called with INSTALLED_APPS = 10, so it gives you an output
```
Now think of this: while `app.autodiscover_tasks` is being called and its internal computations are running, something else is being computed too, and `settings.INSTALLED_APPS` now `= 8`. Since you passed the variable directly, your call is using `10` instead of `8`. But by encapsulating it in the `lambda` (`app.autodiscover_tasks(lambda: settings.INSTALLED_APPS, force=True)`) it will fetch the value **when** it needs it, staying in sync with the actual value, which should be `8`.
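The deferred lookup can be demonstrated without Celery at all. A minimal sketch mirroring the `packages() if callable(packages) else packages` line (`autodiscover` and `config` here are illustrative stand-ins, not real Celery API):

```python
config = {"INSTALLED_APPS": ("app_a", "app_b")}

def autodiscover(packages):
    # Mirrors Celery's line: accept either a value or a zero-arg callable.
    packages = packages() if callable(packages) else packages
    return list(packages)

eager = config["INSTALLED_APPS"]          # snapshot, taken right now
lazy = lambda: config["INSTALLED_APPS"]   # looked up only when called

config["INSTALLED_APPS"] = ("app_a", "app_b", "app_c")  # settings change later

print(autodiscover(eager))  # ['app_a', 'app_b']          (stale)
print(autodiscover(lazy))   # ['app_a', 'app_b', 'app_c'] (current)
```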