title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
dynamic module does not define init function (PyInit_fuzzy) | 29,657,319 | 7 | 2015-04-15T17:51:53Z | 29,685,650 | 9 | 2015-04-16T20:49:24Z | [
"python",
"cython"
] | I am using Python3.4 and I am trying to install the module fuzzy
```
https://pypi.python.org/pypi/Fuzzy.
```
Since it is mentioned it works only for Python2, I tried to convert it using cython. These are the steps that I followed:
1. cython fuzzy.pyx
2. gcc -g -O2 -fpic `python-config --cflags` -c fuzzy.c -o fuzzy.o
3. did the same for double\_metaphone.c
4. gcc -shared -o fuzzy.so fuzzy.o double\_metaphone.o `python-config --libs`
When I tried to import fuzzy I got an error:
```
dynamic module does not define init function (PyInit_fuzzy)
```
What is the issue? Is this because of the python2 and python3 clash? How to resolve this? | This was solved with a quick comment, but posted as an answer for the sake of giving a bit more detail...
The very short answer is to replace all instances of `python-config` for `python3-config` or `python3.4-config`.
*Unnecessary detail follows*
OP was trying to use a Pyrex module in Python 3 (this isn't especially clear from the question), and hence rebuilding it in Cython is a sensible approach to take, since Cython was originally based on Pyrex.
Cython generates code that should compile to work in Python 2 or 3, depending on which headers are included. `python-config` generates relevant compiler/linker options for the default version of Python on the system which at the time of writing is typically Python 2 (on my system it includes `-I/usr/include/python2.7 -I/usr/include/x86_64-linux-gnu/python2.7`). Therefore it builds the module for Python 2. Using the `python3.4-config` ensures that the right version is included.
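As a quick sanity check (an addition for illustration, not part of the original answer), you can ask an interpreter which header directory its build configuration points at; the version encoded in that path tells you which init-symbol convention the compiled module will follow:

```python
import sysconfig

# The include path encodes the interpreter version,
# e.g. /usr/include/python3.4 for a Python 3.4 build.
inc = sysconfig.get_paths()["include"]
print(inc)
```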
In the changeover from Python 2 to Python 3 the function called when C modules are imported was changed from `init<modulename>` to `PyInit_<modulename>`, presumably to help ensure that you can only import modules built for the correct version. Therefore when the module is built with Python 2 it only creates `initfuzzy`, and therefore fails to find `PyInit_fuzzy` on import. |
google-app-engine 1.9.19 deploy failure | 29,657,557 | 9 | 2015-04-15T18:04:49Z | 32,111,611 | 13 | 2015-08-20T07:04:19Z | [
"python",
"google-app-engine"
] | When trying to deploy using the new Python GoogleAppEngine-1.9.19 in the form of the GAE launcher on Windows 7, I'm not asked to authenticate. Instead, I get a page for accepting GAE management, and then I'm running locally. Moreover, clicking the close box on the launcher does nothing and I have to kill it externally.
I don't understand the error messages or what to do. I don't have a file named old\_run.py. I uninstalled 1.9.19 and reinstalled 1.9.18, and deploy works as always. Is 1.9.19 broken?
```
2015-04-15 11:27:47 Running command: "['C:\\Python27\\python.exe', '-u', 'C:\\Program Files (x86)\\Google\\google_appengine\\appcfg.py', '--oauth2', 'update', 'M:\\HP_Cdrive\\workspace\\glowscript']"
11:27 AM Application: glowscript; version: 1
11:27 AM Host: appengine.google.com
11:27 AM
Starting update of app: glowscript, version: 1
11:27 AM Getting current resource limits.
2015-04-15 11:27:50,838 WARNING old_run.py:88 This function, oauth2client.tools.run(), and the use of the gflags library are deprecated and will be removed in a future version of the library.
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&response_type=code&client_id=550516889912.apps.googleusercontent.com&access_type=offline
If your browser is on a different machine then exit and re-run
this application with the command-line parameter
--noauth_local_webserver
``` | It's an issue with the Google App Engine SDK, which doesn't allow the user authentication process to complete if the local server is running.
Step 1. Stop the local server.
Step 2. Click on 'Deploy'
Step 3. You should get a message `"The authentication flow has completed."`
Step 4. Close the Window.
Step 5. Deploy again using Google App Engine. You should receive a message `You
can close this window now.` |
Create vertical numpy arrays in python | 29,658,567 | 5 | 2015-04-15T18:58:01Z | 29,658,610 | 9 | 2015-04-15T19:00:26Z | [
"python",
"arrays",
"numpy"
] | I'm using `NumPy` in `Python` to work with arrays. This is the way I'm using to create a vertical array:
```
import numpy as np
a = np.array([[1],[2],[3]])
```
Is there any simple and more direct way to create vertical arrays? | You can use `reshape` or [`vstack`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html) :
```
>>> a=np.arange(1,4)
>>> a
array([1, 2, 3])
>>> a.reshape(3,1)
array([[1],
[2],
[3]])
>>> np.vstack(a)
array([[1],
[2],
[3]])
```
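A related spelling (an addition, not in the original answer) is `reshape(-1, 1)`, which lets NumPy infer the length, so the same call works for a 1-D input of any size:

```python
import numpy as np

a = np.arange(1, 4)
col = a.reshape(-1, 1)  # -1 means "infer this dimension"
print(col.shape)        # (3, 1)
```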
Also, you can use [*broadcasting*](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) in order to reshape your array:
```
In [32]: a = np.arange(10)
In [33]: a
Out[33]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [34]: a[:,None]
Out[34]:
array([[0],
[1],
[2],
[3],
[4],
[5],
[6],
[7],
[8],
[9]])
``` |
How to extract the decimal value of float in python | 29,661,465 | 3 | 2015-04-15T21:43:13Z | 29,661,500 | 11 | 2015-04-15T21:45:51Z | [
"python"
] | I have a program that is a converter for times in minutes and seconds and returns a float value with a decimal, for example:
`6.57312`
I would like to extract the `.57312` part in order to convert it to seconds.
How can I get python to take only the value after the decimal point and put it into a variable that I can then use for the conversion? | You can use a simple modulo operation:
```
dec = 6.57312 % 1
``` |
How to extract the decimal value of float in python | 29,661,465 | 3 | 2015-04-15T21:43:13Z | 29,661,504 | 7 | 2015-04-15T21:46:05Z | [
"python"
] | I have a program that is a converter for times in minutes and seconds and returns a float value with a decimal, for example:
`6.57312`
I would like to extract the `.57312` part in order to convert it to seconds.
How can I get python to take only the value after the decimal point and put it into a variable that I can then use for the conversion? | [`math.modf`](https://docs.python.org/2/library/math.html#math.modf) does that. It also has the advantage that you get the whole part in the same operation.
```
import math
f,i = math.modf(6.57312)
# f == .57312, i==6.0
```
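One caveat worth noting (an addition, not from the original answer): for negative inputs, both parts returned by `modf` carry the sign of the input, which matters if the time value can be negative:

```python
import math

f, i = math.modf(-6.57312)
print(i)            # -6.0
print(round(f, 5))  # -0.57312
```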
Example program:
```
import math
def dec_to_ms(value):
frac,whole = math.modf(value)
return "%d:%02d"%(whole, frac*60)
print dec_to_ms(6.57312)
``` |
Normalize numpy array columns in python | 29,661,574 | 11 | 2015-04-15T21:51:13Z | 29,661,707 | 23 | 2015-04-15T22:02:04Z | [
"python",
"numpy",
"normalize"
] | I have a numpy array where each cell of a specific row represents a value for a feature. I store all of them in a 100\*4 matrix.
```
A B C
1000 10 0.5
765 5 0.35
800 7 0.09
```
Any idea how I can normalize rows of this numpy.array where each value is between 0 and 1?
My desired output is:
```
A B C
1 1 1
0.765 0.5 0.7
0.8 0.7 0.18(which is 0.09/0.5)
```
Thanks in advance :) | If I understand correctly, what you want to do is divide by the maximum value in each column. You can do this easily using [broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
Starting with your example array:
```
import numpy as np
x = np.array([[1000, 10, 0.5],
[ 765, 5, 0.35],
[ 800, 7, 0.09]])
x_normed = x / x.max(axis=0)
print(x_normed)
# [[ 1. 1. 1. ]
# [ 0.765 0.5 0.7 ]
# [ 0.8 0.7 0.18 ]]
```
`x.max(0)` takes the maximum over the 0th dimension (i.e. rows). This gives you a vector of size `(ncols,)` containing the maximum value in each column. You can then divide `x` by this vector in order to normalize your values such that the maximum value in each column will be scaled to 1.
---
If `x` contains negative values you would need to subtract the minimum first:
```
x_normed = (x - x.min(0)) / x.ptp(0)
```
Here, `x.ptp(0)` returns the "peak-to-peak" (i.e. the range, max - min) along axis 0. This normalization also guarantees that the minimum value in each column will be 0. |
Comparing each element between 2 sets? | 29,661,739 | 3 | 2015-04-15T22:04:37Z | 29,661,754 | 8 | 2015-04-15T22:05:17Z | [
"python",
"set"
] | I'm trying to compare 2 sets of data
`set1 = [a,b,c,d,e,f]
set2 = [a,c,e]`
How would I go about finding out which variable from `set2` are in `set1` then output this to another set showing the results? | Use [intersection](http://en.wikibooks.org/wiki/Python_Programming/Sets#Intersection):
```
set2.intersection(set1)
```
Or `set(set2).intersection(set1)` if you actually have a list.
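Equivalently, once both operands are sets, you can use the `&` operator, which is shorthand for `intersection`:

```python
set1 = {"a", "b", "c", "d", "e", "f"}
set2 = {"a", "c", "e"}
print(set2 & set1)  # {'a', 'c', 'e'} (display order may vary)
```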
Based on your comment maybe a dict where the values are booleans based on whether each element from set1 is in the intersection set:
```
set1 = ["a","b","c","d","e","f"]
set2 = ["a","c","e"]
inter = set(set2).intersection(set1)
vals = {k:k in inter for k in set1}
print(vals)
{'a': True, 'c': True, 'b': False, 'e': True, 'd': False, 'f': False}
```
If all you want is a mapping then maybe creating a set from set2 is sufficient:
```
set1 = ["a","b","c","d","e","f"]
set2 = ["a","c","e"]
st2 = set(set2)
vals = {k: k in st2 for k in set1}
```
Or to get two sets, for the common and different elements:
```
st2 = set(set2)
inter = st2.intersection(set1)
diff = st2.difference(set1)
``` |
Strange if statement | 29,663,428 | 24 | 2015-04-16T00:43:35Z | 29,663,465 | 33 | 2015-04-16T00:48:29Z | [
"python",
"python-2.7",
"if-statement"
] | I found this strange `if`-statement in somebody else's code:
```
if variable & 1 == 0:
```
I don't understand it. It should have two `==`, right?
Can somebody explain this? | The conditional combines a [bitwise AND operator](https://wiki.python.org/moin/BitwiseOperators) with an equality comparison:
```
>>> 1 & 1
1
>>> 0 & 1
0
>>> a = 1
>>> a & 1 == 0
False
>>> b = 0
>>> b & 1 == 0
True
```
---
As many of the comments say, for integers this conditional is True for evens and False for odds. The prevalent way to write this is `if variable % 2 == 0:` or `if not variable % 2:`
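A quick check (added for illustration) that the bitwise and modulo spellings agree for all integers, including negative ones, since Python's `&` uses two's-complement semantics and `%` is a floored modulo:

```python
# Both tests classify every integer identically.
for n in range(-10, 11):
    assert (n & 1 == 0) == (n % 2 == 0)
print("both spellings agree")
```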
Using `timeit` we can see that there isn't much difference in performance.
**`n & 1`**`("== 0" and "not")`
```
>>> timeit.Timer("bitwiseIsEven(1)", "def bitwiseIsEven(n): return n & 1 == 0").repeat(4, 10**6)
[0.2037370204925537, 0.20333600044250488, 0.2028651237487793, 0.20192503929138184]
>>> timeit.Timer("bitwiseIsEven(1)", "def bitwiseIsEven(n): return not n & 1").repeat(4, 10**6)
[0.18392395973205566, 0.18273091316223145, 0.1830739974975586, 0.18445897102355957]
```
**`n % 2`**`("== 0" and "not")`
```
>>> timeit.Timer("modIsEven(1)", "def modIsEven(n): return n % 2 == 0").repeat(4, 10**6)
[0.22193098068237305, 0.22170782089233398, 0.21924591064453125, 0.21947598457336426]
>>> timeit.Timer("modIsEven(1)", "def modIsEven(n): return not n % 2").repeat(4, 10**6)
[0.20426011085510254, 0.2046220302581787, 0.2040550708770752, 0.2044820785522461]
```
**Overloaded Operators:**
Both the `%` and `&` operators are overloaded.
The bitwise and operator is overloaded for [`set`](https://docs.python.org/2/library/sets.html). `s.intersection(t)` is equivalent to `s & t` and returns a "new set with elements common to s and t".
```
>>> {1} & {1}
set([1])
```
This doesn't affect our conditional:
```
>>> def bitwiseIsEven(n):
... return n & 1 == 0
>>> bitwiseIsEven('1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in bitwiseIsEven
TypeError: unsupported operand type(s) for &: 'str' and 'int'
>>> bitwiseIsEven({1})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in bitwiseIsEven
TypeError: unsupported operand type(s) for &: 'set' and 'int'
```
The modulo operator will also throw `TypeError: unsupported operand type(s)` for most non-ints.
```
>>> def modIsEven(n):
... return n % 2 == 0
>>> modIsEven({1})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in modIsEven
TypeError: unsupported operand type(s) for %: 'set' and 'int'
```
It is overloaded as a string interpolation operator for the old [`%-formatting`](https://docs.python.org/2/library/stdtypes.html#string-formatting). It throws `TypeError: not all arguments converted during string formatting` if a string is used for the comparison.
```
>>> modIsEven('1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in modIsEven
TypeError: not all arguments converted during string formatting
```
This won't throw if the string includes a valid conversion specifier.
```
>>> modIsEven('%d')
False
``` |
Strange if statement | 29,663,428 | 24 | 2015-04-16T00:43:35Z | 29,663,482 | 29 | 2015-04-16T00:50:04Z | [
"python",
"python-2.7",
"if-statement"
] | I found this strange `if`-statement in somebody else's code:
```
if variable & 1 == 0:
```
I don't understand it. It should have two `==`, right?
Can somebody explain this? | This code just checks if the lowest bit of `variable` is a 0. Based on operator precedence this is:
```
if (variable & 1) == 0:
```
First AND the lowest bit with one (extract just the lowest bit), then check if it is 0. |
Strange if statement | 29,663,428 | 24 | 2015-04-16T00:43:35Z | 29,663,494 | 14 | 2015-04-16T00:51:27Z | [
"python",
"python-2.7",
"if-statement"
] | I found this strange `if`-statement in somebody else's code:
```
if variable & 1 == 0:
```
I don't understand it. It should have two `==`, right?
Can somebody explain this? | The & is a [bitwise operator](https://wiki.python.org/moin/BitwiseOperators). It returns an integer with 1 bit for every bit of its two operands that are both 1, and 0 in all other places. For example:
```
a = 10 # 0b1010
b = 6 # 0b0110
a & b # 0b0010
```
Now, if you have `variable & 1`, you're masking `variable` with `0b1`, which yields 1 only if the last digit in the binary representation is a 1, and 0 otherwise. |
Python app does not print anything when running detached in docker | 29,663,459 | 19 | 2015-04-16T00:47:16Z | 29,745,541 | 16 | 2015-04-20T10:37:19Z | [
"python",
"docker",
"dockerfile"
] | I have a Python (2.7) app which is started in my dockerfile:
```
CMD ["python","main.py"]
```
*main.py* prints some strings when it is started and goes into a loop afterwards:
```
print "App started"
while True:
time.sleep(1)
```
As long as I start the container with the -it flag, everything works as expected:
```
$ docker run --name=myapp -it myappimage
> App started
```
And I can see the same output via logs later:
```
$ docker logs myapp
> App started
```
If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:
```
$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)
```
But the container still seems to run:
```
$ docker ps
Container Status ...
myapp up 4 minutes ...
```
Attach does not display anything either:
```
$ docker attach --sig-proxy=false myapp
(working, no output)
```
Any ideas what's going wrong? Does "print" behave differently when run in the background?
Docker version:
```
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
``` | Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at [GitHub](https://github.com/docker/docker/issues/12447#issuecomment-94417192). Answering it here myself for further reference:
Using unbuffered output with
```
CMD ["python","-u","main.py"]
```
instead of
```
CMD ["python","main.py"]
```
solves the problem; you can see the output (both, stderr and stdout) via
```
docker logs myapp
```
now! |
Determine if an image exists within a larger image, and if so, find it, using Python | 29,663,764 | 4 | 2015-04-16T01:22:57Z | 29,669,787 | 12 | 2015-04-16T08:52:50Z | [
"python",
"opencv",
"image-processing",
"numpy",
"graphics"
] | I need a Python program that I am working on to be able to take a small image, determine if it exists inside a larger image, and if so, report its location. If not, report that. (In my case, the large image will be a screenshot, and the small image an image that may or may not be on the screen, in an HTML5 canvas.) Looking on line, I found out about template matching in OpenCV, which does have excellent Python bindings. I tried the following, based on very similar code I found on line, using numpy as well:
```
import cv2
import numpy as np
image = cv2.imread("screenshot.png")
template = cv2.imread("button.png")
result = cv2.matchTemplate(image,template,cv2.TM_CCOEFF_NORMED)
StartButtonLocation = np.unravel_index(result.argmax(),result.shape)
```
This doesn't do what I need it to do, because it ALWAYS returns a point in the larger image; the point where the match is closest, no matter how terrible a match it is. I want something that finds an exact, pixel for pixel match of the smaller image in the larger image, and if none exists, raises an exception, or returns `False`, or something like that. And, it needs to be fairly quick. Does anyone have a good idea about how to do this? | I will propose an answer that works fast and perfectly if you are looking for `exact match` both in size and in image values.
The idea is to perform a brute-force search for the wanted `h x w` *template* in a larger `H x W` image. The brute-force approach would consist of looking at all the possible `h x w` windows over the image and checking for pixel-by-pixel correspondence with the template. This, however, is very computationally expensive, but it can be accelerated.
```
im = np.atleast_3d(im)
H, W, D = im.shape[:3]
h, w = tpl.shape[:2]
```
By using the smart [integral images](http://en.wikipedia.org/wiki/Summed_area_table) one can calculate really fast the sum inside of a `h x w` window starting at every pixel. An integral image is a summed area table (cumulative summed array), that can be calculated with numpy really fast as:
```
sat = im.cumsum(1).cumsum(0)
```
and it has really nice properties, such as the calculation of the sum of all the values within a window with only 4 arithmetic operations:

Thus, by calculating the sum of the template and matching it with the sum of `h x w` windows over the integral image, it is easy to find a list of "possible windows" where sum of inside values is the same as the sum of the values in the template (a quick approximation).
```
iA, iB, iC, iD = sat[:-h, :-w], sat[:-h, w:], sat[h:, :-w], sat[h:, w:]
lookup = iD - iB - iC + iA
```
The above is a numpy vectorization of the operation shown in the image, applied to all the possible `h x w` rectangles over the image (thus, really quick).
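To convince yourself of the four-corner identity on a small random array (an illustrative check, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
im = rng.integers(0, 255, size=(6, 8))
sat = im.cumsum(1).cumsum(0)   # summed area table (integral image)

h, w = 3, 3
y, x = 2, 3                    # point A: one row/col before the window
# Direct window sum vs. the 4-operation lookup on the integral image.
direct = im[y + 1:y + 1 + h, x + 1:x + 1 + w].sum()
via_sat = sat[y + h, x + w] - sat[y, x + w] - sat[y + h, x] + sat[y, x]
assert direct == via_sat
```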
This greatly reduces the number of possible windows (to 2 in one of my tests). The last step is to check for exact matches against the template:
```
posible_match = np.where(np.logical_and(*[lookup[..., i] == tplsum[i] for i in range(D)]))
for y, x in zip(*posible_match):
if np.all(im[y+1:y+h+1, x+1:x+w+1] == tpl):
return (y+1, x+1)
```
Note that here the `y` and `x` coordinates correspond to point A in the image, which is the row and column just before the template window.
Putting all together:
```
def find_image(im, tpl):
im = np.atleast_3d(im)
tpl = np.atleast_3d(tpl)
H, W, D = im.shape[:3]
h, w = tpl.shape[:2]
# Integral image and template sum per channel
sat = im.cumsum(1).cumsum(0)
tplsum = np.array([tpl[:, :, i].sum() for i in range(D)])
# Calculate lookup table for all the possible windows
iA, iB, iC, iD = sat[:-h, :-w], sat[:-h, w:], sat[h:, :-w], sat[h:, w:]
lookup = iD - iB - iC + iA
# Possible matches
possible_match = np.where(np.logical_and(*[lookup[..., i] == tplsum[i] for i in range(D)]))
# Find exact match
for y, x in zip(*possible_match):
if np.all(im[y+1:y+h+1, x+1:x+w+1] == tpl):
return (y+1, x+1)
raise Exception("Image not found")
```
It works with both grayscale and color images and runs in `7ms` for a `303x384` color image with a `50x50` template.
A practical example:
```
>>> from skimage import data
>>> im = gray2rgb(data.coins())
>>> tpl = im[170:220, 75:130].copy()
>>> y, x = find_image(im, tpl)
>>> y, x
(170, 75)
```
And to illustrate the result:

Left original image, right the template. And here the exact match:
```
>>> fig, ax = plt.subplots()
>>> imshow(im)
>>> rect = Rectangle((x, y), tpl.shape[1], tpl.shape[0], edgecolor='r', facecolor='none')
>>> ax.add_patch(rect)
```

And last, just an example of the `possible_matches` for the test:

The sum over the two windows in the image is the same, but the last step of the function filters the one that doesn't exactly match the template. |
Does the order of addition expressions matter in Python? | 29,665,382 | 4 | 2015-04-16T04:32:59Z | 29,665,413 | 8 | 2015-04-16T04:36:34Z | [
"python",
"methods"
] | This sounds kind of stupid but I'm not talking about `1 + 2 = 2 + 1`. I am talking about where an object with an `__add__` method is added to a number. An example will be:
```
>>> class num:
... def __add__(self,x):
... return 1+x
...
>>> n = num()
>>> 1+n
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'instance'
>>> n+1
2
>>>
```
I don't understand why the first one returns an error and the second one works like normal | Addition isn't assumed to be commutative - for example, `[1] + [2] != [2] + [1]` - so there's a separate method you need to implement when your object is on the right side of a `+` and the thing on the left doesn't know how to handle it.
```
def __radd__(self, other):
# Called for other + self when other can't handle it or self's
# type subclasses other's type.
```
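With `__radd__` defined, the failing example from the question works from both sides:

```python
class Num(object):
    def __add__(self, other):
        return 1 + other

    def __radd__(self, other):  # called for other + self
        return other + 1


n = Num()
print(n + 1)  # 2
print(1 + n)  # 2
```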
Similar methods exist for all the other binary operations, all named by sticking an `r` in the same place. |
Django error: relation "users_user" does not exist | 29,672,190 | 3 | 2015-04-16T10:36:33Z | 29,672,770 | 7 | 2015-04-16T11:03:07Z | [
"python",
"django"
] | I'm getting the following error during migration:
> django.db.utils.ProgrammingError: relation "users\_user" does not exist
```
File "/Users/user/Documents/workspace/api/env/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/Users/user/Documents/workspace/api/env/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/Users/user/Documents/workspace/api/env/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Users/user/Documents/workspace/api/env/lib/python2.7/site-packages/django/db/backends/utils.py", line 62, in execute
return self.cursor.execute(sql)
```
This is my model:
```
from django.db import models
from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin
from ..managers.user import UserManager
class User(AbstractBaseUser, PermissionsMixin):
# Email identifier, primary key, unique identifier for the user.
email = models.EmailField(verbose_name='email address', max_length=254, unique=True, db_index=True)
is_admin = models.BooleanField(default=False)
is_active = models.BooleanField(default=False)
objects = UserManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = []
class Meta:
verbose_name = 'User'
app_label = "users"
def __unicode__(self):
return self.email
@property
def get_full_name(self):
return self.email
@property
def get_short_name(self):
return self.email
def has_module_perms(self, app_label):
"""
Does the user have permissions to view the app `app_label`
"""
# Simplest possible answer: Yes, always
return True
@property
def is_staff(self):
# Simplest possible answer: All admins are staff
return self.is_admin
```
Settings:
```
AUTH_USER_MODEL = 'users.User'
```
Anything I have missed? | Inside your users app, you should have a folder `migrations`. It should only contain `0001_initial.py` and `__init__.py`. Is that correct?
Try running `./manage.py sqlmigrate users 0001_initial` and see what it does, because that's where the error comes from |
Histogram in matplotlib, time on x-Axis | 29,672,375 | 2 | 2015-04-16T10:45:28Z | 29,679,443 | 9 | 2015-04-16T15:32:52Z | [
"python",
"time",
"matplotlib",
"plot",
"histogram"
] | I am new to matplotlib (1.3.1-2) and I cannot find a decent place to start.
I want to plot the distribution of points over time in a histogram with matplotlib.
Basically I want to plot the cumulative sum of the occurrence of a date.
```
date
2011-12-13
2011-12-13
2013-11-01
2013-11-01
2013-06-04
2013-06-04
2014-01-01
...
```
That would make
```
2011-12-13 -> 2 times
2013-11-01 -> 3 times
2013-06-04 -> 2 times
2014-01-01 -> once
```
Since there will be many points over many years, I want to set the `start date` on my `x-Axis` and the `end date`, and then mark `n-time steps`(i.e. 1 year steps) and finally decide how many `bins` there will be.
How would I achieve that? | Matplotlib uses its own format for dates/times, but provides simple conversion functions in the `dates` module. It also provides various `Locators` and `Formatters` that take care of placing the ticks on the axis and formatting the corresponding labels. This should get you started:
```
import random
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
# generate some random data (approximately over 5 years)
data = [float(random.randint(1271517521, 1429197513)) for _ in range(1000)]
# convert the epoch format to matplotlib date format
mpl_data = mdates.epoch2num(data)
# plot it
fig, ax = plt.subplots(1,1)
ax.hist(mpl_data, bins=50, color='lightblue')
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%y'))
plt.show()
```
Result:
 |
Unlike Numpy, Pandas doesn't seem to like memory strides | 29,673,396 | 6 | 2015-04-16T11:31:55Z | 29,674,364 | 10 | 2015-04-16T12:11:55Z | [
"python",
"numpy",
"pandas"
] | Pandas seems to be missing a R-style matrix-level rolling window function (`rollapply(..., by.column = FALSE)`), providing only the vector based version. Thus I tried to follow [this question](http://stackoverflow.com/questions/26371509/n-dimensional-sliding-window-with-pandas-or-numpy) and it works beautifully with the example which can be replicated, but it doesn't work with pandas `DataFrame`s even when using the (seemingly identical) underlying Numpy array.
Artificial problem replication:
```
import numpy as np
import pandas as pd
from numpy.lib.stride_tricks import as_strided
test = [[x * y for x in range(1, 10)] for y in [10**z for z in range(5)]]
mm = np.array(test, dtype = np.int64)
pp = pd.DataFrame(test).values
```
`mm` and `pp` look identical:

The numpy directly-derived matrix gives me what I want perfectly:
```
as_strided(mm, (mm.shape[0] - 3 + 1, 3, mm.shape[1]), (mm.shape[1] * 8, mm.shape[1] * 8, 8))
```
That is, it gives me 3 strides of 3 rows each, in a 3d matrix, allowing me to perform computations on a submatrix moving down by one row at a time.

But the pandas-derived version (identical call with `mm` replaced by `pp`):
```
as_strided(pp, (pp.shape[0] - 3 + 1, 3, pp.shape[1]), (pp.shape[1] * 8, pp.shape[1] * 8, 8))
```

is all weird like it's transposed somehow. Is this to do with column/row major order stuff?
I need to do matrix sliding windows in Pandas, and this seems my best shot, especially because it is really fast. What's going on here? How do I get the underlying Pandas array to behave like Numpy? | It seems that `.values` returns the underlying data in Fortran order (as you speculated):
```
>>> mm.flags # NumPy array
C_CONTIGUOUS : True
F_CONTIGUOUS : False
...
>>> pp.flags # array from DataFrame
C_CONTIGUOUS : False
F_CONTIGUOUS : True
...
```
This confuses `as_strided` which expects the data to be arranged in C order in memory.
To fix things, you could copy the data in C order and use the same strides as in your question:
```
pp = pp.copy('C')
```
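A small sketch (with a toy array, not the OP's data) showing that copying in C order flips the flags that confused `as_strided`:

```python
import numpy as np

# Build a Fortran-ordered array, mimicking what .values returned.
pp = np.asfortranarray(np.arange(12).reshape(3, 4))
print(pp.flags['C_CONTIGUOUS'], pp.flags['F_CONTIGUOUS'])  # False True
cc = pp.copy('C')
print(cc.flags['C_CONTIGUOUS'], cc.flags['F_CONTIGUOUS'])  # True False
```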
Alternatively, if you want to avoid copying large amounts of data, adjust the strides to acknowledge the column-order layout of the data:
```
as_strided(pp, (pp.shape[0] - 3 + 1, 3, pp.shape[1]), (8, 8, pp.shape[0]*8))
``` |
Get the index of the minimium N elements of a list in Python | 29,677,673 | 3 | 2015-04-16T14:24:38Z | 29,677,986 | 7 | 2015-04-16T14:35:41Z | [
"python",
"arrays",
"list"
] | I want to get the index of the minimum N elements of a list. It would be great if I can get that output on another list.
For example:
```
[1, 1, 10, 5, 3, 5]
output = [0, 1]
[10, 5, 12, 5, 0, 10]
output = [4]
[9, 2, 8, 2, 3, 4, 2]
output = [1, 3, 6]
[10, 10, 10, 10, 10, 10]
output = [0, 1, 2, 3, 4, 5]
```
I know `.index` returns the first index for the minimum value in a list, but I don't know how to return all the indexes for the minimum value when it occurs more than once. | ```
>>> L = [9, 2, 8, 2, 3, 4, 2]
>>> minL = min(L)
>>> [i for i, x in enumerate(L) if x == minL]
[1, 3, 6]
```
Currently, the other solutions *will* call `min` during the iteration, resulting in a poor and unnecessary *O(n^2)* complexity.
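For large inputs, a NumPy variant (an addition, not from the original answer) avoids the Python-level loop entirely while staying O(n):

```python
import numpy as np

L = np.array([9, 2, 8, 2, 3, 4, 2])
idx = np.flatnonzero(L == L.min())  # min is computed once
print(idx)  # [1 3 6]
```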
---
Edit for Kasra: evidence of n^2 complexity of the naive solution:
```
>>> L1000 = [randint(0, 100) for _ in xrange(1000)]
>>> L2000 = [randint(0, 100) for _ in xrange(2000)]
>>> L3000 = [randint(0, 100) for _ in xrange(3000)]
>>> L4000 = [randint(0, 100) for _ in xrange(4000)]
>>> L5000 = [randint(0, 100) for _ in xrange(5000)]
>>> timeit [i for i, x in enumerate(L1000) if x == min(L1000)]
10 loops, best of 3: 18.8 ms per loop
>>> timeit [i for i, x in enumerate(L2000) if x == min(L2000)]
10 loops, best of 3: 73.6 ms per loop
>>> timeit [i for i, x in enumerate(L3000) if x == min(L3000)]
1 loops, best of 3: 166 ms per loop
>>> timeit [i for i, x in enumerate(L4000) if x == min(L4000)]
1 loops, best of 3: 294 ms per loop
>>> timeit [i for i, x in enumerate(L5000) if x == min(L5000)]
1 loops, best of 3: 457 ms per loop
``` |
How to use the user_passes_test decorator in class based views? | 29,682,704 | 5 | 2015-04-16T18:06:25Z | 29,683,126 | 12 | 2015-04-16T18:29:14Z | [
"python",
"django",
"python-2.7",
"decorator",
"python-decorators"
] | I am trying to check certain conditions before the user is allowed to see a particular user settings page. I am trying to achieve this using the user\_passes\_test decorator. The function sits in a class based view as follows. I am using method decorator to decorate the get\_initial function in the view.
```
class UserSettingsView(LoginRequiredMixin, FormView):
success_url = '.'
template_name = 'accts/usersettings.html'
def get_form_class(self):
if self.request.user.profile.is_student:
return form1
if self.request.user.profile.is_teacher:
return form2
if self.request.user.profile.is_parent:
return form3
@method_decorator(user_passes_test(test_settings, login_url='/accounts/usertype/'))
def get_initial(self):
if self.request.user.is_authenticated():
user_obj = get_user_model().objects.get(email=self.request.user.email)
if user_obj.profile.is_student:
return { ..........
...... ....
```
Below is the test\_settings function:
```
def test_settings(user):
print "I am in test settings"
if not (user.profile.is_student or user.profile.is_parent or user.profile.is_teacher):
return False
else:
return True
```
I am getting the below error with the decorator.
```
File "../django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "../django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "../braces/views.py", line 107, in dispatch
request, *args, **kwargs)
File "../django/views/generic/base.py", line 87, in dispatch
return handler(request, *args, **kwargs)
File "../django/views/generic/edit.py", line 162, in get
form = self.get_form(form_class)
File "../django/views/generic/edit.py", line 45, in get_form
return form_class(**self.get_form_kwargs())
File "../django/views/generic/edit.py", line 52, in get_form_kwargs
'initial': self.get_initial(),
File "../django/utils/decorators.py", line 29, in _wrapper
return bound_func(*args, **kwargs)
TypeError: _wrapped_view() takes at least 1 argument (0 given)
```
I am not sure how to resolve this error. Am I applying the decorator on the wrong function? Any leads will be helpful. | Django 1.9 has authentication mixins for class based views. You can use the [`UserPassesTest`](https://docs.djangoproject.com/en/1.9/topics/auth/default/#django.contrib.auth.mixins.UserPassesTestMixin) mixin as follows.
```
from django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin
class UserSettingsView(LoginRequiredMixin, UserPassesTestMixin, View):
def test_func(self):
return test_settings(self.request.user)
def get_login_url(self):
if not self.request.user.is_authenticated():
return super(UserSettingsView, self).get_login_url()
else:
return '/accounts/usertype/'
```
Note that in this case you have to override [`get_login_url`](https://docs.djangoproject.com/en/1.9/topics/auth/default/#django.contrib.auth.mixins.AccessMixin.get_login_url), because you want to redirect to a different url depending on whether the user is not logged in, or is logged in but fails the test.
For Django 1.8 and earlier, you should decorate the `dispatch` method, not `get_initial`.
```
@method_decorator(user_passes_test(test_settings, login_url='/accounts/usertype/'))
def dispatch(self, *args, **kwargs):
return super(UserSettingsView, self).dispatch(*args, **kwargs)
``` |
What should I use instead of syncdb in Django 1.9? | 29,683,494 | 43 | 2015-04-16T18:49:49Z | 29,683,785 | 43 | 2015-04-16T19:03:32Z | [
"python",
"django",
"django-1.8"
] | Take a look at this:
```
$ pypy ./manage.py syncdb
/usr/lib64/pypy-2.4.0/site-packages/django/core/management/commands/syncdb.py:24: RemovedInDjango19Warning: The syncdb command will be removed in Django 1.9
warnings.warn("The syncdb command will be removed in Django 1.9", RemovedInDjango19Warning)
(cut)
```
I ran a [quick google search](https://encrypted.google.com/search?q=django%20syncdb%201.9), but could not find the answer - what should I be using instead of `syncdb`? | `syncdb` is deprecated because of [the migration system](https://docs.djangoproject.com/en/1.8/topics/migrations/)1.
Now you can **track** your changes using `makemigrations`. This transforms your model changes into Python code so they can be deployed to other databases.
After you created the migrations you have to **apply** them: `migrate`.
So instead of using `syncdb` you should use `makemigrations` and then `migrate`.
1: imho: the best new feature of django. |
What should I use instead of syncdb in Django 1.9? | 29,683,494 | 43 | 2015-04-16T18:49:49Z | 34,635,951 | 20 | 2016-01-06T14:48:06Z | [
"python",
"django",
"django-1.8"
] | Take a look at this:
```
$ pypy ./manage.py syncdb
/usr/lib64/pypy-2.4.0/site-packages/django/core/management/commands/syncdb.py:24: RemovedInDjango19Warning: The syncdb command will be removed in Django 1.9
warnings.warn("The syncdb command will be removed in Django 1.9", RemovedInDjango19Warning)
(cut)
```
I ran a [quick google search](https://encrypted.google.com/search?q=django%20syncdb%201.9), but could not find the answer - what should I be using instead of `syncdb`? | You should definitely use [migration system](https://docs.djangoproject.com/en/stable/topics/migrations/). Which lets you track changes in your `models.py`, and create migrations for the database. The migration system uses the commands [`makemigrations`](https://docs.djangoproject.com/en/stable/ref/django-admin/#django-admin-makemigrations) to create migrations and [`migrate`](https://docs.djangoproject.com/en/stable/ref/django-admin/#django-admin-migrate) to migrate the database.
If for whatever reason you need to create a database the same way `syncdb` did, there is a command flag that causes `migrate` to work the same way. You should only do this if you **REALLY** need it and you know what you are doing. For example, to create an empty database for a continuous integration system of your choice.
```
python manage.py migrate auth
# performs migrations for auth and contenttypes contrib apps
python manage.py migrate --run-syncdb
# creates the rest of the database
```
Tested on Django 1.9.1. |
Need Help Writing Recursive function that find cheapest route through a list of numbers | 29,684,981 | 4 | 2015-04-16T20:07:59Z | 29,686,460 | 7 | 2015-04-16T21:39:44Z | [
"python",
"recursion"
] | So I've been working on this homework problem for a few hours, I'll do my best to explain it.
I need to write a program in python that takes a list and starts you at the first item in the list. You can either move forward one space or jump over an item and land on the other side of it; each item you land on costs the number at that location. The goal is to get to the end as cheaply as possible.
I wrote this function,
```
def player(cost, board, player_pos):
if player_pos == (len(board)) - 1:
return cost
if player_pos < (len(board)) - 2:
if board[player_pos + 1] > board[player_pos + 2]:
return player(cost + board[player_pos + 2], board, player_pos + 2)
else:
return player(cost + board[player_pos + 1], board, player_pos + 1)
elif player_pos == (len(board)) - 2:
return (cost + board[player_pos] + board[player_pos + 1])
```
but it can't see past the next two positions, so it might make mistakes. A good example is this list: [0,1,2,1000,0]. My program outputs 3 because it picks 1 over 2, then 2 over 1000, then 0. This adds up to 3, but the fastest way is to jump to 2, then to 0.
In the homework it says it might take a long time to run long lists, I'm guessing they want me to try every possible combination of jumps and pick the cheapest one but I don't see how to do that with using recursion.
**EDIT:** This is the improvement I made based on the comments. It works with all the examples my prof gave me except for one; for this list it doesn't return what he said it should: [0, 98, 7, 44, 25, 3, 5, 85, 46, 4]. He says it should return 87, but my adjusted program returns 124. Here's the new code:
```
def player(cost, board, player_pos):
if player_pos == (len(board)) - 1:
return cost
if player_pos < (len(board)) - 2:
if (player(cost + board[player_pos + 2], board, player_pos + 2)) < (player(cost + board[player_pos + 1], board, player_pos + 1)):
return player(cost + board[player_pos + 2], board, player_pos + 2)
else: return player(cost + board[player_pos + 1], board, player_pos + 1)
elif player_pos == (len(board)) - 2:
return (cost + board[player_pos] + board[player_pos + 1])
``` | The following should work:
```
def player(l):
a = b = l[0]
for v in l[1:]:
a, b = b, min(a, b) + v
return b
```
Example:
```
>>> player([0, 98, 7, 44, 25, 3, 5, 85, 46, 4])
87
```
This can be actually considered a [dynamic programming](http://en.wikipedia.org/wiki/Dynamic_programming) algorithm. If `c(i)` denotes the best cost for the subproblem that uses the first `i` entries then:
> c(1) = cost of first element
>
> c(2) = sum of costs of first two elements
For `i > 2` either the best cost is either the best solution reaching the `i - 1`th element and then including the `i`th element or the best solution reaching the `i - 2`th element and then jumping to the `i`th element. So
> c(i) = min(c(i - 1), c(i - 2)) + cost of `i`th element
The above relation explains the short loop in the code, where `a`, `b` are the currently last two best costs.
A recursive version would be like this:
```
def player(l):
return min(player(l[:-1]), player(l[:-2])) + l[-1] if l else 0
```
This recursive program performs an operation on the previous two values of the function, in a similar way to the naive recursive Fibonacci function. It's easy to see that this version also needs exponential time. To avoid that, we should use [memoization](http://en.wikipedia.org/wiki/Memoization), which means caching the results of the intermediate recursive calls:
```
def player(l, cache=None):
n = len(l)
if cache is None:
cache = [-1] * (n + 1)
if cache[n] < 0:
cache[n] = min(player(l[:-1], cache), player(l[:-2], cache)) + l[-1] if l else 0
return cache[n]
``` |
mean, nanmean and warning: Mean of empty slice | 29,688,168 | 15 | 2015-04-17T00:22:06Z | 29,688,390 | 13 | 2015-04-17T00:50:33Z | [
"python",
"numpy"
] | Say I construct two numpy arrays:
```
a = np.array([np.NaN, np.NaN])
b = np.array([np.NaN, np.NaN, 3])
```
Now I find that `np.mean` returns `nan` for both `a` and `b`:
```
>>> np.mean(a)
nan
>>> np.mean(b)
nan
```
Since numpy 1.8, we've been blessed with `nanmean`, which ignores `nan` values:
```
>>> np.nanmean(b)
3.0
```
However, when the array has nothing **but** `nan` values, it raises a warning:
```
>>> np.nanmean(a)
nan
C:\python-3.4.3\lib\site-packages\numpy\lib\nanfunctions.py:598: RuntimeWarning: Mean of empty slice
warnings.warn("Mean of empty slice", RuntimeWarning)
```
I don't like suppressing warnings; is there a better function I can use to get the behaviour of `nanmean` without that warning? | I really can't see any good reason not to just suppress the warning.
The safest way would be to use the [`warnings.catch_warnings`](https://docs.python.org/2/library/warnings.html#temporarily-suppressing-warnings) context manager to suppress the warning only where you anticipate it occurring - that way you won't miss any additional `RuntimeWarnings` that might be unexpectedly raised in some other part of your code:
```
import numpy as np
import warnings
x = np.ones((1000, 1000)) * np.nan
# I expect to see RuntimeWarnings in this block
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=RuntimeWarning)
foo = np.nanmean(x, axis=1)
```
dawg's solution is elegant, but ultimately any additional steps that you have to take in order to avoid computing `np.nanmean` on an array of all NaNs are going to incur some extra overhead that you could avoid by just suppressing the warning. Also your intent will be much more clearly reflected in the code. |
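The scoping behaviour can be demonstrated without NumPy by raising the same `RuntimeWarning` by hand (a sketch; `noisy` here is just a stand-in for `np.nanmean`):

```python
import warnings

def noisy():
    # Stand-in for np.nanmean on an all-NaN slice.
    warnings.warn("Mean of empty slice", RuntimeWarning)
    return float('nan')

# Without the "ignore" filter the warning is recorded...
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    noisy()
print(len(caught))  # 1

# ...inside a suppressing block, nothing escapes.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", category=RuntimeWarning)
    noisy()
print(len(caught))  # 0
```

Once the `with` block exits, the previous warning filters are restored automatically.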
How I do convert from timestamp to date in python? | 29,688,511 | 6 | 2015-04-17T01:06:36Z | 29,688,540 | 8 | 2015-04-17T01:11:30Z | [
"python",
"date",
"datetime"
] | I have this string `'2015-04-08T07:52:00Z'` and I wanna to convert it to `'08/04/2015'`, how can I do this? | You can use the `datetime.datetime.strptime()` function to create a datetime object, then `datetime.datetime.strftime()` to return your correctly formatted date like so:
```
from datetime import datetime
dt = datetime.strptime('2015-04-08T07:52:00Z', '%Y-%m-%dT%H:%M:%SZ')
print dt.strftime('%d/%m/%Y')
``` |
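For Python 3, the same approach works with `print()` as a function (a minimal sketch):

```python
from datetime import datetime

# Parse the ISO-like timestamp, then reformat as day/month/year.
dt = datetime.strptime('2015-04-08T07:52:00Z', '%Y-%m-%dT%H:%M:%SZ')
print(dt.strftime('%d/%m/%Y'))  # 08/04/2015
```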
auth_user error with Django 1.8 and syncdb / migrate | 29,689,365 | 34 | 2015-04-17T02:46:27Z | 29,941,471 | 70 | 2015-04-29T10:47:48Z | [
"python",
"django",
"buildout",
"django-syncdb",
"django-1.8"
] | When upgrading to Django 1.8 (with zc.buildout) and running syncdb or migrate, I get this message:
`django.db.utils.ProgrammingError: relation "auth_user" does not exist`
One of my models contains django.contrib.auth.models.User:
```
user = models.ForeignKey(
User, related_name='%(app_label)s_%(class)s_user',
blank=True, null=True, editable=False
)
```
Downgrading to Django 1.7 removes the error. Do I have to include the User object differently in Django 1.8? | I fix this by running auth first, then the rest of my migrations:
```
python manage.py migrate auth
python manage.py migrate
``` |
auth_user error with Django 1.8 and syncdb / migrate | 29,689,365 | 34 | 2015-04-17T02:46:27Z | 30,031,219 | 14 | 2015-05-04T13:19:33Z | [
"python",
"django",
"buildout",
"django-syncdb",
"django-1.8"
] | When upgrading to Django 1.8 (with zc.buildout) and running syncdb or migrate, I get this message:
`django.db.utils.ProgrammingError: relation "auth_user" does not exist`
One of my models contains django.contrib.auth.models.User:
```
user = models.ForeignKey(
User, related_name='%(app_label)s_%(class)s_user',
blank=True, null=True, editable=False
)
```
Downgrading to Django 1.7 removes the error. Do I have to include the User object differently in Django 1.8? | In my environment, I fixed this by running `makemigrations` on all apps that have a relationship with `django.contrib.auth.models`:
```
manage.py makemigrations app_with_user_relation
manage.py migrate
``` |
auth_user error with Django 1.8 and syncdb / migrate | 29,689,365 | 34 | 2015-04-17T02:46:27Z | 32,743,611 | 11 | 2015-09-23T15:25:45Z | [
"python",
"django",
"buildout",
"django-syncdb",
"django-1.8"
] | When upgrading to Django 1.8 (with zc.buildout) and running syncdb or migrate, I get this message:
`django.db.utils.ProgrammingError: relation "auth_user" does not exist`
One of my models contains django.contrib.auth.models.User:
```
user = models.ForeignKey(
User, related_name='%(app_label)s_%(class)s_user',
blank=True, null=True, editable=False
)
```
Downgrading to Django 1.7 removes the error. Do I have to include the User object differently in Django 1.8? | I also had the same issue. I solved it by running these commands:
```
python manage.py migrate auth
python manage.py migrate
```
Then migrate does its job |
Python Mogo ImportError: cannot import name Connection | 29,690,786 | 3 | 2015-04-17T05:06:44Z | 32,218,936 | 9 | 2015-08-26T05:54:11Z | [
"python",
"mongodb"
] | Can't figure out why this is not working.
`mogo==0.2.4`
```
File "/Users/Sam/Envs/AdiosScraper/lib/python2.7/site-packages/mogo/connection.py", line 3, in <module>
from pymongo import Connection as PyConnection
ImportError: cannot import name Connection
``` | I had the same problem, and too many files had the import, so I couldn't risk changing the `import` (I didn't know exactly everywhere it was mentioned).
I just downgraded `pymongo`:
```
sudo pip install pymongo==2.7.2
```
and it worked! |
Making an object x such that "x in [x]" returns False | 29,692,140 | 25 | 2015-04-17T06:46:15Z | 29,692,536 | 11 | 2015-04-17T07:09:45Z | [
"python",
"python-internals"
] | If we make a pathological potato like this:
```
>>> class Potato:
... def __eq__(self, other):
... return False
... def __hash__(self):
... return random.randint(1, 10000)
...
>>> p = Potato()
>>> p == p
False
```
We can break sets and dicts this way (*note:* it's the same even if `__eq__` returns `True`, it's mucking with the hash that broke them):
```
>>> p in {p}
False
>>> p in {p: 0}
False
```
Also `len({p: 0, p: 0}) == 2`, and `{p: 0}[p]` raises KeyError, basically all mapping related stuff goes out the window, as expected.
But what I didn't expect is that we *can't* break lists
```
>>> p in [p]
True
```
Why is that? It seems that `list.__contains__` iterates, but it's first [checking identity](https://docs.python.org/3/c-api/object.html#c.PyObject_RichCompareBool) before checking equality. Since it is not the case that identity implies equality (see for example NaN object), what is the reason for lists short-circuiting on identity comparisons? | `list`, `tuple`, etc., does indeed do an identity check before an equality check, and this behavior is motivated by [these invariants](http://bugs.python.org/issue4296#msg75735):
```
assert a in [a]
assert a in (a,)
assert [a].count(a) == 1
for a in container:
assert a in container # this should ALWAYS be true
```
Unfortunately, `dict`s, `set`s, and friends operate by hashes, so if you mess with those you can indeed effectively break them.
See [this issue](http://bugs.python.org/issue4296) and [this issue](http://bugs.python.org/issue11945) for some history. |
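The identity shortcut can be checked directly with NaN, the standard example of an object that is not equal to itself:

```python
nan = float('nan')

assert nan != nan              # equality fails for NaN...
assert nan in [nan]            # ...but containment still holds, via the identity check
assert [nan].count(nan) == 1   # count() takes the same shortcut
print("all invariants hold")
```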
Making an object x such that "x in [x]" returns False | 29,692,140 | 25 | 2015-04-17T06:46:15Z | 29,692,544 | 8 | 2015-04-17T07:09:56Z | [
"python",
"python-internals"
] | If we make a pathological potato like this:
```
>>> class Potato:
... def __eq__(self, other):
... return False
... def __hash__(self):
... return random.randint(1, 10000)
...
>>> p = Potato()
>>> p == p
False
```
We can break sets and dicts this way (*note:* it's the same even if `__eq__` returns `True`, it's mucking with the hash that broke them):
```
>>> p in {p}
False
>>> p in {p: 0}
False
```
Also `len({p: 0, p: 0}) == 2`, and `{p: 0}[p]` raises KeyError, basically all mapping related stuff goes out the window, as expected.
But what I didn't expect is that we *can't* break lists
```
>>> p in [p]
True
```
Why is that? It seems that `list.__contains__` iterates, but it's first [checking identity](https://docs.python.org/3/c-api/object.html#c.PyObject_RichCompareBool) before checking equality. Since it is not the case that identity implies equality (see for example NaN object), what is the reason for lists short-circuiting on identity comparisons? | In general, breaking the assumption that identity implies equality can break a variety of things in Python. It is true that NaN breaks this assumption, and thus NaN breaks some things in Python. Discussion can be found in [this Python bug](http://bugs.python.org/issue4296). In a pre-release version of Python 3.0, reliance on this assumption was removed, but the resolution of the bug was to put it back in (i.e., make Python 3 give the same behavior as Python 2, in which the identity check shortcut is done). The [documentation](https://docs.python.org/3/reference/expressions.html#comparisons) for Python 3 correctly says:
> For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression `x in y` is equivalent to `any(x is e or x == e for e in y)`.
However, it appears the documentation for Python 2 is incorrect, since it says:
> For the list and tuple types, x in y is true if and only if there exists an index i such that x == y[i] is true.
You could raise a documentation bug about this if you want, although it is a pretty esoteric issue so I doubt it will be high on anyone's priority list. |
Find the second closest index to value | 29,696,644 | 3 | 2015-04-17T10:25:38Z | 29,696,782 | 7 | 2015-04-17T10:31:46Z | [
"python",
"numpy"
] | I am using
```
index = (np.abs(array - value)).argmin()
```
to find the index in an array with the smallest absolute difference to a value.
However, is there a nice clean way such as this for finding the *second* closest index to the value? | I think this works
```
a = np.linspace(0,10,30)
array([ 0. , 0.34482759, 0.68965517, 1.03448276,
1.37931034, 1.72413793, 2.06896552, 2.4137931 ,
2.75862069, 3.10344828, 3.44827586, 3.79310345,
4.13793103, 4.48275862, 4.82758621, 5.17241379,
5.51724138, 5.86206897, 6.20689655, 6.55172414,
6.89655172, 7.24137931, 7.5862069 , 7.93103448,
8.27586207, 8.62068966, 8.96551724, 9.31034483,
9.65517241, 10. ])
n = np.pi
a[np.argsort(np.abs(a-n))[1]]
# Output 3.4482758620689657
# the closest value is 3.103...
``` |
Get the closest datetime from a list | 29,700,214 | 4 | 2015-04-17T13:12:11Z | 29,700,303 | 8 | 2015-04-17T13:15:57Z | [
"python",
"datetime"
] | in Python, if I have a `datetime` and a list of `datetime`s, e.g.:
```
import datetime as dt
date = dt.datetime(1970, 1,1)
dates = [dt.datetime(1970, 1, 2), dt.datetime(1970, 1,3)]
```
How I can get the `datetime` in the list that's closest to `date`? | You can use [`min`](https://docs.python.org/2/library/functions.html#min) with a custom `key` parameter:
```
>>> import datetime as dt
>>> date = dt.datetime(1970, 1, 1)
>>> dates = [dt.datetime(1970, 1, 2), dt.datetime(1970, 1, 3)]
>>> min(dates, key=lambda d: abs(d - date))
datetime.datetime(1970, 1, 2, 0, 0)
```
Subtracting two [`datetime` objects](https://docs.python.org/2/library/datetime.html#datetime-objects) gives a [`timedelta` object](https://docs.python.org/2/library/datetime.html#timedelta-objects):
```
>>> map(lambda d: abs(d - date), dates)
[datetime.timedelta(1), datetime.timedelta(2)]
```
which behaves as you'd expect in comparisons. |
Is there a max length to a python conditional (if) statement? | 29,700,588 | 5 | 2015-04-17T13:27:46Z | 29,700,778 | 9 | 2015-04-17T13:35:41Z | [
"python",
"python-2.7",
"conditional",
"eval",
"conditional-statements"
] | I generate a conditional statement using python's (2.7) `eval()` function like so:
```
my_list = ['2 > 1','3 > 2','4 > 3']
if eval('(' + ') or ('.join(my_list) + ')'):
print 'yes'
else:
print 'no'
```
In my case, the list is generated by code, my\_list comes from a parameter file, and the list is joined with 'or' statements in the conditional expression. The code above prints 'yes'.
It works fine for small lists, but past a certain number of characters in the `eval()` statement I get a string error.
Some searching finds these threads that point to a bug:
* [Why is there a length limit to python's eval?](http://stackoverflow.com/q/11635211/2327328)
* [segmentation fault in pi calculation (python)](http://stackoverflow.com/q/5177432/2327328)
But their max eval() size is much larger than what I found. In my case, I find between 1744 and 1803 characters does the issue begin. I tried this code and it does crash between the two statements
```
>>> eval("1.0*"*10000+"1.0")
1.0
>>> eval("1.0*"*100000+"1.0")
# segfault here
```
So, that brings me back to think that it is not eval(), but actually some max on the `if` statement.
**What's another way to conditionally apply the rules in the list that doesn't involve long strings and the eval() function?**
Interestingly, I made my\_list much bigger:
```
my_list = ['2 > 1']*1000000
```
and the code works fine... | Perhaps I'm missing something but it would seem that:
```
any(map(eval, my_list))
```
Does exactly what you'd like.
```
from itertools import imap
any(imap(eval, my_list)) # Python 2.
```
This has the nice effect of **not** evaluating the rest of the list if the first element evals to `True` (also known as "short-circuit"). This may or may not be what you are after.
Example:
```
> any(map(eval, ['2 > 1','3 > 2','4 > 3']))
True
``` |
How to get matplotlib figure size | 29,702,424 | 5 | 2015-04-17T14:45:45Z | 29,702,596 | 7 | 2015-04-17T14:52:15Z | [
"python",
"matplotlib",
"size",
"pixels",
"figure"
] | For a project, I need to know the current size (in pixels) of my matplotlib figure, but I can't find how to do this.
Does anyone know how to do this ?
Thanks,
Tristan | ```
import matplotlib.pyplot as plt
fig = plt.figure()
size = fig.get_size_inches()*fig.dpi # size in pixels
```
To do it for the current figure,
```
fig = plt.gcf()
size = fig.get_size_inches()*fig.dpi # size in pixels
```
You can get the same info by doing:
```
bbox = fig.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
width, height = bbox.width*fig.dpi, bbox.height*fig.dpi
``` |
Asyncio event loop per python process (aioprocessing, multiple event loops) | 29,703,620 | 6 | 2015-04-17T15:36:40Z | 29,705,642 | 9 | 2015-04-17T17:19:31Z | [
"python",
"python-asyncio"
] | I have two processes; a main process and a subprocess. The main process is running an `asyncio` event loop, and starts the subprocess. I want to start another asyncio event loop in the subprocess. I'm using the `aioprocessing` module to launch the subprocess.
The subprocess function is:
```
def subprocess_code():
loop = asyncio.get_event_loop()
@asyncio.coroutine
def f():
for i in range(10):
print(i)
yield from asyncio.sleep(1)
loop.run_until_complete(f())
```
But I get an error:
```
loop.run_until_complete(f())
File "/usr/lib/python3.4/asyncio/base_events.py", line 271, in run_until_complete
self.run_forever()
File "/usr/lib/python3.4/asyncio/base_events.py", line 239, in run_forever
raise RuntimeError('Event loop is running.')
RuntimeError: Event loop is running.
```
Is it possible to start a new, or restart the existing, `asyncio` event loop in the subprocess? If so, how? | Sorry for the disturbance!
I found a solution!
```
policy = asyncio.get_event_loop_policy()
policy.set_event_loop(policy.new_event_loop())
loop = asyncio.get_event_loop()
```
Put this code at the start of the subprocess to create a new asyncio event loop inside a subprocess that was started from a process with a running asyncio event loop. |
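A self-contained sketch of the fix, using modern `async def` syntax in place of the Python 3.4-era `@asyncio.coroutine` / `yield from` (the function and return value here are illustrative):

```python
import asyncio

def subprocess_code():
    # Install a fresh event loop for this process; in the real scenario
    # this would run inside the child process started via aioprocessing.
    policy = asyncio.get_event_loop_policy()
    loop = policy.new_event_loop()
    policy.set_event_loop(loop)

    async def f():
        await asyncio.sleep(0)
        return 42

    try:
        return loop.run_until_complete(f())
    finally:
        loop.close()

print(subprocess_code())  # 42
```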
Getting TemplateDoesNotExist from Django 1.8 | 29,704,686 | 4 | 2015-04-17T16:26:57Z | 29,705,120 | 9 | 2015-04-17T16:51:23Z | [
"python",
"django"
] | \*\* I'm using Django 1.8. The templates feature has changed in this release of Django. Read more here [Upgrading templates to Django 1.8](https://docs.djangoproject.com/en/1.8/ref/templates/upgrading/ "Upgrading templates to Django 1.8")\*\*
This is bothering me because I've come across this issue and fixed it for one of my other projects, but I can't for the life of me figure out how to fix it this time around. I've gone through countless stackoverflow questions and tried to resolve the issue using the answers provided, but I've had no luck. This is the error message I am getting:
```
Exception Type: TemplateDoesNotExist
Exception Value:
index.html
Exception Location: /Library/Python/2.7/site-packages/django/template/loader.py in get_template, line 46
Python Executable: /usr/bin/python
Python Version: 2.7.6
Python Path:
['/Users/User1/Documents/PyCharmProjects/Project1',
```
It seems that is it looking in the wrong folder, it should be looking under Project1/templates according to my settings.py file:
```
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
TEMPLATE_PATH = os.path.join(BASE_DIR, '/templates/')
TEMPLATE_DIRS = (
TEMPLATE_PATH,
)
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
#'django.template.loaders.eggs.load_template_source',
)
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'Project1.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
```
My templates folder is in the root folder of my project. What's the issue here? I have given it a TEMPLATE\_DIRS parameter, and used a proper BASE\_DIR, which is what the majority of the answers recommend. | remove the slashes: `TEMPLATE_PATH = os.path.join(BASE_DIR, 'templates')`
See [here](http://stackoverflow.com/questions/4562252/django-how-to-deal-with-the-paths-in-settings-py-on-collaborative-projects)
Things have changed with Django 1.8, in which the template system has been improved. See the [release notes](https://docs.djangoproject.com/en/1.8/ref/templates/upgrading/).
In your settings.py add:
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [BASE_DIR+"/templates", ],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
```
the code above comes straight from one of my projects.
Feel free to use `os.path.join(BASE_DIR, 'templates')` instead of concatenating the strings. |
Is there a key that will always come last when a dictionary is sorted? | 29,704,997 | 3 | 2015-04-17T16:44:28Z | 29,705,116 | 9 | 2015-04-17T16:51:14Z | [
"python",
"dictionary"
] | I have a dictionary with many keys and I would like to add a dummy key which should always come last when the dictionary is sorted. And the sort is case insensitive. I was thinking of the using the last word in the dictionary `'zyzzyva'`. Would that work? And what if my keys are directory paths, where they can have /, ., etc... | You can create an ad-hoc object that is always the last when sorted:
```
import functools
@functools.total_ordering
class Last(object):
def __eq__(self, other):
return False
def __lt__(self, other):
return False
```
Here's an usage example:
```
>>> sorted([Last(), 'c', 'a', 'b'])
['a', 'b', 'c', <__main__.Last object at 0x7f8db518d2e8>]
```
---
If you need something that is a string, then consider using `'\xff'`. If all of your keys are alphanumeric (in the sense that they are composed by the letters A-Z and the digits 0-9), then there are no chances of items greater than `'\xff'` to appear.
```
>>> sorted(['\xff', 'c', 'a', 'b'])
['a', 'b', 'c', 'ÿ']
```
Beware, however, that in some encodings (including UTF-8) the `'\xff'` character is used.
If you are using Unicode strings (or are using Python 3), then `u'\uffff'` may be a good alternative to `'\xff'`. |
My answer is changing with the same code | 29,707,906 | 20 | 2015-04-17T19:31:27Z | 29,707,948 | 30 | 2015-04-17T19:34:34Z | [
"python",
"python-3.x"
] | I am a complete python beginner and I am trying to solve this problem :
> A number is called triangular if it is the sum of the first n positive
> integers for some n For example, 10 is triangular because 10 = 1+2+3+4
> and 21 is triangular because 21 = 1+2+3+4+5+6. Write a Python program
> to find the smallest 6-digit triangular number. Enter it as your
> answer below.
I have written this program:
```
n = 0
trinum = 0
while len(str(trinum)) < 6:
trinum = n*(n+1)/2
n += 1
print(trinum)
```
And it only works in the python I have installed on my computer if I say `while len(str(trinum)) < 8:` but it is supposed to be `while len(str(trinum)) < 6:`. So I went to <http://www.skulpt.org/> and ran my code there and it gave me the right answer with `while len(str(trinum)) < 6:` *like it's supposed to*. But it doesn't work with 6 with the python i have installed on my computer. Does anyone have any idea what's going on? | ### Short Answer
In Python 3, division is always floating point division. So on the first pass you get something like `str(trinum) == '0.5'`. Which isn't what you want.
You're looking for integer division. The operator for that is `//`.
### Long Answer
The division operator changed in Python 2.x to 3.x. Previously, the type of the result was dependent on the arguments. So `1/2` does integer division, but `1./2` does floating point division.
To clean this up, a new operator was introduced: `//`. This operator will always do integer division.
So in Python 3.x, this expression `(4 * 5)/2` is equal to `10.0`. Note that this number is less than 100, but it has 4 characters in it.
If instead, we did `(4*5)//2`, we would get the integer `10` back. Which would allow your condition to hold true. |
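As a sketch, here is the question's loop with `//` substituted; with integer division, `len(str(trinum))` counts digits rather than the characters of a float like `"10.0"`:

```python
n = 0
trinum = 0
while len(str(trinum)) < 6:
    trinum = n * (n + 1) // 2   # floor division keeps trinum an int
    n += 1
print(trinum)  # 100128, the smallest 6-digit triangular number (447 * 448 // 2)
```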
My answer is changing with the same code | 29,707,906 | 20 | 2015-04-17T19:31:27Z | 29,707,949 | 9 | 2015-04-17T19:34:36Z | [
"python",
"python-3.x"
] | I am a complete python beginner and I am trying to solve this problem :
> A number is called triangular if it is the sum of the first n positive
> integers for some n For example, 10 is triangular because 10 = 1+2+3+4
> and 21 is triangular because 21 = 1+2+3+4+5+6. Write a Python program
> to find the smallest 6-digit triangular number. Enter it as your
> answer below.
I have written this program:
```
n = 0
trinum = 0
while len(str(trinum)) < 6:
trinum = n*(n+1)/2
n += 1
print(trinum)
```
And it only works in the python I have installed on my computer if I say `while len(str(trinum)) < 8:` but it is supposed to be `while len(str(trinum)) < 6:`. So I went to <http://www.skulpt.org/> and ran my code there and it gave me the right answer with `while len(str(trinum)) < 6:` *like it's supposed to*. But it doesn't work with 6 with the python i have installed on my computer. Does anyone have any idea what's going on? | In Python 2, the `/` operator performs integer division when possible: "x divided by y is a remainder b," throwing away the "b" (use the `%` operator to find "b"). In Python 3, the `/` operator always performs float division: "x divided by y is a.fgh." Get integer division in Python 3 with the `//` operator. |
My answer is changing with the same code | 29,707,906 | 20 | 2015-04-17T19:31:27Z | 29,708,052 | 8 | 2015-04-17T19:41:35Z | [
"python",
"python-3.x"
] | I am a complete python beginner and I am trying to solve this problem :
> A number is called triangular if it is the sum of the first n positive
> integers for some n For example, 10 is triangular because 10 = 1+2+3+4
> and 21 is triangular because 21 = 1+2+3+4+5+6. Write a Python program
> to find the smallest 6-digit triangular number. Enter it as your
> answer below.
I have written this program:
```
n = 0
trinum = 0
while len(str(trinum)) < 6:
trinum = n*(n+1)/2
n += 1
print(trinum)
```
And it only works in the python I have installed on my computer if I say `while len(str(trinum)) < 8:` but it is supposed to be `while len(str(trinum)) < 6:`. So I went to <http://www.skulpt.org/> and ran my code there and it gave me the right answer with `while len(str(trinum)) < 6:` *like it's supposed to*. But it doesn't work with 6 with the python i have installed on my computer. Does anyone have any idea what's going on? | You have two problems here, that combine to give you the wrong answer.
The first problem is that you're using `/`, which means integer division in Python 2 (and the almost-Python language that Skulpt implements), but float division in Python 3. So, when you run it on your local machine with Python 3, you're going to get floating point numbers.
The second problem is that you're not checking for "under 6 digits", you're checking for "under 6 characters long". For positive integers, those are the same thing, but for floats, say, `1035.0` is only 4 digits, but it's 6 characters as a string. So you exit early.
If you solve either problem, it will work, at least most of the time. But you really should solve both.
So:
```
n = 0
trinum = 0
while trinum < 10**6: # note comparing numbers, not string length
trinum = n*(n+1)//2 # note // instead of /
n += 1
print(trinum)
```
The first problem is fixed by using `//`, which always means integer division, instead of `/`, which means different things in different Python versions.
The second problem is fixed by comparing the number as a number to `10**6` (that is, 10 to the 6th power, which means 1 with 6 zeros, or 1000000) instead of comparing its length as a string to 6. |
What happens when converting between tuple/list? | 29,709,385 | 2 | 2015-04-17T21:09:22Z | 29,709,423 | 7 | 2015-04-17T21:12:46Z | [
"python",
"list",
"python-3.x",
"tuples",
"python-internals"
] | How does python internally make the conversion when converting a tuple to a list or the other way around.
Does it "switch a flag" (now you're immutable, now you're not!) or does it iterate through the items and convert them? | Tuples and lists are entirely separate types; so when converting a list to a tuple or vice versa a *new* object is created and element references are copied across.
Python *does* optimise this by reaching into the internal structure of the other object; for example, `list(tupleobj)` is essentially the same thing as `list().extend(tupleobj)`, where the [`listextend` function](https://hg.python.org/cpython/file/2d8e4047c270/Objects/listobject.c#l781) then uses the Python C API function to simply copy across the references from the C array of the tuple:
```
if (PyList_CheckExact(b) || PyTuple_CheckExact(b) || (PyObject *)self == b) {
PyObject **src, **dest;
b = PySequence_Fast(b, "argument must be iterable");
if (!b)
return NULL;
n = PySequence_Fast_GET_SIZE(b);
if (n == 0) {
/* short circuit when b is empty */
Py_DECREF(b);
Py_RETURN_NONE;
}
m = Py_SIZE(self);
if (list_resize(self, m + n) == -1) {
Py_DECREF(b);
return NULL;
}
/* note that we may still have self == b here for the
* situation a.extend(a), but the following code works
* in that case too. Just make sure to resize self
* before calling PySequence_Fast_ITEMS.
*/
/* populate the end of self with b's items */
src = PySequence_Fast_ITEMS(b);
dest = self->ob_item + m;
for (i = 0; i < n; i++) {
PyObject *o = src[i];
Py_INCREF(o);
dest[i] = o;
}
Py_DECREF(b);
Py_RETURN_NONE;
}
```
`PySequence_Fast_ITEMS` is a macro to access the `ob_item` array in the C struct for the tuple, and the `for` loop copies the items from that array straight over to the `self->ob_item` array (starting at offset `m`). |
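The same reference-copying can be observed from pure Python: the new container holds the very same element objects (a quick illustrative sketch):

```python
t = ([1, 2], "abc")
lst = list(t)            # a new list object is created...
assert lst[0] is t[0]    # ...but element references are copied, not the elements
lst[0].append(3)         # mutating the shared element is visible through the tuple too
print(t[0])              # [1, 2, 3]
```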
Python Force List Index out of Range Exception | 29,710,249 | 4 | 2015-04-17T22:09:42Z | 29,710,320 | 11 | 2015-04-17T22:14:54Z | [
"python",
"list"
] | I have a list of lists
```
x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
I want the code to throw an Array Out of Bounds Exception similar to how it does in Java when the index is out of range. For example,
```
x[0][0] # 1
x[0][1] # 2
x[0-1][0-1] # <--- this returns 9 but I want it to throw an exception
x[0-1][1] # <--- this returns 8 but again I want it to throw an exception
x[0][3] # this throws an index out of range exception, as it should
```
If an exception is thrown, I want it to return 0.
```
try:
x[0-1][0-1] # I want this to throw an exception
except:
print 0 # prints the integer 0
```
I think basically anytime the index is negative, throw an exception. | You can create your own list class, inheriting the default one, and implementing the `__getitem__` method that returns the element in a specified index:
```
class MyList(list):
def __getitem__(self, index):
if index < 0:
raise IndexError("list index out of range")
return super(MyList, self).__getitem__(index)
```
Example:
```
>>> l = MyList([1, 2, 3])
>>> l[-1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __getitem__
IndexError: list index out of range
>>> l[0]
1
>>> l[3]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __getitem__
IndexError: list index out of range
``` |
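Putting this together with the return-0-on-failure behavior the question asked for (a self-contained sketch that repeats the class above):

```python
class MyList(list):
    def __getitem__(self, index):
        if index < 0:
            raise IndexError("list index out of range")
        return super(MyList, self).__getitem__(index)

x = MyList(MyList(row) for row in [[1, 2, 3], [4, 5, 6], [7, 8, 9]])
try:
    value = x[-1][-1]   # raises instead of silently wrapping around to 9
except IndexError:
    value = 0
print(value)            # 0
```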
Finding highest product of three numbers | 29,710,357 | 23 | 2015-04-17T22:18:34Z | 29,710,417 | 54 | 2015-04-17T22:24:20Z | [
"python",
"algorithm"
] | Given an array of ints, `arrayofints`, find the highest product, `Highestproduct`, you can get from three of the integers. The input array of ints will always have at least three integers.
So I've popped three numbers from `arrayofints` and stuck them in `highestproduct`:
```
Highestproduct = arrayofints[:2]
for item in arrayofints[3:]:
If min(Highestproduct) < item:
Highestproduct[highestproduct.index(min(Highestproduct))] = item
```
If `min` of `highestproduct` less than item: Replace the lowest number with the current number.
This would end up with highest product, but apparently there is a better solution. What's wrong with my approach? Would my solution be O(n)? | Keep track of the two minimal elements and three maximal elements, the answer should be `min1 * min2 * max1` or `max1 * max2 * max3`.
To get the maximum product of 3 ints we have to choose 3 maximum elements. However there is a catch that we can substitute 2 of the smallest of 3 max elements with the 2 min ints. If both smallest ints are negative their product is positive so `min1 * min2` might be bigger than `max2 * max3` (where `max2` and `max3` are 2 of the smallest of 3 max elements from the array).
This runs in `O(n)` time. |
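A sketch of that single-pass idea, using `heapq` to pull out the extremes in `O(n)` (the function name and structure are illustrative, not from the original answer):

```python
import heapq

def highest_product_of_three(nums):
    # Compare max1 * max2 * max3 against min1 * min2 * max1.
    max1, max2, max3 = heapq.nlargest(3, nums)
    min1, min2 = heapq.nsmallest(2, nums)
    return max(max1 * max2 * max3, min1 * min2 * max1)

print(highest_product_of_three([-10, -10, 1, 3, 2]))   # 300
```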
What is the use of buffering in python's built-in open() function? | 29,712,445 | 9 | 2015-04-18T03:08:00Z | 29,712,601 | 7 | 2015-04-18T03:37:48Z | [
"python",
"python-2.7"
] | Python Documentation : <https://docs.python.org/2/library/functions.html#open>
```
open(name[, mode[, buffering]])
```
The above documentation says "The optional buffering argument specifies the file's desired buffer size: 0 means unbuffered, 1 means line buffered, any other positive value means use a buffer of (approximately) that size (in bytes). A negative buffering means to use the system default. If omitted, the system default is used.".
When I use
```
filedata = open(file.txt,"r",0)
```
or
```
filedata = open(file.txt,"r",1)
```
or
```
filedata = open(file.txt,"r",2)
```
or
```
filedata = open(file.txt,"r",-1)
```
or
```
filedata = open(file.txt,"r")
```
The output has no change. Each line shown above prints at the same speed.
output:
> Mr. Bean is a British television programme series of fifteen 25-
>
> minute episodes written by Robin Driscoll and starring Rowan Atkinson
> as
>
> the title character. Different episodes were also written by Robin
>
> Driscoll and Richard Curtis, and one by Ben Elton. Thirteen of the
>
> episodes were broadcast on ITV, from the pilot on 1 January 1990,
> until
>
> "Goodnight Mr. Bean" on 31 October 1995. A clip show, "The Best Bits
> of
>
> Mr. Bean", was broadcast on 15 December 1995, and one episode, "Hair
> by
>
> Mr. Bean of London", was not broadcast until 2006 on
> Nickelodeon.
Then how is the buffering parameter in the `open()` function useful? What value of that buffering parameter is best to use? | Enabling buffering means that you're not directly interfacing with the OS's representation of a file, or its file system API. Instead, only a chunk of data is read from the raw OS filestream into a buffer until it is consumed, at which point more data is fetched into the buffer. In terms of the objects you get, you'll get a `BufferedIOBase` object wrapping an underlying `RawIOBase` (which represents the raw file stream).
What is the benefit of this? Well interfacing with the raw stream might have high latency, because the operating system has to fool around with physical objects like the hard disk, and may not be suitable in many cases. Let's say you want to read three letters from a file every 5ms and your file is on a crusty old hard disk, or even a network file system. Instead of trying to read from the raw filestream every 5ms, it is better to load a bunch of the file into a buffer in memory, then consume it at will.
What size of buffer you choose will depend on how you're consuming the data. For the example above, a buffer size of 1 char would be awful, 3 chars would be alright, and any large multiple of 3 chars that doesn't cause a noticeable delay for your users would be ideal. |
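As a rough sketch (the file name here is made up), the visible result is identical no matter which buffer size you pass; only the underlying I/O pattern differs:

```python
import os
import tempfile

# Write a small file, then read it back with several buffer sizes.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello world\n")

results = []
for buf in (1, 4096, -1):   # line-buffered, 4 KiB buffer, system default
    with open(path, "r", buf) as f:
        results.append(f.read())

print(results == ["hello world\n"] * 3)   # True
```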
How to use Twisted to check Gmail with OAuth2.0 authentication | 29,712,644 | 10 | 2015-04-18T03:45:49Z | 29,907,859 | 14 | 2015-04-27T23:44:17Z | [
"python",
"oauth-2.0",
"gmail",
"google-oauth",
"twisted"
] | I had a working IMAP client for Google mail, however it recently stopped working. I believe the problem is that gmail no longer allows TTL username/password logins, but now requires OAuth2.0.
I would like to know the best way to alter my example below such that my twisted IMAP client authenticates using OAuth2.0. (And doing so without Google API packages, if that's possible.)
**Example using username/password login (no longer works)**
```
class AriSBDGmailImap4Client(imap4.IMAP4Client):
'''
client to fetch and process SBD emails from gmail. the messages
contained in the emails are sent to the AriSBDStationProtocol for
this sbd modem.
'''
def __init__(self, contextFactory=None):
imap4.IMAP4Client.__init__(self, contextFactory)
@defer.inlineCallbacks
def serverGreeting(self, caps):
# log in
try:
# the line below no longer works for gmail
yield self.login(mailuser, mailpass)
try:
yield self.uponAuthentication()
except Exception as e:
uponFail(e, "uponAuthentication")
except Exception as e:
uponFail(e, "logging in")
# done. log out
try:
yield self.logout()
except Exception as e:
uponFail(e, "logging out")
@defer.inlineCallbacks
def uponAuthentication(self):
try:
yield self.select('Inbox')
try:
# read messages, etc, etc
pass
except Exception as e:
uponFail(e, "searching unread")
except Exception as e:
uponFail(e, "selecting inbox")
```
I have a trivial factory for this client. It gets started by using `reactor.connectSSL` with Google mail's host url and port.
I have followed the directions at <https://developers.google.com/gmail/api/quickstart/quickstart-python> for an "installed app" (but I don't know if this was the right choice). I can run their "quickstart.py" example successfully.
**My quick and dirty attempt (does not work)**
```
@defer.inlineCallbacks
def serverGreeting(self, caps):
# log in
try:
#yield self.login(mailuser, mailpass)
flow = yield threads.deferToThread(
oauth2client.client.flow_from_clientsecrets,
filename=CLIENT_SECRET_FILE,
scope=OAUTH_SCOPE)
http = httplib2.Http()
credentials = yield threads.deferToThread( STORAGE.get )
if credentials is None or credentials.invalid:
parser = argparse.ArgumentParser(
parents=[oauth2client.tools.argparser])
flags = yield threads.deferToThread( parser.parse_args )
credentials = yield threads.deferToThread(
oauth2client.tools.run_flow,
flow=flow,
storage=STORAGE,
flags=flags, http=http)
http = yield threads.deferToThread(
credentials.authorize, http)
gmail_service = yield threads.deferToThread(
apiclient.discovery.build,
serviceName='gmail',
version='v1',
http=http)
self.state = 'auth'
try:
yield self.uponAuthentication()
except Exception as e:
uponFail(e, "uponAuthentication")
except Exception as e:
uponFail(e, "logging in")
# done. log out
try:
yield self.logout()
except Exception as e:
uponFail(e, "logging out")
```
I basically just copied over "quickstart.py" into `serverGreeting` and then tried to set the client state to "auth".
This authenticates just fine, but then twisted is unable to select the inbox:
> [AriSBDGmailImap4Client (TLSMemoryBIOProtocol),client] FAIL: Unknown command {random gibberish}
The random gibberish has letters and numbers and is different each time the select inbox command fails.
Thanks for your help! | After a lot of reading and testing, I was finally able to implement a working log-in to gmail using OAuth2.
One important note was that the 2-step process using a "service account" did **NOT** work for me. I'm still not clear why this process can't be used, but the service account does not seem to have access to the gmail in the same account. This is true even when the service account has "can edit" permissions and the Gmail API is enabled.
**Useful References**
Overview of using
OAuth2 <https://developers.google.com/identity/protocols/OAuth2>
Guide for using OAuth2 with "Installed Applications"
<https://developers.google.com/identity/protocols/OAuth2InstalledApp>
Guide for setting up the account to use OAuth2 with "Installed Applications"
<https://developers.google.com/api-client-library/python/auth/installed-app>
Collection of OAuth2 routines without the full Google API
<https://code.google.com/p/google-mail-oauth2-tools/wiki/OAuth2DotPyRunThrough>
**Step 1 - Get a Google Client ID**
Log in with the gmail account to <https://console.developers.google.com/>
Start a project, enable Gmail API and create a new client id for an installed application. Instructions at <https://developers.google.com/api-client-library/python/auth/installed-app#creatingcred>
Click the "Download JSON" button and save this file somewhere that will be inaccessible to the public (so probably not in the code repository).
**Step 2 - Get the Google OAuth2 Python tools**
Download the oauth2.py script from <https://code.google.com/p/google-mail-oauth2-tools/wiki/OAuth2DotPyRunThrough>
**Step 3 - Get the authorization URL**
Use the script from Step 2 to obtain a URL allowing you to authorize your Google project.
In the terminal:
`python oauth2.py --user={[email protected]} --client_id={your client_id from the json file} --client_secret={your client_secret from the json file} --generate_oauth2_token`
**Step 4 - Get the Authorization Code**
Paste the URL from Step 3 into your browser and click the "accept" button.
Copy the code from the web page.
Paste the code into the terminal and hit enter. You will obtain:
```
To authorize token, visit this url and follow the directions: https://accounts.google.com/o/oauth2/auth?client_id{...}
Enter verification code: {...}
Refresh Token: {...}
Access Token: {...}
Access Token Expiration Seconds: 3600
```
**Step 5 - Save the Refresh Token**
Copy the refresh token from the terminal and save it somewhere. In this example, I save it to a json-formatted text file with the key "Refresh Token". But it could also be saved to a private database.
Make sure the refresh token cannot be accessed by the public!
**Step 6 - Make a Twisted Authenticator**
Here is a working example of an OAuth2 authenticator. It requires the oauth2.py script from Step 2.
```
import oauth2
import json
import zope.interface

from twisted.internet import defer, threads
from twisted.mail import imap4
MY_GMAIL = {your gmail address}
REFRESH_TOKEN_SECRET_FILE = {name of your refresh token file from Step 5}
CLIENT_SECRET_FILE = {name of your cliend json file from Step 1}
class GmailOAuthAuthenticator():
zope.interface.implements(imap4.IClientAuthentication)
authName = "XOAUTH2"
tokenTimeout = 3300 # 5 mins short of the real timeout (1 hour)
def __init__(self, reactr):
self.token = None
self.reactor = reactr
self.expire = None
@defer.inlineCallbacks
def getToken(self):
if ( (self.token==None) or (self.reactor.seconds() > self.expire) ):
rt = None
with open(REFRESH_TOKEN_SECRET_FILE) as f:
rt = json.load(f)
cl = None
with open(CLIENT_SECRET_FILE) as f:
cl = json.load(f)
self.token = yield threads.deferToThread(
oauth2.RefreshToken,
client_id = cl['installed']['client_id'],
client_secret = cl['installed']['client_secret'],
refresh_token = rt['Refresh Token'] )
self.expire = self.reactor.seconds() + self.tokenTimeout
def getName(self):
return self.authName
def challengeResponse(self, secret, chal):
# we MUST already have the token
# (allow an exception to be thrown if not)
t = self.token['access_token']
ret = oauth2.GenerateOAuth2String(MY_GMAIL, t, False)
return ret
```
**Step 7 - Register the Authenticator for the Protocol**
In the IMAP4ClientFactory:
```
def buildProtocol(self, addr):
p = self.protocol(self.ctx)
p.factory = self
x = GmailOAuthAuthenticator(self.reactor)
p.registerAuthenticator(x)
return p
```
**Step 8 - Use the Access Token to authenticate**
Instead of using "login", get the Access Token (if necessary) and then use authenticate.
Altering the example code from the question:
```
@defer.inlineCallbacks
def serverGreeting(self, caps):
# log in
try:
# the line below no longer works for gmail
# yield self.login(mailuser, mailpass)
if GmailOAuthAuthenticator.authName in self.authenticators:
                yield self.authenticators[GmailOAuthAuthenticator.authName].getToken()
yield self.authenticate("")
try:
yield self.uponAuthentication()
except Exception as e:
uponFail(e, "uponAuthentication")
except Exception as e:
uponFail(e, "logging in")
``` |
An extra parentheses in a call to a nested function in Python | 29,712,760 | 6 | 2015-04-18T04:10:23Z | 29,712,797 | 7 | 2015-04-18T04:17:16Z | [
"python",
"function",
"nested"
] | I'm studying a module from chapter 6 of Mark Pilgrim's Dive Into Python book, and I'm kind of stuck on what this line of code `return [getFileInfoClass(f)(f) for f in fileList]` does. `getFileInfoClass` is a nested function, and I was wondering what the duplicate `(f)`, the extra set of parentheses, is for. I was hoping someone can help me out. Here's the complete function:
```
def listDirectory(directory, fileExtList):
"get list of file info objects for files of particular extensions"
fileList = [os.path.normcase(f) for f in os.listdir(directory)]
fileList = [os.path.join(directory, f) for f in fileList \
if os.path.splitext(f)[1] in fileExtList]
def getFileInfoClass(filename, module=sys.modules[FileInfo.__module__]):
"get file info class from filename extension"
subclass = "%sFileInfo" % os.path.splitext(filename)[1].upper()[1:]
return hasattr(module, subclass) and getattr(module, subclass) or FileInfo
return [getFileInfoClass(f)(f) for f in fileList]
``` | `getFileInfoClass` returns a `class`; classes are then callable themselves -- calling a `class` by just a name returns an instance of it. The two pairs of parens in quick succession are just shorthand. It is effectively:
```
file_info_class = getFileInfoClass(f)
file_info_instance = file_info_class(f)
```
Generally, two pairs of parens in quick succession like that are probably a questionable choice, as the result isn't very readable, but I suppose in this case the writer deemed it reasonable because it allowed him or her to put everything inside a single list comprehension.
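The same two-step call can be reproduced with any function that returns a class; a minimal standalone sketch (the names here are hypothetical, not from the book):

```python
class TXTFileInfo:
    def __init__(self, filename):
        self.filename = filename

def get_file_info_class(filename):
    return TXTFileInfo   # returns the class object itself, not an instance

# First call selects the class, second call instantiates it.
info = get_file_info_class("notes.txt")("notes.txt")
print(type(info).__name__, info.filename)   # TXTFileInfo notes.txt
```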
Recursive unittest discover | 29,713,541 | 4 | 2015-04-18T06:00:21Z | 29,715,336 | 7 | 2015-04-18T09:21:16Z | [
"python",
"python-2.7",
"python-unittest"
] | I have a package with a directory "tests" in which I'm storing my unit tests. My package looks like:
```
.
├── LICENSE
├── models
│   └── __init__.py
├── README.md
├── requirements.txt
├── tc.py
├── tests
│   ├── db
│   │   └── test_employee.py
│   └── test_tc.py
└── todo.txt
```
From my package directory, I want to be able to find both `tests/test_tc.py` and `tests/db/test_employee.py`. I'd prefer not to have to install a third-party library (`nose` or etc) or have to manually build a `TestSuite` to run this in.
Surely there's a way to tell `unittest discover` not to stop looking once it's found a test? `python -m unittest discover -s tests` will find `tests/test_tc.py` and `python -m unittest discover -s tests/db` will find `tests/db/test_employee.py`. Isn't there a way to find both? | In doing a bit of digging, it seems that as long as deeper modules remain importable, they'll be discovered via `python -m unittest discover`. The solution, then, was simply to add a `__init__.py` file to each directory to make them packages.
```
.
├── LICENSE
├── models
│   └── __init__.py
├── README.md
├── requirements.txt
├── tc.py
├── tests
│   ├── db
│   │   ├── __init__.py # NEW
│   │   └── test_employee.py
│   ├── __init__.py # NEW
│   └── test_tc.py
└── todo.txt
```
So long as each directory has an `__init__.py`, `python -m unittest discover` can import the relevant `test_*` module. |
Django check if checkbox is selected | 29,714,763 | 4 | 2015-04-18T08:20:23Z | 29,715,689 | 8 | 2015-04-18T09:55:27Z | [
"python",
"django",
"checkbox"
] | I'm currently working on a fairly simple django project and could use some help. It's just a simple database query front end.
Currently I am stuck on refining the search using checkboxes, radio buttons etc
The issue I'm having is figuring out how to know when a checkbox (or multiple) is selected. My code so far is as such:
`views.py`
```
def search(request):
if 'q' in request.GET:
q = request.GET['q']
if not q:
error = True;
elif len(q) > 22:
error = True;
else:
sequence = Targets.objects.filter(gene__icontains=q)
request.session[key] = pickle.dumps(sequence.query)
return render(request, 'result.html', {'sequence' : sequence, 'query' : q, 'error' : False})
return render(request, 'search.html', {'error': True})
```
search.html
```
<p>This is a test site</p></center>
<hr>
<center>
{% if error == true %}
<p><font color="red">Please enter a valid search term</p>
{% endif %}
<form action="" method="get">
<input type="text" name="q">
<input type="submit" value="Search"><br>
</form>
<form action="" method="post">
<input type='radio' name='locationbox' id='l_box1'> Display Location
<input type='radio' name='displaybox' id='d_box2'> Display Direction
</form>
</center>
```
My current idea is that I check which checkboxes/radio buttons are selected and depending which are, the right data will be queried and displayed in a table.
So specifically:
How do I check if specific check-boxes are checked? and how do I pass this information onto `views.py` | **Radio Buttons:**
In the HTML for your radio buttons, you need all related radio inputs to share the same name, have a predefined "value" attribute, and optimally, have a surrounding label tag, like this:
```
<form action="" method="post">
<label for="l_box1"><input type="radio" name="display_type" value="locationbox" id="l_box1">Display Location</label>
<label for="d_box2"><input type="radio" name="display_type" value="displaybox" id="d_box2"> Display Direction</label>
</form>
```
Then in your view, you can look up which was selected by checking for the shared "name" attribute in the POST data. It's value will be the associated "value" attribute of the HTML input tag:
```
# views.py
def my_view(request):
...
if request.method == "POST":
display_type = request.POST.get("display_type", None)
if display_type in ["locationbox", "displaybox"]:
# Handle whichever was selected here
# But, this is not the best way to do it. See below...
```
That works, but it requires manual checks. It's better to create a Django form first. Then Django will do those checks for you:
forms.py:
```
from django import forms
DISPLAY_CHOICES = (
("locationbox", "Display Location"),
("displaybox", "Display Direction")
)
class MyForm(forms.Form):
display_type = forms.ChoiceField(widget=forms.RadioSelect, choices=DISPLAY_CHOICES)
```
your\_template.html:
```
<form action="" method="post">
{# This will display the radio button HTML for you #}
{{ form.as_p }}
{# You'll need a submit button or similar here to actually send the form #}
</form>
```
views.py:
```
from .forms import MyForm
from django.shortcuts import render
def my_view(request):
...
form = MyForm(request.POST or None)
if request.method == "POST":
# Have Django validate the form for you
if form.is_valid():
# The "display_type" key is now guaranteed to exist and
# guaranteed to be "displaybox" or "locationbox"
display_type = request.POST["display_type"]
...
# This will display the blank form for a GET request
# or show the errors on a POSTed form that was invalid
return render(request, 'your_template.html', {'form': form})
```
**Checkboxes:**
Checkboxes work like this:
forms.py:
```
class MyForm(forms.Form):
# For BooleanFields, required=False means that Django's validation
# will accept a checked or unchecked value, while required=True
# will validate that the user MUST check the box.
something_truthy = forms.BooleanField(required=False)
```
views.py:
```
def my_view(request):
...
form = MyForm(request.POST or None)
if request.method == "POST":
if form.is_valid():
...
            # Unchecked boxes are absent from request.POST entirely, so read
            # the validated value from form.cleaned_data instead.
            if form.cleaned_data["something_truthy"]:
                # Checkbox was checked
...
```
**Further reading:**
<https://docs.djangoproject.com/en/1.8/ref/forms/fields/#choicefield>
<https://docs.djangoproject.com/en/1.8/ref/forms/widgets/#radioselect>
<https://docs.djangoproject.com/en/1.8/ref/forms/fields/#booleanfield> |
How can I check if a list index exists? | 29,715,501 | 5 | 2015-04-18T09:37:46Z | 29,715,530 | 8 | 2015-04-18T09:40:28Z | [
"python",
"list"
] | Seems as though
```
if not mylist[1]:
return False
```
Doesn't work. | You just have to check if the index you want is in the range of `0` and the length of the list, like this
```
if 0 <= index < len(list):
```
it is actually internally evaluated as
```
if (0 <= index) and (index < len(list)):
```
So, that condition checks if the index is within the range [0, length of list).
**Note:** Python supports negative indexing. Quoting Python [documentation](https://docs.python.org/3/library/stdtypes.html#common-sequence-operations),
> If `i` or `j` is negative, the index is relative to the end of the string: `len(s) + i` or `len(s) + j` is substituted. But note that -0 is still 0.
It means that whenever you use negative indexing, the value will be added to the length of the list and the result will be used. So, `list[-1]` would be giving you the element `list[-1 + len(list)]`.
So, if you want to allow negative indexes, then you can simply check if the index doesn't exceed the length of the list, like this
```
if index < len(list):
```
---
Another way to do this is, excepting `IndexError`, like this
```
a = []
try:
a[0]
except IndexError:
return False
return True
```
When you are trying to access an element at an invalid index, an `IndexError` is raised. So, this method works.
---
**Note:** The method you mentioned in the question has a problem.
```
if not mylist[1]:
```
Let's say `1` is a valid index for `mylist`, but the element there is a [Falsy value](https://docs.python.org/2/library/stdtypes.html#truth-value-testing). Then `not` negates it, so the `if` condition evaluates as Truthy. So, it will return `False`, even though an element is actually present in the list. |
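Wrapping the range check in a small helper makes the intent explicit (a sketch; the function name is illustrative):

```python
def index_exists(lst, index):
    # True only for non-negative indexes inside the list's bounds.
    return 0 <= index < len(lst)

mylist = [10, 20, 30]
print(index_exists(mylist, 1))    # True
print(index_exists(mylist, 3))    # False
print(index_exists(mylist, -1))   # False
```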
How to get class when I can't use self - Python | 29,716,828 | 3 | 2015-04-18T11:41:44Z | 29,716,841 | 7 | 2015-04-18T11:42:56Z | [
"python",
"class",
"inheritance",
"python-3.x"
] | I have one weird problem. I have following code:
```
class A:
def f():
return __class__()
class B(A):
pass
a = A.f()
b = B.f()
print(a, b)
```
And output is something like this:
```
<__main__.A object at 0x01AF2630> <__main__.A object at 0x01B09B70>
```
So how can I get `B` instead of second `A`? | The [magic `__class__` closure](https://stackoverflow.com/questions/19776056/the-difference-between-super-method-versus-superself-class-self-method/19776143#19776143) is set for the *method context* and only really meant for use by `super()`.
For methods you'd want to use `self.__class__` instead:
```
return self.__class__()
```
or better still, use `type(self)`:
```
return type(self)()
```
If you want to be able to call the method on a class, then use the [`classmethod` decorator](https://docs.python.org/3/library/functions.html#classmethod) to be handed a reference to the class object, rather than remain unbound:
```
@classmethod
def f(cls):
return cls()
```
`classmethod`s are always bound to the class they are called on, so for `A.f()` that'd be `A`, for `B.f()` you get handed in `B`. |
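Applied to the code from the question, the `classmethod` version produces the expected subclass instance:

```python
class A:
    @classmethod
    def f(cls):
        return cls()

class B(A):
    pass

a = A.f()
b = B.f()
print(type(a).__name__, type(b).__name__)   # A B
```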
PySpark groupByKey returning pyspark.resultiterable.ResultIterable | 29,717,257 | 24 | 2015-04-18T12:18:49Z | 29,718,878 | 40 | 2015-04-18T14:52:02Z | [
"python",
"apache-spark",
"pyspark"
] | I am trying to figure out why my groupByKey is returning the following:
```
[(0, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a210>), (1, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a4d0>), (2, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a390>), (3, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a290>), (4, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a450>), (5, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a350>), (6, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a1d0>), (7, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a490>), (8, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a050>), (9, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a650>)]
```
I have flatMapped values that look like this:
```
[(0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D')]
```
I'm doing just a simple:
```
groupRDD = columnRDD.groupByKey()
``` | What you're getting back is an object which allows you to iterate over the results. You can turn the results of groupByKey into a list by calling list() on the values, e.g.
```
example = sc.parallelize([(0, u'D'), (0, u'D'), (1, u'E'), (2, u'F')])
example.groupByKey().collect()
# Gives [(0, <pyspark.resultiterable.ResultIterable object ......]
example.groupByKey().map(lambda x : (x[0], list(x[1]))).collect()
# Gives [(0, [u'D', u'D']), (1, [u'E']), (2, [u'F'])]
``` |
PySpark groupByKey returning pyspark.resultiterable.ResultIterable | 29,717,257 | 24 | 2015-04-18T12:18:49Z | 31,105,759 | 9 | 2015-06-28T23:15:56Z | [
"python",
"apache-spark",
"pyspark"
] | I am trying to figure out why my groupByKey is returning the following:
```
[(0, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a210>), (1, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a4d0>), (2, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a390>), (3, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a290>), (4, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a450>), (5, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a350>), (6, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a1d0>), (7, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a490>), (8, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a050>), (9, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a650>)]
```
I have flatMapped values that look like this:
```
[(0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D')]
```
I'm doing just a simple:
```
groupRDD = columnRDD.groupByKey()
``` | You can also use
```
example.groupByKey().mapValues(list)
``` |
How to read mp4 video to be processed by scikit-image? | 29,718,238 | 7 | 2015-04-18T13:49:12Z | 29,742,156 | 11 | 2015-04-20T07:54:59Z | [
"python",
"numpy",
"scikit-image"
] | I would like to apply a `scikit-image` function (specifically the template matching function `match_template`) to the frames of a `mp4` video, `h264` encoding. It's important for my application to track the time of each frame, but I know the framerate so I can easily calculate from the frame number.
Please note that I'm running on low resources, and I would like to keep dependencies as slim as possible: `numpy` is needed anyway, and since I'm planning to use `scikit-image`, I would avoid importing (and compiling) `openCV` just to read the video.
I see at the bottom of [this](http://scikit-image.org/docs/stable/user_guide/numpy_images.html) page that `scikit-image` can seamlessly process video stored as a `numpy` array, so obtaining that would be ideal. | The [Imageio](http://imageio.github.io/) Python package should do what you want. Here is a Python snippet using this package:
```
import pylab
import imageio
filename = '/tmp/file.mp4'
vid = imageio.get_reader(filename, 'ffmpeg')
nums = [10, 287]
for num in nums:
image = vid.get_data(num)
fig = pylab.figure()
fig.suptitle('image #{}'.format(num), fontsize=20)
pylab.imshow(image)
pylab.show()
```


You can also directly iterate over the images in the file ([see the documentation](http://imageio.readthedocs.org/en/latest/examples.html) ):
```
for i, im in enumerate(vid):
print('Mean of frame %i is %1.1f' % (i, im.mean()))
```
To install imageio you can use pip:
```
pip install imageio
```
Another solution would be to use [moviepy](http://zulko.github.io/moviepy/) (which uses similar code to read video), but I think imageio is lighter and does the job.
---
*response to first comment*
In order to check if the nominal frame rate is the same over the whole file, you can count the number of frames in the iterator:
```
count = 0
try:
for _ in vid:
count += 1
except RuntimeError:
    print('something went wrong in iterating, maybe wrong fps number')
finally:
    print('number of frames counted {}, number of frames in metadata {}'.format(count, vid.get_meta_data()['nframes']))
In [10]: something went wrong in iterating, maybe wrong fps number
number of frames counted 454, number of frames in metadata 461
```
In order to display the timestamp of each frame:
```
try:
for num, image in enumerate(vid.iter_data()):
if num % int(vid._meta['fps']):
continue
else:
fig = pylab.figure()
pylab.imshow(image)
timestamp = float(num)/ vid.get_meta_data()['fps']
print(timestamp)
fig.suptitle('image #{}, timestamp={}'.format(num, timestamp), fontsize=20)
pylab.show()
except RuntimeError:
print('something went wrong')
``` |
How to read mp4 video to be processed by scikit-image? | 29,718,238 | 7 | 2015-04-18T13:49:12Z | 32,041,825 | 7 | 2015-08-17T02:04:45Z | [
"python",
"numpy",
"scikit-image"
] | I would like to apply a `scikit-image` function (specifically the template matching function `match_template`) to the frames of a `mp4` video, `h264` encoding. It's important for my application to track the time of each frame, but I know the framerate so I can easily calculate from the frame number.
Please note that I'm running on low resources, and I would like to keep dependencies as slim as possible: `numpy` is needed anyway, and since I'm planning to use `scikit-image`, I would avoid importing (and compiling) `openCV` just to read the video.
I see at the bottom of [this](http://scikit-image.org/docs/stable/user_guide/numpy_images.html) page that `scikit-image` can seamlessly process video stored as a `numpy` array, so obtaining that would be ideal. | You could use [scikit-video](https://github.com/aizvorski/scikit-video/), like this:
```
from skvideo.io import VideoCapture
cap = VideoCapture(filename)
cap.open()
while True:
retval, image = cap.read()
# image is a numpy array containing the next frame
# do something with image here
if not retval:
break
```
This uses avconv or ffmpeg under the hood. The performance is quite good, with a small overhead to move the data into python compared to just decoding the video in avconv.
The advantage of scikit-video is that the API is exactly the same as the video reading/writing API of OpenCV; just replace cv2.VideoCapture with skvideo.io.VideoCapture. |
How to test each specific digit or character | 29,722,807 | 4 | 2015-04-18T21:01:29Z | 29,722,871 | 7 | 2015-04-18T21:08:22Z | [
"python",
"string",
"function",
"if-statement",
"input"
] | I would like to receive **5 digits** inputted by the user and then print something for each specific digit.
For example, if the user enters 12345, I would like to print a specific output for 1 first, then another output for 2, etc.
How would I go about doing this? I would prefer to create a function if possible.
```
#!/usr/bin/python3
zipcode = int(raw_input("Enter a zipcode: "))
if zipcode == 1:
print ":::||"
elif zipcode == 2:
print "::|:|"
elif zipcode == 3:
print "::||:"
elif zipcode == 4:
print ":|::|"
elif zipcode == 5:
print ":|:|:"
elif zipcode == 6:
print ":||::"
elif zipcode == 7:
print "|:::|"
elif zipcode == 8:
print "|::|:"
elif zipcode == 9:
print "|:|::"
elif zipcode == 0:
print "||:::"
``` | You could use a [dictionary](https://docs.python.org/2/tutorial/datastructures.html#dictionaries) and then iterate through the input:
```
zipcode = raw_input("Enter a zipcode: ")
codes={1:":::||",2:"::|:|",3:"::||:",4:":|::|",5:":|:|:",6:":||::",7:"|:::|",8:"|::|:",9:"|:|::",0:"||:::"}
for num in zipcode:
print codes[int(num)], #add a comma here if you want it on the same line
```
This would give you:
```
>>>
Enter a zipcode: 54321
:|:|: :|::| ::||: ::|:| :::||
```
**EDIT:**
For no spaces:
```
zipcode = raw_input("Enter a zipcode: ")
codes={1:":::||",2:"::|:|",3:"::||:",4:":|::|",5:":|:|:",6:":||::",7:"|:::|",8:"|::|:",9:"|:|::",0:"||:::"}
L = [] #create a list
for num in zipcode:
L.append(codes[int(num)]) #append the values to a list
print ''.join(L) #join them together and then print
```
Now this would print:
```
>>>
Enter a zipcode: 54321
:|:|::|::|::||:::|:|:::||
``` |
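A variant of the same idea, for the record: `str.join` accepts a generator expression directly, so the intermediate list isn't strictly needed. A minimal sketch (the `encode` helper name is just illustrative):

```python
# Same lookup table as above; join a generator directly, no temporary list.
codes = {1: ":::||", 2: "::|:|", 3: "::||:", 4: ":|::|", 5: ":|:|:",
         6: ":||::", 7: "|:::|", 8: "|::|:", 9: "|:|::", 0: "||:::"}

def encode(zipcode):
    # each digit character is looked up, then all pieces are concatenated
    return ''.join(codes[int(digit)] for digit in zipcode)

encoded = encode("54321")
```

This gives the "no spaces" output in one expression instead of building `L` first.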
Custom sort in Python 3 | 29,726,068 | 5 | 2015-04-19T05:10:34Z | 29,726,111 | 7 | 2015-04-19T05:18:22Z | [
"java",
"python",
"sorting"
] | I am starting out on learning Python 3.
I am wondering how to perform a custom sort. For instance, I might want to sort a list of animals in the following manner: sort by first character ascending, then by length descending, then by alphanumeric ascending.
A list made up of "ant", "antelope", "zebra", "anteater" when properly sorted would become "anteater", "antelope", "ant", "zebra".
I have read a little on the docs but don't quite get the "key" argument to the sort method. Any examples?
PS: this isn't a College assignment question. I am just looking to play around with python a bit.
I learnt java a long, long time ago and might have implemented the custom sort something like the following:
```
import java.util.*;
public class sortDemo implements Comparator<String> {
public static void main(String[] args) {
ArrayList<String> animals = new ArrayList<String>();
animals.add("ant");
animals.add("antelope");
animals.add("zebra");
animals.add("anteater");
for (String a:animals){
System.out.println(a);
}
System.out.println();
// want output to be anteater, antelope, ant, zebra following the sort
Collections.sort(animals, new sortDemo());
for (String a:animals){
System.out.println(a);
}
}
public int compare(String s1, String s2) {
if (s1.charAt(0) > s2.charAt(0)){
return 1;
} else if (s1.charAt(0) == s2.charAt(0)){
if(s1.length() < s2.length()) {
return 1;
} else if (s1.length() == s2.length()) {
return s1.compareTo(s2);
} else {
return -1;
}
} else {
return -1;
}
}
}
``` | The sorting key is a function that, given a list element, returns a value that Python knows how to compare natively. For example, Python knows how to compare integers and strings.
Python can also compare tuples and lists that are composed of things it knows how to compare. The way tuples and lists get compared is that earlier items in the tuple or list take precedence over later values, just as you would expect.
In your case, you would want to make the following key function:
```
lambda name: (name[0], -len(name), name)
```
Items with smaller keys always come earlier in a sorted list. Thus, a smaller initial character causes an animal to come earlier. If two names have the same initial, a longer name length causes an animal to come earlier because the negative name length is smaller. Finally, if two animals' names have the same initial and the same length, the tie is broken by lexicographic order.
This program demonstrates how to sort a list with the above key function:
```
animals = ["ant", "antelope", "zebra", "anteater", "cod", "cat"]
animals.sort(key=lambda name: (name[0], -len(name), name))
print(animals)
``` |
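If you'd rather not mutate the original list, `sorted` takes the same `key` function and returns a new list. A quick sketch with the same data:

```python
animals = ["ant", "antelope", "zebra", "anteater", "cod", "cat"]

# same key: initial char ascending, then length descending, then lexicographic
ranked = sorted(animals, key=lambda name: (name[0], -len(name), name))
```

Here `ranked` is the sorted copy while `animals` keeps its original order.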
Python fails to open 11gb csv in r+ mode but opens in r mode | 29,729,082 | 11 | 2015-04-19T11:13:35Z | 29,742,974 | 18 | 2015-04-20T08:41:18Z | [
"python",
"windows",
"file-io"
] | I'm having problems with some code that loops through a bunch of .csvs and deletes the final line if there's nothing in it (i.e. files that end with the `\n` newline character)
My code works successfully on all files except one, which is the largest file in the directory at 11gb. The second largest file is 4.5gb.
The line it fails on is simply:
```
with open(path_str,"r+") as my_file:
```
and I get the following message:
```
IOError: [Errno 22] invalid mode ('r+') or filename: 'F:\\Shapefiles\\ab_premium\\processed_csvs\\a.csv'
```
The `path_str` I create using `os.path.join` to avoid errors, and I tried renaming the file to `a.csv` just to make sure there wasn't anything odd going on with the filename. This made no difference.
Even more strangely, the file is happy to open in r mode. I.e. the following code works fine:
```
with open(path_str,"r") as my_file:
```
I have tried navigating around the file in read mode, and it's happy to read characters at the start, end, and in the middle of the file.
Does anyone know of any limits on the size of file that Python can deal with or why I might be getting this error? I'm on Windows 7 64bit and have 16gb of RAM. | The default I/O stack in Python 2 is layered over CRT `FILE` streams. On Windows these are built on top of a POSIX emulation API that uses file descriptors (which in turn is layered over the user-mode Windows API, which is layered over the kernel-mode I/O system, which itself is a deeply layered system based on I/O request packets; the hardware is down there somewhere...). In the POSIX layer, opening a file with `_O_RDWR | _O_TEXT` mode (as in "r+"), requires seeking to the end of the file to remove CTRL+Z, if it's present. Here's a quote from the CRT's [`fopen`](https://msdn.microsoft.com/en-us/library/yeby3zcb%28v=vs.90%29.aspx) documentation:
> Open in text (translated) mode. In this mode, CTRL+Z is interpreted as
> an end-of-file character on input. In files opened for reading/writing
> with "a+", fopen checks for a CTRL+Z at the end of the file and
> removes it, if possible. This is done because using fseek and ftell to
> move within a file that ends with a CTRL+Z, may cause fseek to behave
> improperly near the end of the file.
The problem here is that the above check calls the 32-bit [`_lseek`](https://msdn.microsoft.com/en-us/library/1yee101t%28v=vs.90%29.aspx) (bear in mind that `sizeof long` is 4 bytes on 64-bit Windows, unlike most other 64-bit platforms), instead of `_lseeki64`. Obviously this fails for an 11 GB file. Specifically, [`SetFilePointer`](https://msdn.microsoft.com/en-us/library/aa365541) fails because it gets called with a `NULL` value for `lpDistanceToMoveHigh`. Here's the return value and `LastErrorValue` for the latter call:
```
0:000> kc 2
Call Site
KERNELBASE!SetFilePointer
MSVCR90!lseek_nolock
0:000> r rax
rax=00000000ffffffff
0:000> dt _TEB @$teb LastErrorValue
ntdll!_TEB
+0x068 LastErrorValue : 0x57
```
The error code 0x57 is [`ERROR_INVALID_PARAMETER`](https://msdn.microsoft.com/en-us/library/ms681382#ERROR_INVALID_PARAMETER). This is referring to `lpDistanceToMoveHigh` being `NULL` when trying to seek from the end of a large file.
To work around this problem with CRT `FILE` streams, I recommend opening the file using [`io.open`](https://docs.python.org/2/library/io.html#module-interface) instead. This is a backported implementation of Python 3's I/O stack. It always opens files in raw binary mode (`_O_BINARY`), and it implements its own buffering and text-mode layers on top of the raw layer.
```
>>> import io
>>> f = io.open('a.csv', 'r+')
>>> f
<_io.TextIOWrapper name='a.csv' encoding='cp1252'>
>>> f.buffer
<_io.BufferedRandom name='a.csv'>
>>> f.buffer.raw
<_io.FileIO name='a.csv' mode='rb+'>
>>> f.seek(0, os.SEEK_END)
11811160064L
``` |
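The layering described above is easy to inspect on any platform (a small sketch using a throwaway temp file; the 32-bit `_lseek` failure itself only shows up on Windows with very large files):

```python
import io
import os
import tempfile

# throwaway file standing in for the real CSV
fd, path = tempfile.mkstemp(suffix='.csv')
os.write(fd, b'a,b\n1,2\n')
os.close(fd)

f = io.open(path, 'r+')
# text layer -> buffered layer -> raw FileIO, as shown above
layers = (type(f).__name__, type(f.buffer).__name__, type(f.buffer.raw).__name__)
f.close()
os.remove(path)
```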
django-rest-framework: Cannot call `.is_valid()` as no `data=` keyword argument was passed when instantiating the serializer instance | 29,731,013 | 4 | 2015-04-19T14:14:24Z | 29,731,923 | 9 | 2015-04-19T15:35:32Z | [
"python",
"django",
"post",
"django-rest-framework"
] | I've the following model:
```
class NoteCategory(models.Model):
title = models.CharField(max_length=100, unique=True)
def __unicode__(self):
return '{}'.format(self.title)
class PatientNote(models.Model):
category = models.ForeignKey(NoteCategory)
patient = models.ForeignKey(Patient)
description = models.CharField(max_length=500)
created_at = models.DateTimeField(auto_now_add=True)
def __unicode__(self):
return '{}'.format(self.description)
```
And the following serializer:
```
class PatientNoteSerializer(serializers.ModelSerializer):
class Meta:
model = PatientNote
```
I just want to make a POST on the PatientNote. The GET works and also the POST on other models works properly:
```
class PatientNoteViewSet(APIView):
queryset = PatientNote.objects.all()
def post(self, request, format=None):
if not request.auth:
return Response({})
token = Token.objects.filter(key=request.auth)[0]
user = token.user
serializer = PatientNoteSerializer(request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```
The request.data is a QueryDict, i.e.,
```
<QueryDict: {u'category': [u'1'], u'patient': [u'1'], u'description': [u'da rest']}>
```
It would be able to populate the two FKs, patient and category, through their IDs and the description is a simple text.
The POST request is the following one (that works with other models):

Anyway, the POST response is 500 with the following error:
```
AssertionError at /api/notes/
```
Cannot call `.is_valid()` as no `data=` keyword argument was passed when instantiating the serializer instance.
The error is the same if I try to use it in the python shell. | When you want to serialize objects, you pass object as a first argument.
```
serializer = CommentSerializer(comment)
serializer.data
# {'email': u'[email protected]', 'content': u'foo bar', 'created': datetime.datetime(2012, 8, 22, 16, 20, 9, 822774)}
```
But when you want to deserialize you pass the data with a `data` kwarg.
```
serializer = CommentSerializer(data=data)
serializer.is_valid()
# True
serializer.validated_data
# {'content': 'foo bar', 'email': '[email protected]', 'created': datetime.datetime(2012, 08, 22, 16, 20, 09, 822243)}
```
So in your case you want to deserialize your post data, therefor you have to do:
```
serializer = PatientNoteSerializer(data=request.data)
``` |
Check if PyObject is None | 29,732,838 | 6 | 2015-04-19T16:50:53Z | 29,732,914 | 7 | 2015-04-19T16:57:28Z | [
"python",
"c++",
"nonetype",
"pyobject"
] | I would just like to check if a `PyObject` that I have is `None`. I naively expected that any `None` `Pyobject *` returned from a function would be a NULL pointer, but that doesn't seem to be the case.
So: how do I check if a `PyObject *` of mine points to a `None` object?
I know that there are macros like `PyInt_Check(PyObject *)` around, but I couldn't find anything like `PyNone_Check`. I thought I could just check the equality between my `PyObject` and `Py_None`, but turns out I don't even know how to make equality comparisons with this library. | You can just compare directly with `Py_None` using `==`:
```
if (obj == Py_None)
```
From the [docs](https://docs.python.org/2/c-api/none.html):
> Note that the `PyTypeObject` for `None` is not directly exposed in the
> Python/C API. **Since `None` is a singleton, testing for object identity
> (using `==` in C) is sufficient.** There is no `PyNone_Check()` function for
> the same reason. |
How to limit function parameter as array of fixed-size? | 29,733,062 | 7 | 2015-04-19T17:07:56Z | 29,733,129 | 10 | 2015-04-19T17:13:53Z | [
"python",
"arrays"
] | How can I limit python function parameter to accept only arrays of some fixed-size?
I tried this but it doesn't compile:
```
def func(a : array[2]):
```
with
```
TypeError: 'module' object is not subscriptable
```
*I'm new to this language.* | What about checking the length inside of the function? Here I just raised an error, but you could do anything.
```
def func(array):
if len(array) != 2:
raise ValueError("array with length 2 was expected")
# code here runs if len(array) == 2
``` |
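Exercising both paths of that check (the `x + y` body is just a placeholder for whatever the function actually does):

```python
def func(array):
    if len(array) != 2:
        raise ValueError("array with length 2 was expected")
    x, y = array          # unpacking is now safe: exactly two elements
    return x + y          # placeholder body

ok = func([3, 4])         # valid length, runs normally
try:
    func([1, 2, 3])
    raised = False
except ValueError:
    raised = True         # wrong length is rejected up front
```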
Django default=datetime.now() in models always saves same datetime after uwsgi reset | 29,733,203 | 3 | 2015-04-19T17:18:21Z | 29,733,276 | 13 | 2015-04-19T17:24:11Z | [
"python",
"django",
"datetime",
"uwsgi"
] | I have this code in my model:
```
added_time = models.DateTimeField(
default=datetime.datetime.now()
)
```
After I migrate and restart uwsgi, I get first datetime in MariaDB now, and all next - exactly the same as first after resetting uwsgi.
```
2015-04-19 16:01:46
2015-04-19 16:01:46
2015-04-19 16:01:46
2015-04-19 16:01:46
```
I fixed it by changing code to:
```
added_time = models.DateTimeField(
auto_now_add=True
)
```
Though I fixed the problem, I'm not really sure why there even was such behavior? | `default=datetime.datetime.now()` is evaluated at parsing/compile time of the model. It is not changed afterwards. To evaluate `now()` at the time of adding/updating an object, you have to use:
`default=datetime.datetime.now`, which sets `now` as the callable. Django will call it at runtime.
Your solution of using `auto_now_add` is of course also correct (yet semantically different -- a default is only used when you don't supply a value yourself at creation, whereas `auto_now_add` always overwrites the field with the creation time).
Don't despair, this is a very [common](http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments) mistake. |
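The same pitfall exists in plain Python, independent of Django: a default argument is evaluated once, when the `def` statement runs. A minimal sketch (function names are made up):

```python
import datetime
import time

def stamp(ts=datetime.datetime.now()):       # evaluated ONCE, at `def` time
    return ts

def stamp_fixed(factory=datetime.datetime.now):  # store the callable instead
    return factory()                             # evaluated on every call

a = stamp()
time.sleep(0.01)
b = stamp()            # identical to a: the frozen default, i.e. the bug

c = stamp_fixed()
time.sleep(0.01)
d = stamp_fixed()      # strictly later than c: the fix
```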
Python numpy keep a list of indices of a sorted 2D array | 29,734,660 | 6 | 2015-04-19T19:19:56Z | 29,734,789 | 7 | 2015-04-19T19:30:42Z | [
"python",
"arrays",
"sorting",
"numpy"
] | I have a 2D numpy array and I want to create a new 1D array where it is indices of numbers in the first array if they are sorted in an ascending order. For the following array:
```
A = [[1,0,2],
[0,3,0]]
```
I want this to be like:
```
B = [[1,1],[0,2],[0,0],[0,1],[1,0],[1,2]]
```
Any idea how it can be done in python using predefined functions?
Thanks | You can use [`argsort`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html) to sort the indices of flattened array, followed by [`unravel_index`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.unravel_index.html) to convert the flat index back to coordinates:
```
>>> i = (-a).argsort(axis=None, kind='mergesort')
>>> j = np.unravel_index(i, a.shape)
>>> np.vstack(j).T
array([[1, 1],
[0, 2],
[0, 0],
[0, 1],
[1, 0],
[1, 2]])
```
Negating with `-a` and passing `kind='mergesort'` sorts the array *stably* in descending order (to match the output you are looking for).
If you do not care about having a stable sort, replace the first line with:
```
>>> i = a.argsort(axis=None)[::-1]
``` |
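Putting the steps together for the example array from the question (assumes `numpy` is installed):

```python
import numpy as np

a = np.array([[1, 0, 2],
              [0, 3, 0]])

# stable descending sort of the flattened values, then back to 2D coordinates
i = (-a).argsort(axis=None, kind='mergesort')
coords = np.vstack(np.unravel_index(i, a.shape)).T
```

`coords` lists the (row, column) pairs from the largest value down, with ties kept in their original order.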
How do I assign each variable in a list a number, and then add the numbers up for the same variables? | 29,734,833 | 4 | 2015-04-19T19:33:13Z | 29,734,859 | 10 | 2015-04-19T19:36:02Z | [
"python",
"list"
] | For example, if `ZZAZAAZ` is input, the sum of `A` would be `14` (since its placement is `3,5,6`), while the sum of `Z` would be `14` `(1 + 2 + 4 + 7)`.
How would I do that? | You can use a generator expression within `sum` :
```
>>> s='ZZAZAAZ'
>>> sum(i for i,j in enumerate(s,1) if j=='A')
14
``` |
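To get the positional sum for every distinct character at once, the same generator fits inside a dict comprehension:

```python
s = 'ZZAZAAZ'

# 1-based position sums per character: A sits at 3+5+6, Z at 1+2+4+7
position_sums = {c: sum(i for i, ch in enumerate(s, 1) if ch == c)
                 for c in set(s)}
```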
python jump to a line in a txt file (a gzipped one) | 29,737,195 | 9 | 2015-04-19T23:40:53Z | 29,737,291 | 9 | 2015-04-19T23:54:40Z | [
"python",
"file-io"
] | I'm reading through a large file, and processing it.
I want to be able to jump to the middle of the file without it taking a long time.
right now I am doing:
```
f = gzip.open(input_name)
for i in range(1000000):
    f.readline() # just skipping the first 1M rows
for line in f:
do_something(line)
```
is there a faster way to skip the lines in the zipped file?
If I have to unzip it first, I'll do that, but there has to be a way.
It's of course a text file, with `\n` separating lines. | The nature of gzipping is such that there is no longer the concept of lines when the file is compressed -- it's just a binary blob. Check out [this](http://www.gzip.org/deflate.html) for an explanation of what gzip does.
To read the file, you'll need to decompress it -- the `gzip` module does a fine job of it. Like other answers, I'd also recommend `itertools` to do the jumping, as it will carefully make sure you don't pull things into memory, and it will get you there as fast as possible.
```
import gzip
import itertools

with gzip.open(filename) as f:
    # jump to `initial_row`
    for line in itertools.islice(f, initial_row, None):
        do_something(line)  # have a party
```
Alternatively, if this is a CSV that you're going to be working with, you could also try `pandas`' parser, as it can handle decompressing `gzip` itself. That would look like: `parsed_csv = pd.read_csv(filename, compression='gzip')`.
Also, to be extra clear, when you iterate over file objects in python -- i.e. like the `f` variable above -- you iterate over lines. You do not need to think about the '\n' characters. |
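A self-contained sketch of the skipping itself, using an in-memory gzip stream as a stand-in for the real file:

```python
import gzip
import io
import itertools

# build a tiny gzipped "file" in memory
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
    gz.write(b''.join(b'row %d\n' % i for i in range(6)))
buf.seek(0)

with gzip.GzipFile(fileobj=buf, mode='rb') as f:
    # decompresses and discards the first 4 lines, yields the rest lazily
    remaining = list(itertools.islice(f, 4, None))
```

Every skipped line still has to be decompressed; `islice` just avoids holding any of them in memory.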
Lists in Python becoming very complex | 29,738,066 | 2 | 2015-04-20T01:45:32Z | 29,738,109 | 7 | 2015-04-20T01:50:48Z | [
"python"
] | I have a simple function which takes a 2D list as a parameter:
```
def get_noise_estimate(imag_array):
temp = []
temp.append(imag_array[:20])
temp.append(imag_array[-20:])
```
In an example instance, it has 305 elements, each with 129 elements. I like to think of this has 305 columns each with 129 rows. I found out these things using my debugger:
```
>>> len(imag_array)
305
>>> len(imag_array[0])
129
```
When I get the first 20 and last 20 'columns' of the array and append it to `temp` I get the following:
```
>>> len(temp)
2
>>> len(temp[0])
20
>>> len(temp[1])
20
>>> len(temp[0][0])
129
```
My `temp` list has become quite confusing. I'd like `len(temp)` to equal 40 rather than 2 blocks of 20. To illustrate what I mean I'll show how I might do something similar in Java:
```
int[] temp = new int[40];
for(int i = 0; i < 20; i++){
temp[i] = imag_array[i];
}
for(int i = 0; i < 20; i++){
        temp[i+20] = imag_array[imag_array.length-20+i];
}
```
The above is off the top of my head but I hope it makes clear what I'm getting at. | Change:
```
temp.append(imag_array[:20])
temp.append(imag_array[-20:])
```
to
```
temp.extend(imag_array[:20])
temp.extend(imag_array[-20:])
```
The `append` command adds something as the last element of `temp`. So it's making the first element of `temp` be the list `imag_array[:20]`. `extend` takes all the elements of the list in the argument and adds each to the end of the list.
(and note Jay's answer: `temp = imag_array[:20]+imag_array[-20:]` is actually cleaner - it doesn't require predefining `temp` and avoids using `.append`) |
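The difference is easy to see on a toy array of the same shape (30 rows instead of 305):

```python
# toy "image": 30 rows of 3 values each
imag_array = [[col + 10 * row for col in range(3)] for row in range(30)]

appended = []
appended.append(imag_array[:2])   # one element, itself a list of 2 rows

extended = []
extended.extend(imag_array[:2])   # the 2 rows become elements directly...
extended.extend(imag_array[-2:])  # ...plus the last 2 rows: 4 elements total
```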
Comprehensions with multiple input sets | 29,738,661 | 5 | 2015-04-20T03:00:16Z | 29,738,721 | 7 | 2015-04-20T03:07:58Z | [
"python",
"list",
"dictionary",
"list-comprehension",
"dictionary-comprehension"
] | **I'm experimenting** with python and am stuck trying to understand the error messages in the context of what I am doing.
I'm playing around with comprehensions and trying to find a pattern to create a list/dictionary comprehension with more than one input set (assuming this is possible):
**Note:** Here the word *input set* means the input area of the comprehension. In setbuilder notation, from where python derived its comprehensions [Y for X in LIST], Y is the output function, X is the variable and LIST is the input set.
Assume I have the following working code:
```
from random import randint
mydict = {k: 0 for k in range(10)}
result = {randint(0,9): v + 1 for v in mydict.values()}
```
I'm not trying to do anything special about it. This is not even useful code because it won't work as expected. All elements in the dictionary will have the value 1, instead of just those pointed at by the random generator. **My only objective is to have a basis from where I start my attempt at working with a tuple of input sets.**
```
from random import randint
mydict = {k: 0 for k in range(10)}
result = {k: v + 1 for k, v in (randint(0,9), mydict.values())}
```
This option gives me: `TypeError: 'int' object is not iterable`.
By swapping the input sets and unpacking I have:
```
result = {k: v + 1 for *v, k in (mydict.values(), randint(0,9))}
```
But this option gives me: `TypeError: can only concatenate list (not "int") to list`
Are these errors appearing because I am trying to do something the language grammar does not understand, or am I missing something and I could in fact fix the code? | You will have to create a separate comprehension for the random numbers, as it currently stands, you have only one random number. Also, you will then need to zip the results to get a combined entity:
```
>>> from random import randint
>>> mydict = {k: 0 for k in range(10)}
>>> result = {k: v + 1 for k, v in zip([randint(0,9) for _ in range(10)] , mydict.values())}
>>> result
{2: 1, 3: 1, 4: 1, 5: 1, 8: 1, 9: 1}
```
Note that since your initial dict has the value `0` for all its keys, all the values in the result dict are `1` (`0+1`).
Also, since we are making the keys random, there can be possible overlaps (say `2` was generated twice), so that's why we don't see all the keys in the `result` dictionary.
As @wim notes in comments below, a better way to generate this result dictionary would be to use:
```
>>> {randint(0,9): v+1 for v in mydict.values()}
{0: 1, 1: 1, 2: 1, 3: 1, 6: 1, 7: 1}
``` |
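Seeding the generator makes the behaviour repeatable enough to check its properties (the seed value is arbitrary):

```python
import random

random.seed(0)                     # only so the sketch is repeatable
mydict = {k: 0 for k in range(10)}
random_keys = [random.randint(0, 9) for _ in range(10)]
result = {k: v + 1 for k, v in zip(random_keys, mydict.values())}
# every surviving value is 1, and duplicate random keys may collapse entries
```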
How can i get all models in django 1.8 | 29,738,976 | 11 | 2015-04-20T03:41:36Z | 29,739,109 | 23 | 2015-04-20T03:57:58Z | [
"python",
"django"
] | I am using this code in my admin.py
```
from django.db.models import get_models, get_app
for model in get_models(get_app('myapp')):
admin.site.register(model)
```
But i get warning that `get_models is deprecated`
How can i do that in django 1.8 | This should work,
```
from django.apps import apps
apps.get_models()
```
The `get_models` method returns a list of all installed models. You can also pass three keyword arguments `include_auto_created`, `include_deferred` and `include_swapped`.
If you want to get the models for a specific app, you can do something like this.
```
from django.apps import apps
myapp = apps.get_app_config('myapp')
myapp.models
```
This will return an OrderedDict instance of the models for that app. |
Error packaging Kivy with numpy library for Android using buildozer | 29,742,289 | 49 | 2015-04-20T08:03:29Z | 30,878,639 | 7 | 2015-06-16T21:34:24Z | [
"android",
"python",
"numpy",
"kivy"
] | I am trying to create an `Android` package of my `Kivy` application using `buildozer` but I am getting this error when I try to include the `numpy`:
resume of the error:
```
compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/home/joao/github/buildozer/.buildozer/android/platform/python-for-android/build/python-install/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c'
ccache: numpy/linalg/lapack_litemodule.c
ccache: numpy/linalg/python_xerbla.c
/usr/bin/gfortran -Wall -lm build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib -L/home/joao/github/buildozer/.buildozer/android/platform/python-for-android/build/python-install/lib -Lbuild/temp.linux-x86_64-2.7 -llapack -lblas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
unable to execute _configtest: Exec format error
error: Command "/usr/bin/gfortran -Wall -lm build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib -L/home/joao/github/buildozer/.buildozer/android/platform/python-for-android/build/python-install/lib -Lbuild/temp.linux-x86_64-2.7 -llapack -lblas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1
```
does anyone knows how to solve it?
P.S. I am using Ubuntu 14.04 64-bit | Try sudo apt-get install libatlas-base-dev it looks like you're missing some libraries |
Append to a list defined in a tuple - is it a bug? | 29,747,224 | 27 | 2015-04-20T11:56:33Z | 29,747,287 | 22 | 2015-04-20T11:58:53Z | [
"python",
"list",
"tuples"
] | So I have this code:
```
tup = ([1,2,3],[7,8,9])
tup[0] += (4,5,6)
```
which generates this error:
```
TypeError: 'tuple' object does not support item assignment
```
While this code:
```
tup = ([1,2,3],[7,8,9])
try:
tup[0] += (4,5,6)
except TypeError:
print tup
```
prints this:
```
([1, 2, 3, 4, 5, 6], [7, 8, 9])
```
Is this behavior expected?
## Note
I realize this is not a very common use case. However, while the error is expected, I did not expect the list to change. | Yes, it's expected.
A tuple cannot be changed. A tuple, like a list, is a structure that points to other objects. It doesn't care about what those objects are. They could be strings, numbers, tuples, lists, or other objects.
So doing anything to one of the objects contained in the tuple, including appending to that object if it's a list, isn't relevant to the semantics of the tuple.
(Imagine if you wrote a class that had methods on it that cause its internal state to change. You wouldn't expect it to be impossible to call those methods on an object based on where it's stored).
Or another example:
```
>>> l1 = [1, 2, 3]
>>> l2 = [4, 5, 6]
>>> t = (l1, l2)
>>> l3 = [l1, l2]
>>> l3[1].append(7)
```
Two mutable lists referenced by a list and by a tuple. Should I be able to do the last line (answer: yes). If you think the answer's no, why not? Should `t` change the semantics of `l3` (answer: no).
If you want an immutable object of sequential structures, it should be tuples all the way down.
# Why does it error?
This example uses the infix operator:
> Many operations have an âin-placeâ version. The following functions
> provide a more primitive access to in-place operators than the usual
> syntax does; for example, the statement x += y is equivalent to x =
> operator.iadd(x, y). Another way to put it is to say that z =
> operator.iadd(x, y) is equivalent to the compound statement z = x; z
> += y.
<https://docs.python.org/2/library/operator.html>
So this:
```
l = [1, 2, 3]
tup = (l,)
tup[0] += (4,5,6)
```
is equivalent to this:
```
l = [1, 2, 3]
tup = (l,)
x = tup[0]
x = x.__iadd__([4, 5, 6]) # like extend, but returns x instead of None
tup[0] = x
```
The `__iadd__` line succeeds, and modifies the first list. So the list has been changed. The `__iadd__` call returns the mutated list.
The second line tries to assign the list back to the tuple, and this fails.
So, at the end of the program, the list has been extended but the second part of the `+=` operation failed. For the specifics, see [this question](http://stackoverflow.com/questions/2022031/python-append-vs-operator-on-lists-why-do-these-give-different-results). |
Append to a list defined in a tuple - is it a bug? | 29,747,224 | 27 | 2015-04-20T11:56:33Z | 29,747,466 | 9 | 2015-04-20T12:07:48Z | [
"python",
"list",
"tuples"
] | So I have this code:
```
tup = ([1,2,3],[7,8,9])
tup[0] += (4,5,6)
```
which generates this error:
```
TypeError: 'tuple' object does not support item assignment
```
While this code:
```
tup = ([1,2,3],[7,8,9])
try:
tup[0] += (4,5,6)
except TypeError:
print tup
```
prints this:
```
([1, 2, 3, 4, 5, 6], [7, 8, 9])
```
Is this behavior expected?
## Note
I realize this is not a very common use case. However, while the error is expected, I did not expect the list change. | Well I guess `tup[0] += (4, 5, 6)` is translated to:
```
tup[0] = tup[0].__iadd__((4,5,6))
```
`tup[0].__iadd__((4,5,6))` is executed normally changing the list in the first element. But the assignment fails since tuples are immutables. |
Pass multiple args from bash into python | 29,750,203 | 7 | 2015-04-20T14:07:21Z | 29,750,269 | 15 | 2015-04-20T14:10:39Z | [
"python",
"string",
"bash",
"command-line-arguments"
] | I have a short inline python script that I call from a bash script, and I want to have it handle a multi-word variable (which comes from `$*`). I expected this to just work:
```
#!/bin/bash
arg="A B C"
python -c "print '"$arg"'"
```
but it doesn't:
```
File "<string>", line 1
print 'A
^
SyntaxError: EOL while scanning string literal
```
Why? | The BASH script is wrong.
```
#!/bin/bash
arg="A B C"
python -c "print '$arg'"
```
And output
```
$ sh test.sh
A B C
```
Note that to concatenate two string variables you don't need to put them outside the string constants |
Pass multiple args from bash into python | 29,750,203 | 7 | 2015-04-20T14:07:21Z | 29,750,786 | 10 | 2015-04-20T14:33:58Z | [
"python",
"string",
"bash",
"command-line-arguments"
] | I have a short inline python script that I call from a bash script, and I want to have it handle a multi-word variable (which comes from `$*`). I expected this to just work:
```
#!/bin/bash
arg="A B C"
python -c "print '"$arg"'"
```
but it doesn't:
```
File "<string>", line 1
print 'A
^
SyntaxError: EOL while scanning string literal
```
Why? | I would like to explain why your code doesn't work.
What you wanted to do is that:
```
arg="A B C"
python -c "print '""$arg""'"
```
Output:
```
A B C
```
The problem of your code is that `python -c "print '"$arg"'"` is parsed as `python -c "print '"A B C"'"` by the shell. See this:
```
arg="A B C"
python -c "print '"A B C"'"
#__________________^^^^^____
```
Output:
```
File "<string>", line 1
print 'A
SyntaxError: EOL while scanning string literal
```
Here you get a syntax error because the spaces prevent concatenation, so the following `B` and `C"'"` are interpreted as two different strings that are not part of the string passed as a command to python interpreter (which takes only the string following `-c` as command).
For better understanding:
```
arg="ABC"
python -c "print '"$arg"'"
```
Output:
```
ABC
``` |
How to find a Python package's dependencies | 29,751,572 | 12 | 2015-04-20T15:05:29Z | 29,751,732 | 13 | 2015-04-20T15:12:13Z | [
"python",
"pip"
] | How can you programmatically get a Python package's list of dependencies?
The standard `setup.py` has these documented, but I can't find an easy way to access it *from* either Python or the command line.
Ideally, I'm looking for something like:
```
$ pip install somepackage --only-list-deps
kombu>=3.0.8
billiard>=3.3.0.13
boto>=2.26
```
or:
```
>>> import package_deps
>>> package = package_deps.find('somepackage')
>>> print package.dependencies
['kombu>=3.0.8', 'billiard>=3.3.0.13', 'boto>=2.26']
```
Note, I'm not talking about importing a package and finding all referenced modules. While this might find most of the dependent packages, it wouldn't be able to find the minimum version number required. That's only stored in the setup.py. | Try to use `show` command in `pip`, for example:
```
$ pip show tornado
---
Name: tornado
Version: 4.1
Location: *****
Requires: certifi, backports.ssl-match-hostname
```
**Update** (retrieve deps with specified version):
```
from pip._vendor import pkg_resources
_package_name = 'somepackage'
_package = pkg_resources.working_set.by_key[_package_name]
print([str(r) for r in _package.requires()]) # retrieve deps from setup.py
```
---
```
Output: ['kombu>=3.0.8',
'billiard>=3.3.0.13',
'boto>=2.26']
``` |
class getting kwargs from enclosing scope | 29,759,387 | 4 | 2015-04-20T22:13:59Z | 29,759,456 | 7 | 2015-04-20T22:20:34Z | [
"python",
"dictionary",
"trie"
] | Python seems to be inferring some kwargs from the enclosing scope of a class method, and I'm not sure why. I'm implementing a Trie:
```
class TrieNode(object):
def __init__(self, value = None, children = {}):
self.children = children
self.value = value
def __getitem__(self, key):
if key == "":
return self.value
return self.children[key[0]].__getitem__(key[1:])
def __setitem__(self, key, value):
if key == "":
self.value = value
return
if key[0] not in self.children:
self.children[key[0]] = TrieNode()
self.children[key[0]].__setitem__(key[1:], value)
```
On the second to last line, I create a new TrieNode with, presumably, an empty dictionary of children. However, when I inspect the resulting data structure, all of the TrieNodes in the tree are, using the same children dictionary. Viz, if we do:
```
>>>test = TrieNode()
>>>test["pickle"] = 5
>>>test.children.keys()
['c', 'e', 'i', 'k', 'l', 'p']
```
Whereas the children of test should only consist of "p" pointing to a new TrieNode. On the other hand, if we go into the second to last line of that code and replace it with:
```
self.children[key[0]] = TrieNode(children = {})
```
Then it works as expected. Somehow, then, the self.children dictionary is getting passed implicitly as a kwarg to TrieNode(), but why? | You're having a [mutable default argument](http://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument) issue. Change your `__init__` function to be like this
```
def __init__(self, value=None, children=None):
if not children:
children = {}
```
The default value for children will only be evaluated once at the point of function creation, whereas you want it to be a new dict at within each call.
Here's a simple example of the problem using a list
```
>>> def f(seq=[]):
... seq.append('x') #append one 'x' to the argument
... print(seq) # print it
>>> f() # as expected
['x']
>>> f() # but this appends 'x' to the same list
['x', 'x']
>>> f() # again it grows
['x', 'x', 'x']
>>> f()
['x', 'x', 'x', 'x']
>>> f()
['x', 'x', 'x', 'x', 'x']
```
As the the answer I linked to describes, this bites every python programmer eventually. |
What is the proper way to print a nested list with the highest value in Python | 29,760,130 | 6 | 2015-04-20T23:19:35Z | 29,760,142 | 13 | 2015-04-20T23:20:38Z | [
"python",
"list",
"python-3.x",
"nested-lists"
] | I have a a nested list and I'm trying to get the sum and print the list that has the highest numerical value when the individual numbers are summed together
```
x = [[1,2,3],[4,5,6],[7,8,9]]
highest = list()
for i in x:
highest.append(sum(i))
for ind, a in enumerate(highest):
if a == max(highest):
print(x[ind])
```
I've been able to print out the results but I think there should be a simple and more Pythonic way of doing this (Maybe using a list comprehension).
How would I do this? | How about:
```
print(max(x, key=sum))
```
Demo:
```
>>> x = [[1,2,3],[4,5,6],[7,8,9]]
>>> print(max(x, key=sum))
[7, 8, 9]
```
This works because `max` (along with a number of other python builtins like `min`, `sort` ...) accepts a function to be used for the comparison. In this case, I just said that we should compare the elements in `x` based on their individual `sum` and Bob's our uncle, we're done! |
Scope of variables in python decorator | 29,760,593 | 17 | 2015-04-21T00:07:23Z | 29,760,724 | 14 | 2015-04-21T00:20:51Z | [
"python",
"python-3.x",
"decorator",
"python-decorators"
] | I'm having a very weird problem in a Python 3 decorator.
If I do this:
```
def rounds(nr_of_rounds):
def wrapper(func):
@wraps(func)
def inner(*args, **kwargs):
return nr_of_rounds
return inner
return wrapper
```
it works just fine. However, if I do this:
```
def rounds(nr_of_rounds):
def wrapper(func):
@wraps(func)
def inner(*args, **kwargs):
lst = []
while nr_of_rounds > 0:
lst.append(func(*args, **kwargs))
nr_of_rounds -= 1
return max(lst)
return inner
return wrapper
```
I get:
```
while nr_of_rounds > 0:
UnboundLocalError: local variable 'nr_of_rounds' referenced before assignment
```
In other words, I can use `nr_of_rounds`in the inner function if I use it in a return, but I can't do anything else with it. Why is that? | Since `nr_of_rounds` is picked up by the **closure**, you can think of it as a "read-only" variable. If you want to write to it (e.g. to decrement it), you need to tell python explicitly -- In this case, the python3.x `nonlocal` keyword would work.
As a brief explanation, what Cpython does when it encounters a function definition is it looks at the code and decides if all the variables are *local* or *non-local*. Local variables (by default) are anything that appear on the left-hand side of an assignment statement, loop variables and the input arguments. Every other name is non-local. This allows some neat optimizations1. To use a non-local variable the same way you would a local, you need to tell python explicitly either via a `global` or `nonlocal` statement. When python encounters something that it thinks *should* be a local, but really isn't, you get an `UnboundLocalError`.
1The Cpython bytecode generator turns the local names into indices in an array so that local name lookup (the LOAD\_FAST bytecode instruction) is as fast as indexing an array plus the normal bytecode overhead. |
How can I resolve 'django_content_type already exists'? | 29,760,817 | 39 | 2015-04-21T00:30:58Z | 29,760,818 | 68 | 2015-04-21T00:30:58Z | [
"python",
"django"
] | After upgrading to django 1.8 I'm recieving the error during migration:
```
ProgrammingError: relation "django_content_type" already exists
```
I'd be interested in the background behind this error, but more importantly,
How can I resolve it? | Initial migrations on a project can sometimes be troubleshot using --fake-initial
```
python manage.py migrate --fake-initial
```
It's new in 1.8. In 1.7, --fake-initial was an implicit default, but explicit in 1.8.
From the Docs:
> The --fake-initial option can be used to allow Django to skip an appâs initial migration if all database tables with the names of all models created by all CreateModel operations in that migration already exist. This option is intended for use when first running migrations against a database that preexisted the use of migrations. This option does not, however, check for matching database schema beyond matching table names and so is only safe to use if you are confident that your existing schema matches what is recorded in your initial migration.
<https://docs.djangoproject.com/en/1.8/ref/django-admin/#django-admin-option---fake-initial> |
Joining elements in a list without the join command | 29,761,800 | 15 | 2015-04-21T02:25:20Z | 29,761,895 | 13 | 2015-04-21T02:36:27Z | [
"python",
"list",
"join"
] | I need to join the elements in a list **without using the join command**, so if for example I have the list:
```
[12,4,15,11]
```
The output should be:
```
1241511
```
Here is my code so far:
```
def lists(list1):
answer = 0
h = len(list1)
while list1 != []:
answer = answer + list1[0] * 10 ** h
h = h - 1
list1.pop(0)
print(answer)
```
But, in the end, the answer ends up being `125610` which is clearly wrong.
I think the logic is OK, but I can't find the problem? | If you just want to print the number rather than `return` an actual `int`:
```
>>> a = [12,4,15,11]
>>> print(*a, sep='')
1241511
``` |
Understanding Markov Chains in terms of Matrix Multiplication | 29,763,108 | 2 | 2015-04-21T04:45:47Z | 29,763,320 | 7 | 2015-04-21T05:03:17Z | [
"python",
"numpy",
"probability",
"markov-chains"
] | In [a lecture on YouTube](https://www.youtube.com/watch?v=8AJPs3gvNlY&feature=player_detailpage#t=2011), a professor said Markov Chains could be simplified to `Start(S) * Transition Matrix(Q)^State#`
I'm trying to replicate this using numpy.
```
import numpy as np
S = np.zeros(shape=(1,2))
Q = np.zeros(shape=(2,2))
#starting state
S[0] = [.2,.8]
#transition matrix
Q[0] = [.9, .1]
Q[1] = [.7, .3]
```
If I do `print S.dot(Q).dot(Q)`, it gives me `[[0.848 0.152]]` which appears to be the correct answer (two steps into the future).
However, this doesn't exactly seem the same as `SQ^x`, so I tried `print S.dot(np.power(Q,2))`, but that gives me `[[0.554 0.074]]`. Where am I going wrong, or what don't I understand here? | The expressions `S.dot(Q).dot(Q)` and `S.dot(np.power(Q,2))` are not the same thing. The first is the behaviour you desire, while `S.dot(np.power(Q,2))` raises each element in `Q` to the second power. Documenation [here](http://docs.scipy.org/doc/numpy/reference/generated/numpy.power.html).
For a more compact notation than repeatedly chaining `.dot(Q)`, use:
```
S.dot(np.linalg.matrix_power(Q,n))
```
where `n` is the desired power. |
How to select all columns, except one column in pandas using .ix | 29,763,620 | 7 | 2015-04-21T05:24:59Z | 29,763,653 | 19 | 2015-04-21T05:27:40Z | [
"python",
"pandas"
] | I have a dataframe look like this:
```
import pandas
import numpy as np
df = DataFrame(np.random.rand(4,4), columns = list('abcd'))
df
a b c d
0 0.418762 0.042369 0.869203 0.972314
1 0.991058 0.510228 0.594784 0.534366
2 0.407472 0.259811 0.396664 0.894202
3 0.726168 0.139531 0.324932 0.906575
```
How I can get all columns except `column b` using `df.ix` | When you don't have a MultiIndex, `df.columns` is just an array of column names so you can do:
```
df.ix[:, df.columns != 'b']
a c d
0 0.561196 0.013768 0.772827
1 0.882641 0.615396 0.075381
2 0.368824 0.651378 0.397203
3 0.788730 0.568099 0.869127
``` |
How to select all columns, except one column in pandas using .ix | 29,763,620 | 7 | 2015-04-21T05:24:59Z | 37,717,675 | 9 | 2016-06-09T05:38:42Z | [
"python",
"pandas"
] | I have a dataframe look like this:
```
import pandas
import numpy as np
df = DataFrame(np.random.rand(4,4), columns = list('abcd'))
df
a b c d
0 0.418762 0.042369 0.869203 0.972314
1 0.991058 0.510228 0.594784 0.534366
2 0.407472 0.259811 0.396664 0.894202
3 0.726168 0.139531 0.324932 0.906575
```
How I can get all columns except `column b` using `df.ix` | The most readable and idiomatic way of doing this is `df.drop()`:
```
>>> df
a b c d
0 0.175127 0.191051 0.382122 0.869242
1 0.414376 0.300502 0.554819 0.497524
2 0.142878 0.406830 0.314240 0.093132
3 0.337368 0.851783 0.933441 0.949598
>>> df.drop('b', axis=1)
a c d
0 0.175127 0.382122 0.869242
1 0.414376 0.554819 0.497524
2 0.142878 0.314240 0.093132
3 0.337368 0.933441 0.949598
```
Note that by default, `.drop()` does not operate inplace; despite the ominous name, `df` is unharmed by this process. If you want to permanently remove `b` from `df`, do `df.drop('b', inplace=True)`.
`df.drop()` also accepts a list of labels, e.g. `df.drop(['a', 'b'], axis=1)` will drop column `a` and `b`. |
How / why does Python type hinting syntax work? | 29,770,412 | 15 | 2015-04-21T11:04:59Z | 29,770,490 | 25 | 2015-04-21T11:09:10Z | [
"python",
"python-3.x",
"type-hinting",
"pep"
] | I have just seen the following example in [PEP 484](https://www.python.org/dev/peps/pep-0484/#compatibility-with-other-uses-of-function-annotations):
```
def greeting(name: str) -> str:
return 'Hello ' + name
print(greeting('Martin'))
print(greeting(1))
```
As expected, this does not work in Python 2:
```
File "test.py", line 1
def greeting(name: str) -> str:
^
SyntaxError: invalid syntax
```
However, it works for Python 3:
```
Hello Martin
Traceback (most recent call last):
File "test.py", line 5, in <module>
print(greeting(1))
File "test.py", line 2, in greeting
return 'Hello ' + name
TypeError: Can't convert 'int' object to str implicitly
```
This was unexpected. It does not really check types yet, as you can see with the following example (it runs, but does not throw an exception):
```
def greeting(name: str) -> int:
return 'Hello ' + name
print(greeting('Martin'))
```
It seems as if after the `:` has to be the name of a function, but the function seems to be ignored:
```
def aha(something):
print("aha")
return something+"!"
def greeting(name: aha, foo) -> int:
return 'Hello ' + name + foo
print(greeting('Martin', 'ad'))
```
The same seems to be true for the name after `->`.
Is this type hinting syntax using something else (like Java Modeling language makes use of comments)? When was this syntax introduced to Python? Is there a way to do static type checking already with this Syntax? Does it always break Python 2 compatibility? | There is no type hinting going on here. All you did was provide *annotations*; these were introduced with [PEP 3107](https://www.python.org/dev/peps/pep-3107/) (only in Python 3, there is no support for this in Python 2); they let you annotate arguments and return values with arbitrary information for later inspection:
```
>>> greeting.__annotations__
{'name': <class 'str'>, 'return': <class 'str'>}
```
They are otherwise not consulted at all here. Instead, the error message you got is from trying to concatenate string and integer values *in the body of the function*:
```
>>> 'Hello ' + 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't convert 'int' object to str implicitly
```
It is a custom type error aimed at providing additional information as to why the `str` + `int` concatenation failed; it is thrown by the `str.__add__` method for any type that is not `str`:
```
>>> ''.__add__(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't convert 'int' object to str implicitly
>>> ''.__add__(True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't convert 'bool' object to str implicitly
```
PEP 484 then proposes to make use of those annotations to do actual static type checking with *additional tools*, but as the introduction of the PEP states:
> While these annotations are available at runtime through the usual `__annotations__` attribute, *no type checking happens at runtime*. Instead, the proposal assumes the existence of a separate off-line type checker which users can run over their source code voluntarily. Essentially, such a type checker acts as a very powerful linter.
Emphasis in the original.
The PEP was inspired by existing tools that use PEP 3107 annotations; specifically the [mypy project](http://www.mypy-lang.org/) (which is looping right back by adopting PEP 484), but also the [type hinting support in the PyCharm IDE](https://www.jetbrains.com/pycharm/help/type-hinting-in-pycharm.html) and the [pytypedecl project](https://github.com/google/pytypedecl). See Guido van Rossum's [original email kickstarting this effort](https://mail.python.org/pipermail/python-ideas/2014-August/028618.html) as well as a [follow-up email](https://mail.python.org/pipermail/python-ideas/2014-August/028742.html).
mypy apparently supports Python 2 by preprocessing the annotations, removing them before byte-compiling the source code for you, but you otherwise cannot normally use the syntax Python code meant to work in Python 2.
PEP 484 also describes the use of [stub files](https://www.python.org/dev/peps/pep-0484/#stub-files), which sit next to the regular Python files; these use the `.pyi` extension and only contain the signatures (with type hints), leaving the main `.py` files annotation free and thus usable on Python 2 (provided you wrote Polyglot Python code otherwise). |
Make ipython notebook print in real time | 29,772,158 | 19 | 2015-04-21T12:24:20Z | 31,153,046 | 12 | 2015-07-01T05:17:37Z | [
"python",
"buffer",
"ipython-notebook"
] | Ipython Notebook doesn't seem to print results in real time, but seems to buffer in a certain way and then bulk output the prints. How can I make ipython print my results as soon as the print command is processed?
**Example code:**
```
import time
def printer():
for i in range(100):
time.sleep(5)
print i
```
Supposing that the above code is in a file that is imported. How could I make it that when I call the printer function it prints a number every 5 seconds and not all the numbers at the very end?
Please note that I cannot edit the function `printer()` because I get it from some external module. I want the to change the configs of ipython notebook somehow so that it doesn't use a buffer. Therefore, I also do not wish to use sys.stdout.flush(), I want to do it in real-time according to the question, I don't want any buffer to start with.
I also tried loading ipython notebook with the command:
```
ipython notebook --cache-size=0
```
but that also doesn't seem to work. | This is merely [one of the answers](http://stackoverflow.com/a/231216/420867) to the question suggested by [Carsten](http://stackoverflow.com/users/1071311/carsten) incorporating the `__getattr__` delegation suggested by [diedthreetimes](http://stackoverflow.com/users/234261/diedthreetimes) in a comment:
```
import sys
oldsysstdout = sys.stdout
class flushfile():
def __init__(self, f):
self.f = f
def __getattr__(self,name):
return object.__getattribute__(self.f, name)
def write(self, x):
self.f.write(x)
self.f.flush()
def flush(self):
self.f.flush()
sys.stdout = flushfile(sys.stdout)
```
In the original answer, the `__getattr__` method is not implemented. Without that, it fails. Other variants in answers to that question also fail in a notebook.
In a notebook, `sys.stdout` is an instance of `IPython.kernel.zmq.iostream.OutStream` and has a number of methods and attributes not present in the usual `sys.stdout`. Delegating `__getattr__` allows a `flushfile` to masquerade as a `...zmq.iostream.OutStream` duck.
This works in a python 2.7 notebook run with ipython 3.1.0 |
Python MySQL connector - unread result found when using fetchone | 29,772,337 | 6 | 2015-04-21T12:32:21Z | 33,632,767 | 12 | 2015-11-10T14:50:14Z | [
"python",
"mysql"
] | I am inserting JSON data into a MySQL database
I am parsing the JSON and then inserting it into a MySQL db using the python connector
Through trial, I can see the error is associated with this piece of code
```
for steps in result['routes'][0]['legs'][0]['steps']:
query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
if steps['travel_mode'] == "pub_tran":
travel_mode = steps['travel_mode']
Orig_lat = steps['var_1']['dep']['lat']
Orig_lng = steps['var_1']['dep']['lng']
Dest_lat = steps['var_1']['arr']['lat']
Dest_lng = steps['var_1']['arr']['lng']
time_stamp = leg['_sent_time_stamp']
if steps['travel_mode'] =="a_pied":
query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
travel_mode = steps['travel_mode']
Orig_lat = steps['var_2']['lat']
Orig_lng = steps['var_2']['lng']
Dest_lat = steps['var_2']['lat']
Dest_lng = steps['var_2']['lng']
time_stamp = leg['_sent_time_stamp']
cursor.execute(query,(travel_mode, Orig_lat, Orig_lng, Dest_lat, Dest_lng, time_stamp))
leg_no = cursor.fetchone()[0]
print(leg_no)
```
I have inserted higher level details and am now searching the database to associate this lower level information with its parent. The only way to find this unique value is to search via the origin and destination coordinates with the time\_stamp. I believe the logic is sound and by printing the leg\_no immediately after this section, I can see values which appear at first inspection to be correct
However, when added to the rest of the code, it causes subsequent sections where more data is inserted using the cursor to fail with this error -
```
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
```
The issue seems similar to [MySQL Unread Result with Python](http://stackoverflow.com/questions/15336767/mysql-unread-result-with-python)
Is the query too complex and needs splitting or is there another issue?
If the query is indeed too complex, can anyone advise how best to split this?
EDIT As per @Gord's help, Ive tried to dump any unread results
```
cursor.execute(query,(leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng))
leg_no = cursor.fetchone()[0]
try:
cursor.fetchall()
except mysql.connector.errors.InterfaceError as ie:
if ie.msg == 'No result set to fetch from.':
pass
else:
raise
cursor.execute(query,(leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng, time_stamp))
```
But, I still get
```
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
[Finished in 3.3s with exit code 1]
```
*scratches head*
EDIT 2 - when I print the ie.msg, I get -
```
No result set to fetch from
``` | All that was required was for buffered to be set to true!
```
cursor = cnx.cursor(buffered=True)
``` |
Python matching words with same index in string | 29,776,336 | 8 | 2015-04-21T15:15:06Z | 29,776,365 | 10 | 2015-04-21T15:16:30Z | [
"python",
"string",
"matching"
] | I have two strings of equal length and want to match words that have the same index. I am also attempting to match consecutive matches which is where I am having trouble.
For example I have two strings
```
alligned1 = 'I am going to go to some show'
alligned2 = 'I am not going to go the show'
```
What I am looking for is to get the result:
```
['I am','show']
```
My current code is as follow:
```
keys = []
for x in alligned1.split():
for i in alligned2.split():
if x == i:
keys.append(x)
```
Which gives me:
```
['I','am','show']
```
Any guidance or help would be appreciated. | Finding matching words is fairly simple, but putting them in contiguous groups is fairly tricky. I suggest using `groupby`.
```
import itertools
alligned1 = 'I am going to go to some show'
alligned2 = 'I am not going to go the show'
results = []
word_pairs = zip(alligned1.split(), alligned2.split())
for k, v in itertools.groupby(word_pairs, key = lambda pair: pair[0] == pair[1]):
if k:
words = [pair[0] for pair in v]
results.append(" ".join(words))
print results
```
Result:
```
['I am', 'show']
``` |
Can you "restart" the current iteration of a Python loop? | 29,776,689 | 5 | 2015-04-21T15:30:34Z | 29,776,745 | 7 | 2015-04-21T15:32:57Z | [
"python",
"for-loop",
"iteration"
] | Is there a way to implement something like this:
```
for row in rows:
try:
something
except:
restart iteration
``` | You could put your `try/except` block in another loop and then break when it succeeds:
```
for row in rows:
while True:
try:
something
break
except Exception: # Try to catch something more specific
pass
``` |
How to check if python unit test started in PyCharm or not? | 29,777,737 | 3 | 2015-04-21T16:13:08Z | 29,782,618 | 8 | 2015-04-21T20:35:26Z | [
"python",
"pycharm",
"python-unittest"
] | Is there a way to check in a python unit test (or any other script) if it is executed inside the PyCharm IDE or not?
I would like to do some special things in a unit test when it started locally, things I would not like to do when the whole thing is execute on the build server.
Cheers | When running under PyCharm, the `PYCHARM_HOSTED` environment variable is defined.
```
isRunningInPyCharm = "PYCHARM_HOSTED" in os.environ
``` |
pip install reportlab error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 29,778,715 | 2 | 2015-04-21T17:02:54Z | 29,779,291 | 7 | 2015-04-21T17:32:46Z | [
"python",
"linux",
"django",
"ubuntu",
"reportlab"
] | use ubuntu virtualenv. i tray to install reportlab
the command is
pip install reportlab
in this directory
(company2)stefano@stefano-X550EP:~/htdocs/company2$
the error is
error: command 'x86\_64-linux-gnu-gcc' failed with exit status 1
actually the pip list is:
argparse (1.2.1)
Django (1.7.7)
html5lib (0.999)
pip (1.5.4)
pisa (3.0.33)
PyPDF2 (1.24)
setuptools (2.2)
wsgiref (0.1.2)
xhtml2pdf (0.0.6)
i need reportlab to use xhtml2pdf because now the django project give me this error:
No module named reportlab.lib.colors | without your full error log, it is impossible to tell. But I bet you are just missing python-dev.
try installing it:
```
$ sudo apt-get install python-dev
```
then pip install reportlab again.
hope that helps.
see: [installing Reportlab (error: command 'gcc' failed with exit status 1 )](https://stackoverflow.com/questions/7325305/installing-reportlab-error-command-gcc-failed-with-exit-status-1) |
How do I print in the middle of the screen? | 29,780,053 | 3 | 2015-04-21T18:14:11Z | 29,780,173 | 7 | 2015-04-21T18:21:01Z | [
"python"
] | For example,
```
print "hello world"
```
in the middle of screen instead of beginning? Sample output would be like:
```
hello world
``` | Python 3 offers [`shutil.get_terminal_size()`](https://docs.python.org/3/library/shutil.html#shutil.get_terminal_size), and you can use [`str.center`](https://docs.python.org/3/library/stdtypes.html#str.center) to center using spaces:
```
import shutil
columns = shutil.get_terminal_size().columns
print("hello world".center(columns))
```
~~If youâre not using Python 3, use [`os.get_terminal_size()`](https://docs.python.org/3/library/os.html#os.get_terminal_size) instead.~~
As @br1ckb0t mentions, this isnât available conveniently in Python 2. Rather than using a less convenient way, though, Iâd suggest switching to Python 3 instead. |
In python, what exactly is going on in the background such that "x = 1j" works, but "x = 1*j" throws an error? | 29,781,498 | 4 | 2015-04-21T19:32:42Z | 29,781,545 | 12 | 2015-04-21T19:35:25Z | [
"python",
"built-in"
] | Specifically, if I wanted to define an object, say z, such that
```
x = 1z
```
worked but
```
x = 1*z
```
~~failed~~ threw an error, how would I define such an object?
I don't think it involves overloading the multiplying operator. | `1j`, works because it's a [literal for a Complex Number](https://docs.python.org/2/reference/lexical_analysis.html#imaginary-literals) (you mentioned `1j` in your question title). Kind of like `[]` is a literal for a list.
Here's the relevant excerpt from the Python docs / spec:
> Imaginary literals are described by the following lexical definitions:
>
> `imagnumber ::= (floatnumber | intpart) ("j" | "J")`
>
> An imaginary
> literal yields a complex number with a real part of 0.0. Complex
> numbers are represented as a pair of floating point numbers and have
> the same restrictions on their range. To create a complex number with
> a nonzero real part, add a floating point number to it, e.g., `(3+4j)`.
In other words, `1j` is a special case, and there's nothing you can do to make `1z` work like `1j` does. `1z` is a `SyntaxError`, and that's it (as far as Python is concerned, that is). |
Combine Pandas data frame column values into new column | 29,782,898 | 3 | 2015-04-21T20:52:15Z | 29,783,112 | 8 | 2015-04-21T21:05:58Z | [
"python",
"pandas",
"dataframe"
] | I'm working with Pandas and I have a data frame where we can have one of three values populated:
```
ID_1 ID_2 ID_3
abc NaN NaN
NaN def NaN
NaN NaN ghi
NaN NaN jkl
NaN mno NaN
pqr NaN NaN
```
And my goal is to combine these three columns into a new columns in my data frame:
```
ID_1 ID_2 ID_3 Combined_ID
abc NaN NaN abc
NaN def NaN def
NaN NaN ghi ghi
NaN NaN jkl jkl
NaN mno NaN mno
pqr NaN NaN pqr
```
Ideally it would just find whatever not null value exists in columns 1 through 3, but I could also concatenate since we should only have one of the three populated for each row. Thanks.
```
df_note = pd.read_csv("NoteIds.csv")
df_note['Combined_ID'] = # ID_1 + ID_2 + ID_3
``` | You can use the property that summing will concatenate the string values, so you could call [`fillna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html#pandas.DataFrame.fillna) and pass an empty str and the call [`sum`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html#pandas.DataFrame.sum) and pass param `axis=1` to sum row-wise:
```
In [26]:
df['Combined_ID'] = df.fillna('').sum(axis=1)
df
Out[26]:
ID_1 ID_2 ID_3 Combined_ID
0 abc NaN NaN abc
1 NaN def NaN def
2 NaN NaN ghi ghi
3 NaN NaN jkl jkl
4 NaN mno NaN mno
5 pqr NaN NaN pqr
```
If you're only interested in those 3 columns you can just select them:
```
In [39]:
df['Combined_ID'] = df[['ID_1','ID_2','ID_3']].fillna('').sum(axis=1)
df
Out[39]:
ID_1 ID_2 ID_3 Combined_ID
0 abc NaN NaN abc
1 NaN def NaN def
2 NaN NaN ghi ghi
3 NaN NaN jkl jkl
4 NaN mno NaN mno
5 pqr NaN NaN pqr
``` |
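An alternative worth noting (my addition, not from the original answer, and assuming a reasonably recent pandas) is to back-fill along the columns and take the first column, which picks the first non-null value per row without concatenating:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID_1': ['abc', np.nan, np.nan],
    'ID_2': [np.nan, 'def', np.nan],
    'ID_3': [np.nan, np.nan, 'ghi'],
})

# bfill(axis=1) shifts the first non-null value in each row into ID_1,
# so taking the first column yields the combined ID
df['Combined_ID'] = df[['ID_1', 'ID_2', 'ID_3']].bfill(axis=1).iloc[:, 0]
print(df['Combined_ID'].tolist())  # -> ['abc', 'def', 'ghi']
```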
Getting "weak reference" error in cryptography-0.8.2 python install | 29,784,651 | 4 | 2015-04-21T22:57:35Z | 29,784,875 | 7 | 2015-04-21T23:18:05Z | [
"python"
] | On our linux redhat RHEL 6 cluster, I downloaded cryptography-0.8.2.tar.gz and then ran
```
python setup.py install --user
```
in the cryptography-0.8.2 directory. I am getting the following error. Can anyone help me solve it? I'm not up to speed on weak references, just trying to install the cryptography module. Thanks much!
```
rcook@rzgpu2 (cryptography-0.8.2 ): python setup.py install --user
running install
Traceback (most recent call last):
File "setup.py", line 342, in <module>
**keywords_with_side_effects(sys.argv)
File "/usr/apps/python/lib/python2.7/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/apps/python/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/apps/python/lib/python2.7/distutils/dist.py", line 971, in run_command
cmd_obj.ensure_finalized()
File "/usr/apps/python/lib/python2.7/distutils/cmd.py", line 109, in ensure_finalized
self.finalize_options()
File "setup.py", line 119, in finalize_options
self.distribution.ext_modules = get_ext_modules()
File "setup.py", line 78, in get_ext_modules
from cryptography.hazmat.bindings.commoncrypto.binding import (
File "src/cryptography/hazmat/bindings/commoncrypto/binding.py", line 14, in <module>
class Binding(object):
File "src/cryptography/hazmat/bindings/commoncrypto/binding.py", line 36, in Binding
"-framework", "Security", "-framework", "CoreFoundation"
File "src/cryptography/hazmat/bindings/utils.py", line 97, in build_ffi_for_binding
extra_link_args=extra_link_args,
File "src/cryptography/hazmat/bindings/utils.py", line 106, in build_ffi
ffi.cdef(cdef_source)
File "/g/g0/rcook/.local/lib/python2.7/site-packages/cffi/api.py", line 106, in cdef
self._parser.parse(csource, override=override, packed=packed)
File "/g/g0/rcook/.local/lib/python2.7/site-packages/cffi/cparser.py", line 165, in parse
self._internal_parse(csource)
File "/g/g0/rcook/.local/lib/python2.7/site-packages/cffi/cparser.py", line 199, in _internal_parse
realtype = self._get_type(decl.type, name=decl.name)
File "/g/g0/rcook/.local/lib/python2.7/site-packages/cffi/cparser.py", line 360, in _get_type
return self._get_struct_union_enum_type('struct', type, name)
File "/g/g0/rcook/.local/lib/python2.7/site-packages/cffi/cparser.py", line 434, in _get_struct_union_enum_type
return self._structnode2type[type]
File "/usr/apps/python/lib/python2.7/weakref.py", line 256, in __getitem__
return self.data[ref(key)]
TypeError: cannot create weak reference to 'Struct' object
``` | The problem is actually in the pycparser module.
As per <https://bugs.launchpad.net/openstack-gate/+bug/1446882> do the following:
```
pip uninstall pycparser && pip install -Iv pycparser==2.10
``` |
Setting GLOG_minloglevel=1 to prevent output in shell from Caffe | 29,788,075 | 10 | 2015-04-22T04:56:59Z | 29,788,785 | 14 | 2015-04-22T05:49:51Z | [
"python",
"deep-learning",
"caffe",
"glog"
] | I'm using Caffe, which is printing a lot of output to the shell when loading the neural net.
I'd like to suppress that output, which supposedly can be done by setting `GLOG_minloglevel=1` when running the Python script. I've tried doing that using the following code, but I still get all the output from loading the net. How do I suppress the output correctly?
```
os.environ["GLOG_minloglevel"] = "1"
net = caffe.Net(model_file, pretrained, caffe.TEST)
os.environ["GLOG_minloglevel"] = "0"
To suppress the output you need to **increase** the loglevel to at least 2
```
os.environ['GLOG_minloglevel'] = '2'
```
The levels are:
- 0 - debug
- 1 - info (still a LOT of output)
- 2 - warnings
- 3 - errors
---
**Update:**
Since this flag is *global* to `caffe`, it must be set *prior* to importing of `caffe` package (as pointed by [jbum](http://stackoverflow.com/a/31350273/1714410)). Once the flag is set and `caffe` is imported the behavior of the GLOG tool cannot be changed. |
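Putting both points together, a minimal script layout looks like this (the `caffe` import is left commented out since it is only illustrative and requires Caffe to be installed):

```python
import os

# glog reads GLOG_minloglevel once, when caffe first loads, so the
# variable must be set before the import happens anywhere in the process.
os.environ['GLOG_minloglevel'] = '2'  # 2 = warnings and errors only

# import caffe  # hypothetical: must come only after the variable is set
```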
Setting GLOG_minloglevel=1 to prevent output in shell from Caffe | 29,788,075 | 10 | 2015-04-22T04:56:59Z | 31,350,273 | 11 | 2015-07-10T21:06:25Z | [
"python",
"deep-learning",
"caffe",
"glog"
] | I'm using Caffe, which is printing a lot of output to the shell when loading the neural net.
I'd like to suppress that output, which supposedly can be done by setting `GLOG_minloglevel=1` when running the Python script. I've tried doing that using the following code, but I still get all the output from loading the net. How do I suppress the output correctly?
```
os.environ["GLOG_minloglevel"] = "1"
net = caffe.Net(model_file, pretrained, caffe.TEST)
os.environ["GLOG_minloglevel"] = "0"
``` | I was able to get [Shai's solution](http://stackoverflow.com/a/29788785/1714410) to work, but only by executing that line in Python *before* calling
```
import caffe
``` |
Using "if" as argument identifier | 29,790,344 | 2 | 2015-04-22T07:16:33Z | 29,790,493 | 7 | 2015-04-22T07:22:55Z | [
"python",
"lxml"
] | I want to generate the following xml file:
```
<foo if="bar"/>
```
I've tried this:
```
from lxml import etree
etree.Element("foo", if="bar")
```
But I got this error:
```
page = etree.Element("configuration", if="ok")
^
SyntaxError: invalid syntax
```
Any ideas?
I'm using python 2.7.9 and lxml 3.4.2 | ```
etree.Element("foo", {"if": "bar"})
```
The attributes can be passed in as a dict:
```
from lxml import etree
root = etree.Element("foo", {"if": "bar"})
print etree.tostring(root, pretty_print=True)
```
output
```
<foo if="bar"/>
``` |
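The same result can be reached by creating the element first and calling `.set()`, which sidesteps the keyword clash entirely. The sketch below uses the standard library's `xml.etree.ElementTree`, which shares this part of the API with lxml:

```python
from xml.etree import ElementTree as etree

root = etree.Element("foo")
root.set("if", "bar")  # attribute names are plain strings here, so "if" is fine
print(etree.tostring(root))
```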
Given 2 int values, return True if one is negative and other is positive | 29,790,594 | 25 | 2015-04-22T07:27:00Z | 29,790,699 | 8 | 2015-04-22T07:32:26Z | [
"python",
"python-3.x",
"return",
"logical-operators"
] | ```
def logical_xor(a, b): # for example, -1 and 1
print (a < 0) # evaluates to True
print (b < 0) # evaluates to False
print (a < 0 != b < 0) # EVALUATES TO FALSE! why??? it's True != False
return (a < 0 != b < 0) # returns False when it should return True
print ( logical_xor(-1, 1) ) # returns FALSE!
# now for clarification
print ( True != False) # PRINTS TRUE!
```
Could someone explain what is happening? I'm trying to make a one liner:
```
lambda a, b: (a < 0 != b < 0)
``` | ~~Your code doesn't work as intended because `!=` takes higher [precedence](https://docs.python.org/2/reference/expressions.html#operator-precedence) than `a < 0` and `b < 0`. As itzmeontv suggests in his answer, you can simply decide the precedence yourself by surrounding logical components with parentheses:~~
```
(a < 0) != (b < 0)
```
Your code attempts to evaluate `a < (0 != b) < 0`
**[EDIT]**
As tzaman rightly points out, the operators have the same precedence, but your code is attempting to evaluate `(a < 0) and (0 != b) and (b < 0)`. Surrounding your logical components with parentheses will resolve this:
```
(a < 0) != (b < 0)
```
Operator precedence: <https://docs.python.org/3/reference/expressions.html#operator-precedence>
Comparisons (i.a. chaining): <https://docs.python.org/3/reference/expressions.html#not-in> |
Given 2 int values, return True if one is negative and other is positive | 29,790,594 | 25 | 2015-04-22T07:27:00Z | 29,790,806 | 31 | 2015-04-22T07:37:57Z | [
"python",
"python-3.x",
"return",
"logical-operators"
] | ```
def logical_xor(a, b): # for example, -1 and 1
print (a < 0) # evaluates to True
print (b < 0) # evaluates to False
print (a < 0 != b < 0) # EVALUATES TO FALSE! why??? it's True != False
return (a < 0 != b < 0) # returns False when it should return True
print ( logical_xor(-1, 1) ) # returns FALSE!
# now for clarification
print ( True != False) # PRINTS TRUE!
```
Could someone explain what is happening? I'm trying to make a one liner:
```
lambda a, b: (a < 0 != b < 0)
``` | All comparison operators in Python have the [same precedence.](https://docs.python.org/3/reference/expressions.html#not-in) In addition, Python does chained comparisons. Thus,
```
(a < 0 != b < 0)
```
breaks down as:
```
(a < 0) and (0 != b) and (b < 0)
```
If any one of these is false, the total result of the expression will be `False`.
What you want to do is evaluate each condition separately, like so:
```
(a < 0) != (b < 0)
```
Other variants, from comments:
```
(a < 0) is not (b < 0) # True and False are singletons so identity-comparison works
(a < 0) ^ (b < 0) # bitwise-xor does too, as long as both sides are boolean
(a ^ b < 0) # or you could directly bitwise-xor the integers;
# the sign bit will only be set if your condition holds
# this one fails when you mix ints and floats though
(a * b < 0) # perhaps most straightforward, just multiply them and check the sign
``` |
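A quick sanity check of the main point (added here for illustration, not part of the original answer) — the parenthesized form behaves as intended, while the chained form from the question does not:

```python
def opposite_signs(a, b):
    # evaluate each comparison separately, then compare the booleans
    return (a < 0) != (b < 0)

assert opposite_signs(-1, 1) is True
assert opposite_signs(1, -1) is True
assert opposite_signs(2, 3) is False
assert opposite_signs(-2, -3) is False

# the chained form parses as (a < 0) and (0 != b) and (b < 0),
# which is False for a=-1, b=1 because the last link fails
assert ((-1) < 0 != 1 < 0) is False
```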
pandas - add new column to dataframe from dictionary | 29,794,959 | 9 | 2015-04-22T10:39:21Z | 29,794,993 | 17 | 2015-04-22T10:40:42Z | [
"python",
"pandas"
] | I would like to add a column 'D' to a dataframe like this:
```
U,L
111,en
112,en
112,es
113,es
113,ja
113,zh
114,es
```
based on the following Dictionary:
```
d = {112: 'en', 113: 'es', 114: 'es', 111: 'en'}
```
so that the resulting dataframe appears as:
```
U,L,D
111,en,en
112,en,en
112,es,en
113,es,es
113,ja,es
113,zh,es
114,es,es
```
So far I tried the `pd.join()` method but I can't figured out how it works with Dictionaries. | Call [`map`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html#pandas.Series.map) and pass the dict, this will perform a lookup and return the associated value for that key:
```
In [248]:
d = {112: 'en', 113: 'es', 114: 'es', 111: 'en'}
df['D'] = df['U'].map(d)
df
Out[248]:
U L D
0 111 en en
1 112 en en
2 112 es en
3 113 es es
4 113 ja es
5 113 zh es
6 114 es es
``` |
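One caveat worth noting (my addition, not from the original answer): keys missing from the dict map to `NaN`, which `fillna` can patch with a default of your choosing — the `'unknown'` value below is hypothetical:

```python
import pandas as pd

d = {112: 'en', 113: 'es', 114: 'es', 111: 'en'}
df = pd.DataFrame({'U': [111, 115]})  # 115 is deliberately absent from d

df['D'] = df['U'].map(d)             # unmatched keys become NaN
df['D'] = df['D'].fillna('unknown')  # hypothetical default value
print(df['D'].tolist())  # -> ['en', 'unknown']
```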
How to test if an Enum member with a certain name exists? | 29,795,488 | 4 | 2015-04-22T11:02:25Z | 29,795,561 | 7 | 2015-04-22T11:05:39Z | [
"python",
"python-3.x",
"enums"
] | Using Python 3.4 I want to test whether an Enum class contains a member with a certain name.
Example:
```
class Constants(Enum):
One = 1
Two = 2
Three = 3
print(Constants['One'])
print(Constants['Four'])
```
gives:
```
Constants.One
File "C:\Python34\lib\enum.py", line 258, in __getitem__
return cls._member_map_[name]
KeyError: 'Four'
```
I could catch the `KeyError` and take the exception as indication of existence but maybe there is a more elegant way? | You could use `Enum.__members__` - [*an ordered dictionary mapping names to members*](https://docs.python.org/3/library/enum.html#iteration):
```
In [12]: 'One' in Constants.__members__
Out[12]: True
In [13]: 'Four' in Constants.__members__
Out[13]: False
``` |
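Wrapped up as a small reusable helper (a sketch, reusing the `Constants` class from the question; the `has_member` name is made up here):

```python
from enum import Enum

class Constants(Enum):
    One = 1
    Two = 2
    Three = 3

def has_member(enum_cls, name):
    # __members__ maps member names to members, so a plain `in` test works
    return name in enum_cls.__members__

print(has_member(Constants, 'One'))   # -> True
print(has_member(Constants, 'Four'))  # -> False
```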
If I compare two strings in python I get false even if they are the same | 29,798,009 | 3 | 2015-04-22T12:45:07Z | 29,798,136 | 9 | 2015-04-22T12:50:21Z | [
"python",
"if-statement"
] | I am trying to compare two strings, one downloaded, one from a file, but the if-statement returns always false, even if the strings are equal.
Am I doing something wrong?
Is this a bug in Python?
Code:
```
#!/usr/bin/python
import json
import urllib2
jsonstring = urllib2.urlopen("https://xkcd.com/info.0.json").read()
j = json.loads(jsonstring)
current_xkcd = j['num']
print current_xkcd
with open ("xkcd.num", "r") as file:
downloaded_xkcd = file.read().replace('\n', '')
print downloaded_xkcd
if current_xkcd == downloaded_xkcd:
print "Already got latest xkcd"
else:
print "Downloading xkcd..."
```
Output:
```
1515
1515
Downloading xkcd...
``` | `json.loads` translates the data to Python types. You're looking at an integer and comparing it to a string.
Instead of just `print current_xkcd`, try `print repr(current_xkcd)` or `print type(current_xkcd)`, and do the same for `downloaded_xkcd`. |
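A minimal illustration of the mismatch and one way to fix it (the literal values mirror the question's output; the network and file reads are simulated):

```python
import json

current_xkcd = json.loads('{"num": 1515}')['num']  # int: json.loads decodes numbers
downloaded_xkcd = "1515\n".replace('\n', '')       # str: file.read() returns text

assert current_xkcd != downloaded_xkcd       # int vs str: the comparison is False
assert current_xkcd == int(downloaded_xkcd)  # convert one side before comparing
print(type(current_xkcd), type(downloaded_xkcd))
```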
When should I use a custom Manager versus a custom QuerySet in Django? | 29,798,125 | 7 | 2015-04-22T12:49:50Z | 29,798,508 | 7 | 2015-04-22T13:05:46Z | [
"python",
"django",
"orm",
"django-managers"
] | In Django, custom Managers are a great way to organize reusable query logic. The [docs](https://docs.djangoproject.com/en/dev/topics/db/managers/#custom-managers) state that *there are two reasons you might want to customize a Manager: to add extra Manager methods, and/or to modify the initial QuerySet the Manager returns.*
However, [it goes on to describe](https://docs.djangoproject.com/en/dev/topics/db/managers/#creating-manager-with-queryset-methods) how custom QuerySets can also be created, and that these can be made accessible directly from the data model as manager via the `QuerySet.as_manager()`. *The Manager instance created by QuerySet.as\_manager() will be virtually identical to the PersonManager from the previous example.*
It seems like there is a lot of flexibility in how one could organize their logic between custom Manager and/or custom QuerySets. Can someone explain the principles by which I should decide when to use one versus the other? | Mainly to allow for easy composition of queries. Generally if you want to be able perform some operation on an existing queryset in a chain of queryset calls you can use a `QuerySet`.
For example, say you have an `Image` model that has a `width`, `height` fields:
```
class Image(models.Model):
width = ... # Width in pixels
height = ... # Height in pixels
```
you could write some custom `QuerySet` methods:
```
class ImageQuerySet(models.QuerySet):
def landscapes(self):
return self.filter(width__gte=models.F('height'))
def portraits(self):
return self.filter(width__lte=models.F('height'))
def small(self):
return self.filter(width__lte=1200)
def large(self):
return self.filter(width__gte=1200)
class ImageManager(models.Manager):
def get_queryset(self):
return ImageQuerySet(self.model, using=self._db)
```
now you can easily create dynamic querysets:
```
Image.objects.all().portraits().small()
Image.objects.all().large().portraits()
```
Logically, these functions should be concerned primarily with partitioning or redefining existing querysets of the queryset's model. For situations where you aren't operating on existing querysets, don't want to return a queryset at all, or might have to perform some related logic that doesn't involve this particular model, a model manager is better suited. |