title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Delaunay Triangulation of points from 2D surface in 3D with python? | 29,800,749 | 13 | 2015-04-22T14:28:51Z | 29,872,732 | 7 | 2015-04-26T01:39:27Z | [
"python",
"numpy",
"matplotlib",
"data-visualization"
] | I have a collection of 3D points. These points are sampled at constant levels (z=0,1,...,7). An image should make it clear:

These points are in a numpy ndarray of shape `(N, 3)` called `X`. The above plot is created using:
```
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
X = load('points.npy')
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_wireframe(X[:,0], X[:,1], X[:,2])
ax.scatter(X[:,0], X[:,1], X[:,2])
plt.draw()
```
I'd like to instead triangulate only the surface of this object, and plot the surface. I do not want the convex hull of this object, however, because this loses subtle shape information I'd like to be able to inspect.
I have tried `ax.plot_trisurf(X[:,0], X[:,1], X[:,2])`, but this results in the following mess:

Any help?
## Example data
Here's a snippet to generate 3D data that is representative of the problem:
```
import numpy as np
X = []
for i in range(8):
    t = np.linspace(0, 2*np.pi, np.random.randint(30, 50))
    for j in range(t.shape[0]):
        # random circular objects...
        X.append([
            (-0.05*(i-3.5)**2+1)*np.cos(t[j])+0.1*np.random.rand()-0.05,
            (-0.05*(i-3.5)**2+1)*np.sin(t[j])+0.1*np.random.rand()-0.05,
            i
        ])
X = np.array(X)
```
## Example data from original image
Here's a pastebin to the original data:
<http://pastebin.com/YBZhJcsV>
Here are the slices along constant z:
 | I realise that you mentioned in your question that you didn't want to use the convex hull because you might lose some shape information. I have a simple solution that works pretty well for your 'jittered spherical' example data, although it does use [`scipy.spatial.ConvexHull`](http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.spatial.ConvexHull.html). I thought I would share it here anyway, just in case it's useful for others:
```
import matplotlib.pyplot as plt
from matplotlib.tri import Triangulation
from scipy.spatial import ConvexHull
# compute the convex hull of the points
cvx = ConvexHull(X)
x, y, z = X.T
# cvx.simplices contains an (nfacets, 3) array specifying the indices of
# the vertices for each simplical facet
tri = Triangulation(x, y, triangles=cvx.simplices)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.hold(True)
ax.plot_trisurf(tri, z)
ax.plot_wireframe(x, y, z, color='r')
ax.scatter(x, y, z, color='r')
plt.draw()
```

It does pretty well in this case, since your example data ends up lying on a more-or-less convex surface. Perhaps you could make some more challenging example data? A toroidal surface would be a good test case on which the convex hull method would obviously fail.
Mapping an arbitrary 3D surface from a point cloud is a *really* tough problem. Here's a [related question](http://stackoverflow.com/q/4882993/1461210) containing some links that might be helpful. |
Testing for a Palindrome in Python | 29,801,025 | 2 | 2015-04-22T14:40:04Z | 29,801,092 | 7 | 2015-04-22T14:42:27Z | [
"python",
"string",
"palindrome",
"splice"
] | I now know there are better solutions for this, but I'm confused as to why I'm getting the result I am.
```
import sys
def isPalindrome(test):
    if len(test) == 1:
        return("Is a palindrome")
    else:
        if test[0] == test[-1]:
            isPalindrome(test[1:-1])
        else:
            return("Not a palindrome")
print(isPalindrome(sys.argv[1]))
```
On a true palindrome, I get 'None'. When the result is not a palindrome, I get the expected value of 'Not a palindrome'. | Change the recursive call to the following line:
```
return isPalindrome(test[1:-1])
```
You have to return a value or the value returned is `None`. |
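A corrected sketch of the full function (note: the original also hits an `IndexError` on even-length strings like `"abba"`, because the recursion bottoms out at an empty string; checking `len(test) <= 1` covers both base cases):

```python
def isPalindrome(test):
    # Base case: empty or single-character strings are palindromes.
    # (<= 1 also handles even-length inputs, which recurse down to "".)
    if len(test) <= 1:
        return "Is a palindrome"
    if test[0] == test[-1]:
        # Return the recursive result so it propagates back to the caller
        return isPalindrome(test[1:-1])
    return "Not a palindrome"

print(isPalindrome("racecar"))  # Is a palindrome
print(isPalindrome("abba"))     # Is a palindrome
print(isPalindrome("hello"))    # Not a palindrome
```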
Python multiprocessing doesn't seem to use more than one core | 29,802,503 | 33 | 2015-04-22T15:40:20Z | 29,802,608 | 49 | 2015-04-22T15:45:22Z | [
"python",
"multiprocessing"
] | I want to use Python multiprocessing to run grid search for a predictive model.
When I look at core usage, it always seems to be using only one core. Any idea what I'm doing wrong?
```
import multiprocessing
from sklearn import svm
import itertools
#first read some data
#X will be my feature Numpy 2D array
#y will be my 1D Numpy array of labels
#define the grid
C = [0.1, 1]
gamma = [0.0]
params = [C, gamma]
grid = list(itertools.product(*params))
GRID_hx = []
def worker(par, grid_list):
    #define a sklearn model
    clf = svm.SVC(C=g[0], gamma=g[1],probability=True,random_state=SEED)
    #run a cross validation fuction: returns error
    ll = my_cross_validation_function(X, y, model=clf, n=1, test_size=0.2)
    print(par, ll)
    grid_list.append((par, ll))
if __name__ == '__main__':
    manager = multiprocessing.Manager()
    GRID_hx = manager.list()
    jobs = []
    for g in grid:
        p = multiprocessing.Process(target=worker, args=(g,GRID_hx))
        jobs.append(p)
        p.start()
        p.join()
    print("\n-------------------")
    print("SORTED LIST")
    print("-------------------")
    L = sorted(GRID_hx, key=itemgetter(1))
    for l in L[:5]:
        print l
``` | Your problem is that you *join* each job immediately after you started it:
```
for g in grid:
    p = multiprocessing.Process(target=worker, args=(g,GRID_hx))
    jobs.append(p)
    p.start()
    p.join()
```
*join* blocks until the respective process has finished working. This means that your code starts **only one process at once, waits until it is finished and then starts the next one**.
In order for all processes to run in parallel, you need to **first start them all** and then **join them all**:
```
jobs = []
for g in grid:
    p = multiprocessing.Process(target=worker, args=(g,GRID_hx))
    jobs.append(p)
    p.start()
for j in jobs:
    j.join()
```
Documentation: [link](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.join) |
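The start-all-then-join-all pattern can be sketched end to end with a hypothetical `worker` that squares numbers standing in for the cross-validation function (on Windows, the process-spawning code must live under an `if __name__ == '__main__':` guard):

```python
import multiprocessing

def worker(n, results):
    # Hypothetical stand-in for the expensive cross-validation call
    results.append(n * n)

manager = multiprocessing.Manager()
results = manager.list()  # shared list, visible to all workers

# Start ALL processes first...
jobs = [multiprocessing.Process(target=worker, args=(n, results)) for n in range(4)]
for p in jobs:
    p.start()
# ...then join them all, so they actually run concurrently.
for p in jobs:
    p.join()

print(sorted(results))  # [0, 1, 4, 9]
```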
Is there a way to start unit tests which related only to changed code? | 29,803,333 | 10 | 2015-04-22T16:13:40Z | 29,862,370 | 7 | 2015-04-25T07:31:36Z | [
"python",
"unit-testing",
"dependencies",
"code-coverage",
"python-unittest"
] | In my Python project, we have a big number of unit tests (some thousands). Though they are logically distributed between files and classes, I sometimes need a lot of time to find the ones that cover the functionality I'm changing.
Of course, I can run all tests from some specific file/class, but again, because of the big number of tests, it'll be time-consuming to run them continuously (I'm executing unit tests each time after saving a file in my IDE).
So in general I need some solution which will do following activities at a time:
* Tracks which files have been changed since last file saving
* Traces dependencies between code that have been changed in those files and unit tests which cover that code
* Selectively executes only those unit tests which cover the code that has been affected
Does anyone have idea about something similar? | You might like to checkout [pytest-incremental](http://pytest-incremental.readthedocs.org/):
> The idea is to execute your tests faster by executing not all of them but only the “required” ones.
Install via [pypi](https://pypi.python.org/pypi/pytest-incremental):
```
pip install pytest-incremental
```
Usage:
```
$ py.test --inc
```
I think it does what you are looking for, it "looks for imports recursively to find dependencies (using AST)" and runs only the changed tests. |
Why use re.match(), when re.search() can do the same thing? | 29,806,650 | 14 | 2015-04-22T19:03:11Z | 29,806,772 | 11 | 2015-04-22T19:09:26Z | [
"python",
"regex"
] | From the documentation, it's very clear that:
* `match()` -> apply pattern match at the beginning of the string
* `search()` -> search through the string and return first match
And `search` with `'^'` and without `re.M` flag would work the same as `match`.
Then why does python have `match()`? Isn't it redundant?
Are there any performance benefits to keeping `match()` in python? | The `pos` argument behaves differently in important ways:
```
>>> s = "a ab abc abcd"
>>> re.compile('a').match(s, pos=2)
<_sre.SRE_Match object; span=(2, 3), match='a'>
>>> print(re.compile('^a').search(s, pos=2))
None
```
`match` makes it possible to write a tokenizer, and ensure that characters are never skipped. `search` has no way of saying "start from the earliest allowable character".
Example use of match to break up a string with no gaps:
```
def tokenize(s, patt):
    at = 0
    while at < len(s):
        m = patt.match(s, pos=at)
        if not m:
            raise ValueError("Did not expect character at location {}".format(at))
        at = m.end()
        yield m
``` |
Different ways of deleting lists | 29,810,632 | 35 | 2015-04-22T23:14:24Z | 29,810,740 | 7 | 2015-04-22T23:25:14Z | [
"python",
"list",
"memory-management"
] | I want to understand why:
* `a = []`;
* `del a`; and
* `del a[:]`;
behave so differently.
I ran a test for each to illustrate the differences I witnessed:
```
>>> # Test 1: Reset with a = []
...
>>> a = [1,2,3]
>>> b = a
>>> a = []
>>> a
[]
>>> b
[1, 2, 3]
>>>
>>> # Test 2: Reset with del a
...
>>> a = [1,2,3]
>>> b = a
>>> del a
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>> b
[1, 2, 3]
>>>
>>> # Test 3: Reset with del a[:]
...
>>> a = [1,2,3]
>>> b = a
>>> del a[:]
>>> a
[]
>>> b
[]
```
I did find [Clearing Python lists](http://stackoverflow.com/questions/850795/clearing-python-lists), but I didn't find an explanation for the differences in behaviour. Can anyone clarify this? | `Test 1:` rebinds `a` to a new object, while `b` still holds a *reference* to the original object. `a` is just a name; rebinding it to a new object does not change the original object that `b` points to.
`Test 2:` you `del` the name `a`, so it no longer exists, but you still have a reference to the object in memory through `b`.
`Test 3:` `a[:]`, just as when you copy a list or replace all its elements, refers to the objects stored in the list, not to the name `a`. `b` gets cleared as well because it is a reference to the same list, so changes to the contents of `a` affect `b`.
The behaviour is [documented](https://docs.python.org/2/tutorial/datastructures.html#the-del-statement):
> There is a way to remove an item from a list given its index instead
> of its value: the `del` statement. This differs from the `pop()`
> method which returns a value. The `del` statement can also be used to
> remove slices from a list or clear the entire list (which we did
> earlier by assignment of an empty list to the slice). For example:
>
> ```
> >>>
> >>> a = [-1, 1, 66.25, 333, 333, 1234.5]
> >>> del a[0]
> >>> a
> [1, 66.25, 333, 333, 1234.5]
> >>> del a[2:4]
> >>> a
> [1, 66.25, 1234.5]
> >>> del a[:]
> >>> a
> []
> ```
>
> `del` can also be used to delete entire variables:
>
> ```
> >>>
> >>> del a
> ```
>
> Referencing the name `a` hereafter is an error (at least until another
> value is assigned to it). We'll find other uses for `del` later.
So only `del a` actually deletes the name `a`; `a = []` rebinds `a` to a new object, and `del a[:]` clears `a` in place. In your second test, if `b` did not hold a reference to the object, it would be garbage collected. |
Different ways of deleting lists | 29,810,632 | 35 | 2015-04-22T23:14:24Z | 29,810,816 | 23 | 2015-04-22T23:31:48Z | [
"python",
"list",
"memory-management"
] | I want to understand why:
* `a = []`;
* `del a`; and
* `del a[:]`;
behave so differently.
I ran a test for each to illustrate the differences I witnessed:
```
>>> # Test 1: Reset with a = []
...
>>> a = [1,2,3]
>>> b = a
>>> a = []
>>> a
[]
>>> b
[1, 2, 3]
>>>
>>> # Test 2: Reset with del a
...
>>> a = [1,2,3]
>>> b = a
>>> del a
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>> b
[1, 2, 3]
>>>
>>> # Test 3: Reset with del a[:]
...
>>> a = [1,2,3]
>>> b = a
>>> del a[:]
>>> a
[]
>>> b
[]
```
I did find [Clearing Python lists](http://stackoverflow.com/questions/850795/clearing-python-lists), but I didn't find an explanation for the differences in behaviour. Can anyone clarify this? | ## Test 1
```
>>> a = [1,2,3] # set a to point to a list [1, 2, 3]
>>> b = a # set b to what a is currently pointing at
>>> a = [] # now you set a to point to an empty list
# Step 1: A --> [1 2 3]
# Step 2: A --> [1 2 3] <-- B
# Step 3: A --> [ ] [1 2 3] <-- B
# at this point a points to a new empty list
# whereas b points to the original list of a
```
## Test 2
```
>>> a = [1,2,3] # set a to point to a list [1, 2, 3]
>>> b = a # set b to what a is currently pointing at
>>> del a # delete the reference from a to the list
# Step 1: A --> [1 2 3]
# Step 2: A --> [1 2 3] <-- B
# Step 3: [1 2 3] <-- B
# so a no longer exists because the reference
# was destroyed but b is not affected because
# b still points to the original list
```
## Test 3
```
>>> a = [1,2,3] # set a to point to a list [1, 2, 3]
>>> b = a # set b to what a is currently pointing at
>>> del a[:] # delete the contents of the original
# Step 1: A --> [1 2 3]
# Step 2: A --> [1 2 3] <-- B
# Step 2: A --> [ ] <-- B
# both a and b are empty because they were pointing
# to the same list whose elements were just removed
``` |
Different ways of deleting lists | 29,810,632 | 35 | 2015-04-22T23:14:24Z | 29,810,830 | 14 | 2015-04-22T23:32:50Z | [
"python",
"list",
"memory-management"
] | I want to understand why:
* `a = []`;
* `del a`; and
* `del a[:]`;
behave so differently.
I ran a test for each to illustrate the differences I witnessed:
```
>>> # Test 1: Reset with a = []
...
>>> a = [1,2,3]
>>> b = a
>>> a = []
>>> a
[]
>>> b
[1, 2, 3]
>>>
>>> # Test 2: Reset with del a
...
>>> a = [1,2,3]
>>> b = a
>>> del a
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>> b
[1, 2, 3]
>>>
>>> # Test 3: Reset with del a[:]
...
>>> a = [1,2,3]
>>> b = a
>>> del a[:]
>>> a
[]
>>> b
[]
```
I did find [Clearing Python lists](http://stackoverflow.com/questions/850795/clearing-python-lists), but I didn't find an explanation for the differences in behaviour. Can anyone clarify this? | Of your three *"ways of deleting Python lists"*, **only one** actually alters the original list object; the other two only affect *the name*.
1. `a = []` creates a *new list object*, and assigns it to the name `a`.
2. `del a` deletes *the name*, **not** the object it refers to.
3. `del a[:]` deletes *all elements* from the list referenced by the name `a` (although, similarly, it doesn't directly affect the objects that were referenced from the list).
It's probably worth reading [this article](http://nedbatchelder.com/text/names.html) on Python names and values to better understand what's going on here. |
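A minimal sketch verifying all three cases with the `is` operator (identity) alongside `==` (equality):

```python
a = b = [1, 2, 3]
assert a is b          # two names, one list object

a = []                 # case 1: rebinds the name a only
assert b == [1, 2, 3]  # the original object is untouched

a = b = [1, 2, 3]
del a                  # case 2: removes the name a; the object survives via b
assert b == [1, 2, 3]

a = b = [1, 2, 3]
del a[:]               # case 3: empties the shared object in place
assert a is b and b == []
print("all three cases behave as described")
```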
finding needle in haystack, what is a better solution? | 29,810,883 | 14 | 2015-04-22T23:36:57Z | 29,921,752 | 7 | 2015-04-28T14:02:11Z | [
"python",
"dynamic-programming"
] | so given "needle" and "there is a needle in this but not thisneedle haystack"
I wrote
```
def find_needle(n,h):
    count = 0
    words = h.split(" ")
    for word in words:
        if word == n:
            count += 1
    return count
```
This is O(n) but wondering if there is a better approach? maybe not by using split at all?
How would you write tests for this case to check that it handles all edge cases? | I don't think it's possible to get below `O(n)` with this (because you need to iterate through the string at least once). You can do some optimizations, though.
I assume you want to match "*whole words*", for example looking up `foo` should match like this:
```
foo and foo, or foobar and not foo.
^^^     ^^^                    ^^^
```
So splitting just based on spaces wouldn't do the job, because:
```
>>> 'foo and foo, or foobar and not foo.'.split(' ')
['foo', 'and', 'foo,', 'or', 'foobar', 'and', 'not', 'foo.']
#               ^                                     ^
```
This is where the [`re` module](https://docs.python.org/3.2/library/re.html) comes in handy, which allows you to build more sophisticated conditions. For example `\b` inside the regexp means:
> Matches the empty string, but only at the beginning or end of a word. *A word is defined as a sequence of Unicode alphanumeric or underscore characters, so the end of a word is indicated by **whitespace or a non-alphanumeric***, non-underscore Unicode character. Note that formally, `\b` is defined as the boundary between a `\w` and a `\W` character (or vice versa), or between `\w` and the beginning/end of the string. This means that `r'\bfoo\b'` matches `'foo'`, `'foo.'`, `'(foo)'`, `'bar foo baz'` but not `'foobar'` or `'foo3'`.
So `r'\bfoo\b'` will match only *whole word `foo`*. Also don't forget to use [`re.escape()`](https://docs.python.org/3.2/library/re.html#re.escape):
```
>>> re.escape('foo.bar+')
'foo\\.bar\\+'
>>> r'\b{}\b'.format(re.escape('foo.bar+'))
'\\bfoo\\.bar\\+\\b'
```
All you have to do now is use [`re.finditer()`](https://docs.python.org/3.2/library/re.html#re.finditer) to scan the string. Based on documentation:
> Return an iterator yielding match objects over all non-overlapping matches for the RE pattern in string. The string is scanned left-to-right, and matches are returned in the order found. Empty matches are included in the result unless they touch the beginning of another match.
I assume that matches are generated *on the fly*, so they never have to be in memory at once (which may come in handy with **large** strings, with many matched items). And in the end just count them:
```
>>> r = re.compile(r'\bfoo\b')
>>> it = r.finditer('foo and foo, or foobar and not foo.')
>>> sum(1 for _ in it)
3
``` |
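Putting those pieces together, a sketch of a whole-word counter along these lines (the name just mirrors the original `find_needle`):

```python
import re

def find_needle(needle, haystack):
    # \b anchors ensure whole-word matches; re.escape() protects any
    # regex metacharacters that might appear in the needle itself.
    pattern = re.compile(r'\b{}\b'.format(re.escape(needle)))
    # finditer() yields matches lazily, so a large haystack is never
    # materialised into an intermediate list of words.
    return sum(1 for _ in pattern.finditer(haystack))

print(find_needle('needle', 'there is a needle in this but not thisneedle haystack'))  # 1
print(find_needle('foo', 'foo and foo, or foobar and not foo.'))                       # 3
```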
Python for loop - why does this not infinite loop? | 29,812,626 | 5 | 2015-04-23T02:38:04Z | 29,812,639 | 9 | 2015-04-23T02:39:38Z | [
"python",
"for-loop",
"infinite-loop"
] | Consider the following snippet of Python code:
```
x = 14
for k in range(x):
    x += 1
```
At the end of execution, `x` is equal to 28.
My question: shouldn't this code loop forever? At each iteration, it checks if `k` is less than `x`. However, `x` is incremented within the for loop, so it has a higher value for the next comparison. | `range(x)` is not a "command". It creates a range object one time, and the loop iterates over that. Changing x does not change all objects that were made using it.
```
>>> x = 2
>>> k = range(x)
>>> list(k)
[0, 1]
>>> x += 1
>>> list(k)
[0, 1]
``` |
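Running the original snippet confirms this: `range(14)` is built once, before the loop starts, so the body executes exactly 14 times regardless of what happens to `x`:

```python
x = 14
iterations = 0
for k in range(x):   # range(14) is created once, up front
    x += 1
    iterations += 1

print(iterations)  # 14
print(x)           # 28
```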
How to add a title to Seaborn Facet Plot | 29,813,694 | 21 | 2015-04-23T04:28:39Z | 29,814,281 | 35 | 2015-04-23T05:15:49Z | [
"python",
"visualization",
"seaborn"
] | How do I add a title to this Seaborne plot? Let's give it a title 'I AM A TITLE'.
```
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col="sex", row="smoker", margin_titles=True)
g.map(sns.plt.scatter, "total_bill", "tip")
```
 | After those lines:
```
plt.subplots_adjust(top=0.9)
g.fig.suptitle('THIS IS A TITLE, YOU BET') # can also get the figure from plt.gcf()
```
If you add a suptitle without adjusting the axis, the seaborn facet titles overlap it.
(With different data):
 |
How to add a title to Seaborn Facet Plot | 29,813,694 | 21 | 2015-04-23T04:28:39Z | 32,876,200 | 7 | 2015-09-30T21:50:38Z | [
"python",
"visualization",
"seaborn"
] | How do I add a title to this Seaborne plot? Let's give it a title 'I AM A TITLE'.
```
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col="sex", row="smoker", margin_titles=True)
g.map(sns.plt.scatter, "total_bill", "tip")
```
 | In ipython notebook, this worked for me!
```
sns.plt.title('YOUR TITLE HERE')
``` |
Pandas DataFrame to List of Dictionaries (Dics) | 29,815,129 | 14 | 2015-04-23T06:12:18Z | 29,815,523 | 19 | 2015-04-23T06:36:59Z | [
"python",
"list",
"dictionary",
"pandas",
"dataframe"
] | I have the following DataFrame:
```
customer item1 item2 item3
1 apple milk tomato
2 water orange potato
3 juice mango chips
```
which I want to translate it to list of dictionaries per row
```
rows = [{'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
        {'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
        {'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]
``` | Use `df.T.to_dict().values()`, like below:
```
In [1]: df
Out[1]:
customer item1 item2 item3
0 1 apple milk tomato
1 2 water orange potato
2 3 juice mango chips
In [2]: df.T.to_dict().values()
Out[2]:
[{'customer': 1.0, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 {'customer': 2.0, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 {'customer': 3.0, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]
```
---
As John Galt mentions in [his answer](http://stackoverflow.com/a/29816143/2358206), you should probably instead use `df.to_dict('records')`. It's faster than transposing manually.
```
In [20]: timeit df.T.to_dict().values()
1000 loops, best of 3: 395 µs per loop
In [21]: timeit df.to_dict('records')
10000 loops, best of 3: 53 µs per loop
``` |
Pandas DataFrame to List of Dictionaries (Dics) | 29,815,129 | 14 | 2015-04-23T06:12:18Z | 29,816,143 | 35 | 2015-04-23T07:08:44Z | [
"python",
"list",
"dictionary",
"pandas",
"dataframe"
] | I have the following DataFrame:
```
customer item1 item2 item3
1 apple milk tomato
2 water orange potato
3 juice mango chips
```
which I want to translate it to list of dictionaries per row
```
rows = [{'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
        {'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
        {'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]
``` | Use [`df.to_dict('records')`](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.to_dict.html#pandas.DataFrame.to_dict) -- gives the output without having to transpose externally.
```
In [2]: df.to_dict('records')
Out[2]:
[{'customer': 1L, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 {'customer': 2L, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 {'customer': 3L, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]
``` |
How to run Pip commands from CMD | 29,817,447 | 7 | 2015-04-23T08:12:49Z | 29,817,514 | 11 | 2015-04-23T08:15:59Z | [
"python",
"cmd",
"pip"
] | As I understand, Python 2.7.9 comes with Pip installed, however when I try to execute a Pip command from CMD (Windows) I get the following error:
> 'pip' is not recognized as an internal or external command, operable program or batch file.
When I type `python` I do get the following, which suggests it has been installed correctly:
> Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
>
> Type "help", "copyright", "credits" or "license" for more information.
I did however need to add some environmental variables to get the python part working on CMD:
* Add to the environment variable `PATH`: `"C:\Python27\"`
* Define the system variable `PYTHONPATH`: `"C:\Python27\"`
I cannot find a Pip folder within the Python directory, however there is a folder called "ensurepip" in `C:\Python27\Lib\`.
Does anybody know how can I get Pip commands to start working in CMD? | To execute Pip, first of all make sure you have it, so type in your CMD:
```
> python
>>> import pip
>>>
```
And it should proceed with no error. Otherwise, if this fails, you can look [**here**](http://stackoverflow.com/questions/4750806/how-do-i-install-pip-on-windows?rq=1) to see how to install it. Now that you are sure you've got Pip, you can run it from CMD like this:
```
> python -m pip <command> <args>
```
Where `<command>` is any Pip command you want to run, and `<args>` are its relative arguments, separated by spaces.
---
For example, to install a package:
```
> python -m pip install <package>
```
Where `<package>` is the name of the package you want to install. |
What should I decorate with @asyncio.coroutine for async operations? | 29,819,151 | 4 | 2015-04-23T09:24:10Z | 29,850,732 | 8 | 2015-04-24T14:53:57Z | [
"python",
"python-3.x",
"asynchronous",
"coroutine",
"python-asyncio"
] | I want to run my code asynchronously. What should I decorate with `@asyncio.coroutine` and what should I call with `yield from` for async operations?
In my case, I have some example code without decorators (a simple IRC-like chat bot):
```
import asyncio
class ChatBot:
    def __init__(self, loop):
        conn = asyncio.open_connection(HOST, PORT, loop=loop)
        self.reader, self.writer = yield from conn
    def send(self, msg):
        self.writer.write(msg)
    def read(self):
        msg = yield from self.reader.readline()
        return msg
    def run(self):
        while True:
            msg = self.read()
            self.parse(msg)
    def parse(self, msg):
        if msg.startswith('PING'):
            self.some_work(msg)
        elif msg.startswith('ERROR'):
            self.some_error()
        else:
            self.server_log(msg)
    def some_work(self, msg):
        # some work. It can call an asynchronous function like I/O or a long operation. It can use self.send().
        pass
    def some_error(self, msg):
        # some work. It can call an asynchronous function like I/O or a long operation. It can use self.send().
        pass
    def server_log(self, msg):
        # some work. It can call an asynchronous function like I/O or a long operation. It can use self.send().
        pass
loop = asyncio.get_event_loop()
bot = ChatBot(loop)
loop.run_until_complete(???)
loop.close()
```
I think `???` is `bot.run()` and `ChatBot.run` must be decorated with `@asyncio.coroutine`. Then, how about other methods? I can't understand when use `@asyncio.coroutine` decorator and call method with `yield from` or `asyncio.async`. (I already read PEP-3156 for understanding asnycio. But I can't understand fully.) | ## When to use the `@asyncio.coroutine` decorator
If you have a function that needs to use `yield from` to call a coroutine, you should decorate it with `asyncio.coroutine`. Also note that coroutines are often (not always) "viral". As soon as you add `yield from` to a function it becomes a coroutine, and additionally any function that *calls* that coroutine usually (though not always) needs to become a coroutine, too.
## When to use `asyncio.async`
Why are coroutines not always viral? Because you actually don't always need to use `yield from` to call a coroutine. You only need to use `yield from` if you want to call a coroutine *and wait for it to finish*. If you just want to kick off a coroutine in the background, you can just do this:
```
asyncio.async(coroutine())
```
This will schedule `coroutine` to run as soon as control returns to the event loop; it won't wait for `coroutine` to finish before moving on to the next line. An ordinary function can use this to schedule a coroutine to run without also having to become a coroutine itself.
You can also use this approach to run multiple coroutines concurrently. So, imagine you have these two coroutines:
```
@asyncio.coroutine
def coro1():
    yield from asyncio.sleep(1)
    print("coro1")

@asyncio.coroutine
def coro2():
    yield from asyncio.sleep(2)
    print("coro2")
```
If you had this:
```
@asyncio.coroutine
def main():
    yield from coro1()
    yield from coro2()
    yield from asyncio.sleep(5)
asyncio.get_event_loop().run_until_complete(main())
```
After 1 second, `"coro1"` would be printed. Then, after two more seconds (so three seconds total), `"coro2"` would be printed, and five seconds later the program would exit, making for 8 seconds of total runtime. Alternatively, if you used `asyncio.async`:
```
@asyncio.coroutine
def main():
    asyncio.async(coro1())
    asyncio.async(coro2())
    yield from asyncio.sleep(5)
asyncio.get_event_loop().run_until_complete(main())
```
This will print `"coro1"` after one second, `"coro2"` one second later, and the program would exit 3 seconds later, for a total of 5 seconds of runtime.
## How does this affect your code?
So following those rules, your code needs to look like this:
```
import asyncio
class ChatBot:
    def __init__(self, reader, writer):
        # __init__ shouldn't be a coroutine, otherwise you won't be able
        # to instantiate ChatBot properly. So I've removed the code that
        # used yield from, and moved it outside of __init__.
        #conn = asyncio.open_connection(HOST, PORT, loop=loop)
        #self.reader, self.writer = yield from conn
        self.reader, self.writer = reader, writer

    def send(self, msg):
        # writer.write is not a coroutine, so you
        # don't use 'yield from', and send itself doesn't
        # need to be a coroutine.
        self.writer.write(msg)

    @asyncio.coroutine
    def read(self):
        msg = yield from self.reader.readline()
        return msg

    @asyncio.coroutine
    def run(self):
        while True:
            msg = yield from self.read()
            yield from self.parse(msg)

    @asyncio.coroutine
    def parse(self, msg):
        if msg.startswith('PING'):
            yield from self.some_work(msg)
        elif msg.startswith('ERROR'):
            yield from self.some_error()
        else:
            yield from self.server_log(msg)

    @asyncio.coroutine
    def some_work(self, msg):
        # some work. It can call asynchronous function like I/O or long operation. It can use self.send().
        pass

    @asyncio.coroutine
    def some_error(self, msg):
        # some work. It can call asynchronous function like I/O or long operation. It can use self.send().
        pass

    @asyncio.coroutine
    def server_log(self, msg):
        # some work. It can call asynchronous function like I/O or long operation. It can use self.send().
        pass

@asyncio.coroutine
def main():
    reader, writer = yield from asyncio.open_connection(HOST, PORT, loop=loop)
    bot = ChatBot(reader, writer)
    yield from bot.run()

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
```
One other thing to keep in mind - adding `yield from` in front of a function doesn't magically make that call non-blocking. Neither does adding the `@asyncio.coroutine` decorator. Functions are only non-blocking if they're actually directly or indirectly calling native `asyncio` coroutines, which use non-blocking I/O and are integrated with the `asyncio` event loop. You mentioned making REST API calls, for example. In order for those REST API calls to not block the event loop, you'd need to use the `aiohttp` library, or `asyncio.open_connection`. Using something like `requests` or `urllib` will block the loop, because they're not integrated with `asyncio`. |
Can I have anonymous function with "static" variables in Python? | 29,825,365 | 3 | 2015-04-23T13:44:20Z | 29,825,946 | 8 | 2015-04-23T14:07:24Z | [
"python",
"lambda"
] | Can I have anonymous function with "static" variables in Python?
For example
```
lambda x: re.compile(r'foobar').match(x)
```
is not so great, because it may recompile every time it is called (if `re` runs out of cache - thank you responders for pointing out the cache mechanism).
I can do this without recompiling:
```
def f(line):
    try:
        f.c
    except:
        f.c = re.compile(r'foobar')
    return f.c.match(line)
```
How to do it with a lambda, without recompiling?
And well, I don't want to use a helper function, to use inside the lambda. The whole point of using lambdas is "anonymity". So yes the lambda is anonymous, and self-contained. | The usual trick is to provide a default value for an argument you don't intend to supply.
```
lambda x, regexobject=re.compile(r'foobar'): regexobject.match(x)
```
The default value is evaluated when the `lambda` is defined, not each time it is called.
---
Rather than using the `lambda`, though, I would just define your regular expressions explicitly
```
regex1 = re.compile(r'foobar')
regex2 = re.compile(r'bazquux')
# etc
```
then pass the bound method around where needed. That is, instead of
```
somefunction(lambda x, regexobject=re.compile(r'foobar'): regexobject.match(x))
```
use
```
somefunction(regex1.match)
```
The use case for an anonymous function is one that will only be called once, so there is no sense in binding a name to it. The fact that you are concerned about `re.compile` being called multiple times indicates that the functions will be called several times. |
Python: "Chained definition" of ints vs lists | 29,825,842 | 8 | 2015-04-23T14:03:09Z | 29,825,915 | 13 | 2015-04-23T14:06:20Z | [
"python"
] | I just discovered in the definition of variables in Python. Namely:
```
a = b = 0
a = 1
```
gives me `a=1` and `b=0`, i.e. `a` and `b` behave as two independent variables.
But:
```
a = b = []
a.append(0)
```
gives me `a = [0]` and `b = [0]`, i.e. `a` and `b` are two references to the same object. This is confusing to me: how are these two cases different? Is it because `int`s are primitive types or because lists are just pointers? | `a` and `b` always point to the *same object*. But you cannot alter the integer; it is immutable.
In your first example, you *rebound* `a` to point to another object. You did not do that in the other example, you never assigned another object to `a`.
Instead, you asked the object `a` *references* to alter itself, to add another entry to that object. All other references to that same object (the `list`) will see these changes. That's because `list` objects are mutable.
A proper comparison would be to re-assign `a` to point to a new list object:
```
a = b = []
a = [0]
```
Now you rebound `a`, while `b` still references the first list object.
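The difference can be made visible with `is`, which compares object identity; a minimal sketch:

```python
a = b = []
same_object = a is b        # True: one list object, two names
a.append(0)
after_append = (a, b)       # the mutation is visible through both names

a = b = 0
a = 1                       # rebinding: `a` now names a different object
after_rebind = (a, b)

print(same_object)          # True
print(after_append)         # ([0], [0])
print(after_rebind)         # (1, 0)
```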
Programmatically surrounding a Python input in quotes | 29,826,257 | 2 | 2015-04-23T14:20:34Z | 29,826,354 | 7 | 2015-04-23T14:23:56Z | [
"python",
"string",
"ip"
] | Using the netaddr Python library tutorial (<https://pythonhosted.org/netaddr/tutorial_01.html>) I am creating a program that allows a user to input an IP address that gets added to a list, the only problem being it needs to be converted to an IP object first.
```
ip = input('Enter a valid IP Address/Subnet: ')
ip_list = IPNetwork(ip)
print('You have selected: ', ip_list)
```
When I run the program and enter:
> 192.168.1.1
I get
> ```
> > Traceback (most recent call last):
> File "C:\Python27\IPAddress.py", line 4, in <module>
> ip = input('Enter a valid IP Address/Subnet: ')
> File "<string>", line 1
> 192.168.1.1
> ^ SyntaxError: invalid syntax
> ```
However when i surround the IP in quotes it seems to work
> '192.168.1.1'
I get my desired result
> ('You have selected: ', IPNetwork('192.168.1.1/32'))
My question is: how can I allow the user to enter an IP address without having to surround it in quotes themselves, i.e. it will be done programmatically? | I'm going to guess you're using Python 2. Use `raw_input` instead of `input` and it will work. With `input`, if you enter a number you will get a number type (`int` for integer, `float` for floating point, etc). The IP address confuses things as the parser doesn't understand why more than one decimal point exists.
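For the curious, the failure can be reproduced directly: in Python 2, `input()` is essentially `eval(raw_input())`, so the typed text is parsed as a Python expression. The sketch below (runnable on Python 3 as well) mimics that behaviour with an explicit `eval`:

```python
# `192.168` tokenizes as a float literal, and the remaining `.1.1` leaves
# numeric literals back to back -- a syntax error at parse time.
try:
    eval('192.168.1.1')
    outcome = 'parsed'
except SyntaxError:
    outcome = 'SyntaxError'
print(outcome)                      # SyntaxError

# raw_input (Python 2) and input (Python 3) hand back the raw string instead:
ip = '192.168.1.1'
print('You have selected:', ip)
```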
force object to be `dirty` in sqlalchemy | 29,830,229 | 4 | 2015-04-23T17:14:40Z | 29,831,809 | 7 | 2015-04-23T18:42:06Z | [
"python",
"sqlalchemy"
] | Is there a way to force an object mapped by sqlalchemy to be considered `dirty`? For example, given the context of sqlalchemy's [Object Relational Tutorial](http://docs.sqlalchemy.org/en/latest/orm/tutorial.html) the problem is demonstrated,
```
a=session.query(User).first()
a.__dict__['name']='eh'
session.dirty
```
yielding,
```
IdentitySet([])
```
i am looking for a way to force the user `a` into a dirty state.
This problem arises because the class that is mapped using sqlalchemy takes control of the attribute getter/setter methods, and this prevents sqlalchemy from registering changes. | I came across the same problem recently and it was not obvious.
Objects themselves are not dirty, but their attributes are: as far as I know, SQLAlchemy writes back only the changed attributes, not the whole object.
If you set an attribute using `set_attribute` and it is different from the original attribute data, SQLAlchemy figures out the object is dirty (TODO: I need details on how it does the comparison):
```
from sqlalchemy.orm.attributes import set_attribute
set_attribute(obj, data_field_name, data)
```
If you want to mark the object dirty regardless of the original attribute value, no matter if it has changed or not, use `flag_modified`:
```
from sqlalchemy.orm.attributes import flag_modified
flag_modified(obj, data_field_name)
``` |
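A minimal end-to-end sketch (the model and in-memory SQLite database are made up for illustration; import paths are as of SQLAlchemy 1.4+): writing through `__dict__` bypasses attribute instrumentation, so the session sees nothing until `flag_modified` is called.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import flag_modified

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')          # throwaway in-memory database
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(id=1, name='ed'))
session.commit()

u = session.query(User).first()
u.__dict__['name'] = 'eh'                    # bypasses attribute instrumentation
dirty_before = u in session.dirty            # False: SQLAlchemy saw nothing
flag_modified(u, 'name')
dirty_after = u in session.dirty             # True: attribute flagged as changed
print(dirty_before, dirty_after)
```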
Python using ZIP64 extensions when compressing large files | 29,830,531 | 5 | 2015-04-23T17:32:15Z | 31,546,564 | 7 | 2015-07-21T18:11:04Z | [
"python",
"zip",
"zlib",
"zipfile"
] | I have a script that compresses the output files. The problem is that one of the files is over 4Gigs. How would I convert my script to use ZIP64 extensions instead of the standard zip?
Here is how I am currently zipping:
```
try:
import zlib
compression = zipfile.ZIP_DEFLATED
except:
compression = zipfile.ZIP_STORED
modes = { zipfile.ZIP_DEFLATED: 'deflated',
zipfile.ZIP_STORED: 'stored',
}
compressed_name = 'edw_files_' + datetime.strftime(date(), '%Y%m%d') + '.zip'
print 'creating archive'
zf = zipfile.ZipFile('edw_files_' + datetime.strftime(date(), '%Y%m%d') + '.zip', mode='w')
try:
zf.write(name1, compress_type=compression)
zf.write(name2, compress_type=compression)
zf.write(name3, compress_type=compression)
finally:
print 'closing'
zf.close()
```
Thanks!
Bill | Check out [zipfile-objects](https://docs.python.org/2/library/zipfile.html#zipfile-objects).
You can do this:
```
zf = zipfile.ZipFile('edw_files_' + datetime.strftime(date(), '%Y%m%d') + '.zip', mode='w', allowZip64 = True)
``` |
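A small runnable sketch of the same idea in Python 3 spelling (file names are made up; note that on Python 3.4+ `allowZip64` already defaults to `True`):

```python
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, 'payload.txt')
    with open(src, 'w') as f:
        f.write('hello' * 1000)

    archive = os.path.join(tmp, 'edw_files.zip')
    # allowZip64=True permits ZIP64 extensions, which kick in automatically
    # once a member (or the archive) crosses the 4 GiB / 65535-entry limits
    with zipfile.ZipFile(archive, mode='w',
                         compression=zipfile.ZIP_DEFLATED,
                         allowZip64=True) as zf:
        zf.write(src, arcname='payload.txt')

    with zipfile.ZipFile(archive) as zf:
        names = zf.namelist()

print(names)   # ['payload.txt']
```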
django.db.utils.ProgrammingError: relation already exists | 29,830,928 | 22 | 2015-04-23T17:54:18Z | 32,432,472 | 16 | 2015-09-07T06:37:00Z | [
"python",
"django",
"postgresql",
"ubuntu"
] | I'm trying to set up the tables for a new django project (that is, the tables do NOT already exist in the database); the django version is 1.7 and the db back end is PostgreSQL. The name of the project is crud. Results of migration attempt follow:
`python manage.py makemigrations crud`
```
Migrations for 'crud':
0001_initial.py:
- Create model AddressPoint
- Create model CrudPermission
- Create model CrudUser
- Create model LDAPGroup
- Create model LogEntry
- Add field ldap_groups to cruduser
- Alter unique_together for crudpermission (1 constraint(s))
```
`python manage.py migrate crud`
```
Operations to perform:
Apply all migrations: crud
Running migrations:
Applying crud.0001_initial...Traceback (most recent call last):
File "manage.py", line 18, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/migrate.py", line 161, in handle
executor.migrate(targets, plan, fake=options.get("fake", False))
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/executor.py", line 68, in migrate
self.apply_migration(migration, fake=fake)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/executor.py", line 102, in apply_migration
migration.apply(project_state, schema_editor)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/migration.py", line 108, in apply
operation.database_forwards(self.app_label, schema_editor, project_state, new_state)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/operations/models.py", line 36, in database_forwards
schema_editor.create_model(model)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/schema.py", line 262, in create_model
self.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/schema.py", line 103, in execute
cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 82, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 66, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 66, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "crud_crudpermission" already exists
```
Some highlights from the migration file:
```
dependencies = [
('auth', '0001_initial'),
('contenttypes', '0001_initial'),
]
migrations.CreateModel(
name='CrudPermission',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('_created_by', models.CharField(default=b'', max_length=64, null=True, editable=False, blank=True)),
('_last_updated_by', models.CharField(default=b'', max_length=64, null=True, editable=False, blank=True)),
('_created', models.DateTimeField(null=True, editable=False, blank=True)),
('_last_updated', models.DateTimeField(null=True, editable=False, blank=True)),
('domain', models.CharField(max_length=32, choices=[(b'town', b'Town'), (b'boe', b'BOE'), (b'police', b'Police')])),
('ldap_group', models.CharField(max_length=128, verbose_name=b'LDAP group')),
('can_add', models.BooleanField(default=False, verbose_name=b'add')),
('can_change', models.BooleanField(default=False, verbose_name=b'change')),
('restrict_change_to_own', models.BooleanField(default=False)),
('can_delete', models.BooleanField(default=False, verbose_name=b'delete')),
('restrict_delete_to_own', models.BooleanField(default=False)),
('models', models.ManyToManyField(to='contenttypes.ContentType', null=True, blank=True)),
],
options={
'verbose_name': 'CRUD permission',
},
bases=(models.Model,),
),
migrations.AlterUniqueTogether(
name='crudpermission',
unique_together=set([('ldap_group', 'can_add', 'can_change', 'can_delete', 'domain')]),
)
```
The crud app is not meant to actually do anything, but I use it in another app, so when I try to migrate from that app, I trigger the above problem.
I've found other examples on the web of people with similar issues, but none of their cases seem to apply because
1. The problem affects an entire relation, not just one column
2. I am not using multiple inheritance.
Where should I look next to find the underlying problem? | This works pretty well:
```
./manage.py migrate --fake default
```
Source: <https://github.com/nijel/weblate/issues/587> |
Numpy 1-hot array | 29,831,489 | 15 | 2015-04-23T18:24:54Z | 29,831,596 | 33 | 2015-04-23T18:30:15Z | [
"python",
"numpy",
"machine-learning"
] | Let's say I have a 1d numpy array
```
a=[1,0,3]
```
I would like to encode this as a 2d 1-hot array
```
b=[[0,1,0,0], [1,0,0,0], [0,0,0,1]]
```
Is there a quick way to do this? Quicker than just looping over `a` to set elements of `b`, that is. | Your array `a` defines the columns; you just need to define the rows and then use fancy indexing:
```
>>> a = np.array([1, 0, 3])
>>> b = np.zeros((3, 4))
>>> b[np.arange(3), a] = 1
>>> b
array([[ 0., 1., 0., 0.],
[ 1., 0., 0., 0.],
[ 0., 0., 0., 1.]])
>>>
```
This is just for illustration. You may want to choose a more appropriate `dtype` for `b` such as `np.bool`. |
Numpy 1-hot array | 29,831,489 | 15 | 2015-04-23T18:24:54Z | 37,323,404 | 10 | 2016-05-19T12:35:50Z | [
"python",
"numpy",
"machine-learning"
] | Let's say I have a 1d numpy array
```
a=[1,0,3]
```
I would like to encode this as a 2d 1-hot array
```
b=[[0,1,0,0], [1,0,0,0], [0,0,0,1]]
```
Is there a quick way to do this? Quicker than just looping over a to set elements of b, that is. | ```
>>> values = [1, 0, 3]
>>> n_values = np.max(values) + 1
>>> np.eye(n_values)[values]
array([[ 0., 1., 0., 0.],
[ 1., 0., 0., 0.],
[ 0., 0., 0., 1.]])
``` |
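If floats are not wanted, `np.eye` accepts a `dtype`; a small variation on the same idea:

```python
import numpy as np

values = np.array([1, 0, 3])
n_values = values.max() + 1

# Integer identity matrix, indexed by the labels
one_hot = np.eye(n_values, dtype=int)[values]
print(one_hot)
```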
Manual commit in Django 1.8 | 29,831,976 | 2 | 2015-04-23T18:51:31Z | 29,834,940 | 7 | 2015-04-23T21:48:04Z | [
"python",
"django",
"django-models"
] | How do you implement `@commit_manually` in Django 1.8?
I'm trying to upgrade Django 1.5 code to work with Django 1.8, and for some bizarre reason, the `commit_manually` decorator was removed in Django 1.6 with no direct replacement. My process iterates over thousands of records, so it can't wrap the entire process in a single transaction without running out of memory, but it still needs to group some records in a transaction to improve performance. To do this, I had a method wrapped with `@commit_manually`, which called `transaction.commit()` every N iterations.
I can't tell for sure from the [docs](https://docs.djangoproject.com/en/1.8/topics/db/transactions/#transactions), but this still seems supported. I just have to call `set_autocommit(False)` instead of having a convenient decorator. Is this correct? | Yeah, you've got it. Call `set_autocommit(False)` to start a transaction, then call `commit()` and `set_autocommit(True)` to commit it.
You could wrap this up in your own decorator:
```
def commit_manually(fn):
def _commit_manually(*args, **kwargs):
set_autocommit(False)
res = fn(*args, **kwargs)
commit()
set_autocommit(True)
return res
return _commit_manually
``` |
How do I filter a pandas DataFrame based on value counts? | 29,836,836 | 3 | 2015-04-24T00:48:31Z | 29,836,852 | 8 | 2015-04-24T00:50:54Z | [
"python",
"pandas",
"filtering",
"dataframe"
] | I'm working in Python with a pandas DataFrame of video games, each with a genre. I'm trying to remove any video game with a genre that appears less than some number of times in the DataFrame, but I have no clue how to go about this. I did find [a StackOverflow question](http://stackoverflow.com/questions/6796569/how-to-filter-a-dataframe-based-on-category-counts) that seems to be related, but I can't decipher the solution at all (possibly because I've never heard of R and my memory of functional programming is rusty at best).
Help? | Use [groupby filter](http://pandas.pydata.org/pandas-docs/stable/groupby.html#filtration):
```
In [11]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=['A', 'B'])
In [12]: df
Out[12]:
A B
0 1 2
1 1 4
2 5 6
In [13]: df.groupby("A").filter(lambda x: len(x) > 1)
Out[13]:
A B
0 1 2
1 1 4
```
I recommend reading the [split-combine-section of the docs](http://pandas.pydata.org/pandas-docs/stable/groupby.html#filtration). |
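A vectorized alternative that avoids calling a Python lambda per group is to map each value to its count via `value_counts` (the column and threshold below are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'title': ['g1', 'g2', 'g3', 'g4'],
                   'genre': ['RPG', 'RPG', 'Puzzle', 'RPG']})

# Keep only rows whose genre occurs at least min_count times
min_count = 2
counts = df['genre'].value_counts()
filtered = df[df['genre'].map(counts) >= min_count]

print(filtered)
```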
Optimize the performance of dictionary membership for a list of Keys | 29,839,397 | 19 | 2015-04-24T05:28:26Z | 29,839,421 | 37 | 2015-04-24T05:30:16Z | [
"python",
"performance",
"list",
"python-2.7",
"dictionary"
] | I am trying to write a code which should return true if any element of list is present in a dictionary. Performance of this piece is really important. I know I can just loop over list and break if I find the first search hit. Is there any faster or more Pythonic way for this than given below?
```
for x in someList:
if x in someDict:
return True
return False
```
EDIT: I am using `Python 2.7`. My first preference would be a faster method. | Use of builtin [any](https://docs.python.org/2/library/functions.html#any) can have some performance edge over two loops
```
any(x in someDict for x in someList)
```
but you might need to measure your mileage. If your list and dict remain pretty static and you have to perform the comparison multiple times, you may consider using a set
```
someSet = set(someList)
someDict.viewkeys() & someSet
```
**Note** In Python 3.x, `keys()` returns a set-like view by default rather than a list, so this is straightforward in Python 3.x
```
someSet = set(someList)
someDict.keys() & someSet
```
In both the above cases you can wrap the result with a bool to get a boolean result
```
bool(someDict.keys() & someSet)
```
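Both spellings can be checked quickly (sample data made up); in Python 3, dict key views are set-like, so `isdisjoint` also works directly:

```python
some_list = [3, 7, 9]
some_dict = {1: 'a', 9: 'b'}

# Generator form: short-circuits at the first key found in the dict
gen_result = any(x in some_dict for x in some_list)

# Set form: dict key views support set operations in Python 3
set_result = not some_dict.keys().isdisjoint(some_list)

print(gen_result, set_result)                # True True
print(any(x in some_dict for x in [2, 4]))   # False
```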
**Heretic Note**
My curiosity got the better of me and I timed all the proposed solutions. ***It seems that your original solution is better performance-wise. Here are the results***
Sample Randomly generated Input
```
def test_data_gen():
from random import sample
for i in range(1,5):
n = 10**i
population = set(range(1,100000))
some_list = sample(list(population),n)
population.difference_update(some_list)
        some_dict = dict(zip(sample(list(population), n),
                             sample(range(1, 100000), n)))
yield "Population Size of {}".format(n), (some_list, some_dict), {}
```
## The Test Engine
I rewrote the test part of the answer as it was messy and the answer was receiving a decent amount of attention. I created a timeit comparison Python module and moved it onto [github](https://github.com/itabhijitb/comptimeit/tree/master/comptimeit)
## The Test Result
```
Timeit repeated for 10 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.000011 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.000014 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.000015 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_imap_any |0.000018 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_any |0.000019 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_ifilter_next |0.000022 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.000024 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.000047 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.000071 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_nested |0.000072 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.000073 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_next |0.000076 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.000082 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.000092 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.000170 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.000638 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_not_not |0.000746 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.000746 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_next |0.000752 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_nested |0.000771 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.000838 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.000842 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.000933 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.001702 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 10000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.007195 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.007410 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_any |0.007491 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.007671 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.008385 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.011327 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.011533 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.018313 |some_dict.viewkeys() & set(some_list )
Timeit repeated for 100 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.000098 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.000124 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.000131 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_imap_any |0.000142 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.000151 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.000158 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.000186 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.000496 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.000661 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_not_not |0.000677 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_nested |0.000683 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_next |0.000684 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.000762 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.000854 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.001291 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.005018 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.007585 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_nested |0.007713 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 3|foo_set_ashwin |0.008256 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.008526 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_any |0.009422 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_ifilter_next |0.010259 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 7|foo_imap_any |0.011414 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.019862 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 10000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_imap_any |0.082221 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.083573 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_nested |0.095736 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 4|foo_set_ashwin |0.103427 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.104589 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_ifilter_not_not |0.117974 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.127739 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.208228 |some_dict.viewkeys() & set(some_list )
Timeit repeated for 1000 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.000953 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.001134 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.001213 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_next |0.001340 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.001407 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.001535 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.002252 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.004701 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.006209 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.006411 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.006657 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_nested |0.006727 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.007562 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.008262 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.012260 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.046773 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_not_not |0.071888 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.072150 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_nested |0.073382 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_any |0.075698 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.077367 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.090623 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.093301 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.177051 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 10000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.701317 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.706156 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_any |0.723368 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.746650 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.776704 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.832117 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.881777 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |1.665962 |some_dict.viewkeys() & set(some_list )
Timeit repeated for 10000 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.010581 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.013512 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_imap_any |0.015321 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.017680 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.019334 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.026274 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.030881 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.053605 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.070194 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_not_not |0.078524 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_any |0.079499 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_imap_any |0.087349 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.093970 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.097948 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.130725 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.480841 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.754491 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_not_not |0.756253 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_next |0.771382 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_nested |0.787152 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.818520 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.902947 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |1.001810 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |2.012781 |some_dict.viewkeys() & set(some_list )
=======================================
Test Run for Population Size of 10000
=======================================
|Rank |FunctionName |Result |Description
+------+---------------------+-----------+-----------------------------------------------
| 1|foo_imap_any |10.071469 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+-----------+-----------------------------------------------
| 2|foo_any |11.127034 |any(x in some_dict for x in some_list)
+------+---------------------+-----------+-----------------------------------------------
| 3|foo_set |18.881414 |some_dict.viewkeys() & set(some_list )
+------+---------------------+-----------+-----------------------------------------------
| 4|foo_nested |8.731133 |Original OPs Code
+------+---------------------+-----------+-----------------------------------------------
| 5|foo_ifilter_not_not |9.019190 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+-----------+-----------------------------------------------
| 6|foo_ifilter_next |9.189966 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+-----------+-----------------------------------------------
| 7|foo_set_ashwin |9.363886 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+-----------+-----------------------------------------------
| 8|foo_ifilter_any |9.442759 |any(ifilter(some_dict.__contains__, some_list))
```
And a Graphical Comparison from the above referred module




## Conclusion
Premature optimization is evil. It is evident that none of the solutions performs optimally as the test domain varies: depending on population size and frequency of iteration, the performance of the solutions changes considerably. The results again make the point that in Python one should ensure the code is readable, rather than nifty or micro-optimized for certain cases, because such optimizations may not scale.
**Note** There were some doubts about why the ifilter-based approaches do not always perform better than the rest:
> "In Abhit's answer, he timed the different approaches and found that
> ifilter/next was not the fastest; any idea why this would be the case?
> "
It is a known fact that in Python there is an overhead to calling C functions; if the population size is low but the frequency of iteration is high, that per-call overhead accumulates. As can be seen in the graphs, where the population size is low but the iteration count is high, the performance of the ifilter-based approaches deviates considerably. |
Optimize the performance of dictionary membership for a list of Keys | 29,839,397 | 19 | 2015-04-24T05:28:26Z | 29,839,544 | 11 | 2015-04-24T05:39:01Z | [
"python",
"performance",
"list",
"python-2.7",
"dictionary"
] | I am trying to write code that returns True if any element of a list is present in a dictionary. Performance of this piece is really important. I know I can just loop over the list and break when I find the first hit. Is there any faster or more Pythonic way to do this than the code given below?
```
for x in someList:
    if x in someDict:
        return True
return False
```
EDIT: I am using `Python 2.7`. My first preference would be a faster method. | The cleanest and fastest way is to use [*any()*](https://docs.python.org/2.7/library/functions.html#any) and [*itertools.ifilter()*](https://docs.python.org/2.7/library/itertools.html#itertools.ifilter):
```
any(ifilter(someDict.__contains__, someList))
```
This code uses:
* a bound method, `someDict.__contains__` as the predicate
* the *ifilter()* itertool lazily scans for a true result at C speed
* the *any()* built-in drives the filter to find a single matching occurrence or to return *False* if the iterator is exhausted.
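Under Python 3 (where `itertools.ifilter` became the built-in `filter`), a roughly equivalent sketch looks like this; the helper names are mine. One caveat worth hedging: `any()` over a filter tests the truthiness of the matched *elements*, so a falsy key such as `0` needs the sentinel variant.

```python
def any_key_present(some_dict, some_list):
    # Direct generator form: tests membership itself, so falsy keys
    # (e.g. 0 or "") are detected correctly.
    return any(k in some_dict for k in some_list)

_MISSING = object()

def any_key_present_filter(some_dict, some_list):
    # Python 3 spelling of the ifilter approach. filter() yields the
    # matching elements themselves, so use a sentinel with next() to
    # avoid mistaking a falsy match for "no match".
    return next(filter(some_dict.__contains__, some_list), _MISSING) is not _MISSING
```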
You could also use *itertools.imap()* instead of *ifilter()*. |
how to change the subject for Django error reporting emails? | 29,839,766 | 5 | 2015-04-24T05:56:00Z | 29,840,614 | 8 | 2015-04-24T06:52:19Z | [
"python",
"django"
] | I would like to change the subject line of Django error reporting emails.
Is it possible to change the subject?
Can we modify the subject of Django error reporting emails? | You need to set the `EMAIL_SUBJECT_PREFIX` variable in your Django settings file.
<https://docs.djangoproject.com/en/1.8/ref/settings/#email-subject-prefix>
> `EMAIL_SUBJECT_PREFIX`
>
> Default: `'[Django]'`
>
> Subject-line prefix for email messages sent with `django.core.mail.mail_admins` or `django.core.mail.mail_managers`. You'll probably want to include the trailing space.
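As a sketch, the relevant `settings.py` fragment might look like this (the setting names are real Django settings; the prefix string and address are placeholder values):

```python
# settings.py (fragment)

# Prefix for emails sent via django.core.mail.mail_admins() /
# mail_managers(), which includes the 500-error reports.
EMAIL_SUBJECT_PREFIX = '[MySite] '  # note the trailing space

# Error reports are sent to the addresses in ADMINS (when DEBUG = False).
ADMINS = [('Ops team', 'ops@example.com')]
```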
If you don't like Django error emails and want a better way of handling them, take a look at [Sentry](http://sentry.readthedocs.org/en/latest/). |
Get variable type in bash | 29,840,525 | 4 | 2015-04-24T06:47:03Z | 29,840,856 | 8 | 2015-04-24T07:06:15Z | [
"python",
"bash",
"variables",
"types"
] | In Python I can get variable type by:
```
>>> i = 123
>>> type(i)
<type 'int'>
```
I saw on [this page](http://tldp.org/LDP/abs/html/untyped.html) that there are no variable types in bash. The explanation given is:
> Untyped variables are both a blessing and a curse. They permit more flexibility in scripting and make it easier to grind out lines of code (and give you enough rope to hang yourself!). However, they likewise permit subtle errors to creep in and encourage sloppy programming habits.
But I'm not sure what it means or what the real advantages (and drawbacks) are.
Thanks for your answers! :) | Bash doesn't have types in the same way as Python (although I would say that Python has classes rather than types). Bash variables do have *attributes*, given (mostly) through `declare`, but the range of attributes is fairly small. You can inspect a variable's attributes using `declare -p`; for example, `declare -i` creates an integer:
```
declare -i num
num=42
declare -p num
```
Gives:
```
declare -i num="42"
```
But this is a poor feature compared to Python, or almost any modern language. The problem is that in something like Bash the basic type is a text string, and that's fine if all you need is text strings for things like filenames. But once you start needing to do heavy processing you need other types. Bash doesn't support floating point, for example. You also need *compound* types, like a class describing a file with all the attributes that a file can have.
Bash 4 does have *associative arrays* (`declare -A`), similar to Python dictionaries, which extends functionality considerably.
Even so, most would agree that Object Orientation is pretty much impossible in Bash, although some would argue that it can be done in Korn shell (which has much more powerful features). <http://en.wikipedia.org/wiki/Object-oriented_programming>
What bash has is fine for what it is meant for - simple processing that is quick and easy to get working. But there is a critical mass beyond which using such a language becomes unwieldy, error prone, and slow. That critical mass can be one of scale, i.e. large amount of data, or complexity.
There is no simple cut-off point where you should stop using Bash and switch to Python. Its just that as programs get more complex and larger the case for using Python gets stronger.
I should add that shell scripts rarely get smaller and less complex over time! |
Remove unique values from a list and keep only duplicates | 29,845,051 | 3 | 2015-04-24T10:26:21Z | 29,845,176 | 10 | 2015-04-24T10:32:34Z | [
"python"
] | I'm looking to run over a list of ids and return a list of any ids that occurred more than once. This is what I set up, and it works:
```
singles = list(ids)
duplicates = []
while len(singles) > 0:
    elem = singles.pop()
    if elem in singles:
        duplicates.append(elem)
```
But the ids list is likely to get quite long, and I realistically don't want a while loop predicated on an expensive len call if I can avoid it. (I could go the inelegant route and call len once, then just decrement it every iteration but I'd rather avoid that if I could). | The smart way to do this is to use a data structure that makes it easy and efficient, like [`Counter`](https://docs.python.org/3/library/collections.html#collections.Counter):
```
>>> import random
>>> ids = [random.randrange(100) for _ in range(200)]
>>> from collections import Counter
>>> counts = Counter(ids)
>>> dupids = [id for id in ids if counts[id] > 1]
```
Building the `Counter` takes O(N) time, as opposed to O(N log N) time for sorting, or O(N^2) for counting each element from scratch every time.
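A self-contained variant of the same idea (the function name is my own); it also reports each duplicated id only once, in order of first appearance:

```python
from collections import Counter

def duplicate_ids(ids):
    """Return the ids occurring more than once, each listed once,
    in order of first appearance."""
    counts = Counter(ids)  # single O(N) counting pass
    # Counter preserves insertion order on Python 3.7+.
    return [i for i in counts if counts[i] > 1]
```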
---
As a side note:
> But the ids list is likely to get quite long, and I realistically don't want a while loop predicated on an expensive len call if I can avoid it.
`len` is not expensive. It's constant time, and (at least on built-in types like `list`) it's about as fast as a function call can possibly get in Python short of doing nothing at all.
The part of your code that's expensive is calling `elem in singles` inside the loop: that means for every element, you have to compare it against potentially every other element, meaning quadratic time. |
How to count the number of digits in numbers in different bases? | 29,847,504 | 6 | 2015-04-24T12:24:36Z | 29,847,712 | 8 | 2015-04-24T12:35:56Z | [
"python",
"c",
"algorithm",
"base"
] | I'm working with numbers in different bases (base-10, base-8, base-16, etc). I'm trying to count the number of characters in each number.
Example
> Number: `ABCDEF`
>
> Number of digits: *6*
I know about the method based on logarithms but I'm facing some problems.
1. [This Python script](http://pastebin.com/cE0wVqiB) outputs that it failed to calculate the number of digits correctly in 3,969 numbers out of 1,000,000.
2. I think the method that uses logarithms could be rather slow
Links:
* [This C program](http://www.programiz.com/c-programming/examples/digits-count) must be very slow (what if I have a very great number?). It also can't deal with numbers in different bases (for example, base-16).
* Not a dupe of [this](http://stackoverflow.com/questions/9721042/count-number-of-digits-which-method-is-most-efficient) as there the OP was asking only about base-10
---
**Edit:** Certainly I can calculate the length of a string, but what interests me most is whether it is possible to do the calculation *without conversion to a string*. I'd want to know the algorithm that could help do it knowing just the *source base* and *the base to convert to*.
**Edit2:** *source-base* is *base-10* and *the base to convert to* can be any other base.
---
How can we calculate the number of digits in numbers in different bases?
If I know the number in base-10, how do I calculate the number of digits in the same number converted to base-16 (base-8, etc) *without performing the conversion*?
*Note*: some Python or C code will be greatly appreciated | Logarithms shouldn't really be slow. And you can easily calculate a logarithm to any base with this formula: `logBaseN(x) = logBaseA(x) / logBaseA(N)` - you can use `ln` (base e = 2.718...) or `logBase10` or whatever you have. So you don't really need a program; a formula should do it:
```
num_digits(N, base) = 1 + floor(log(N) / log(base))
```
where `N` is your number and `base` the base you want that number in.
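A sketch of the formula in Python (the function name is mine). One hedge: floating-point logarithms can land just below or above an integer near exact powers of the base (e.g. 1000 in base 10), so the estimate is corrected with exact integer arithmetic:

```python
import math

def num_digits(n, base):
    """Digits of the positive integer n written in `base`, without
    actually converting n to that base."""
    if n <= 0 or base < 2:
        raise ValueError("need n >= 1 and base >= 2")
    d = 1 + math.floor(math.log(n) / math.log(base))
    # Floating-point rounding can be off by one near exact powers of
    # the base, so correct the estimate with integer arithmetic.
    while base ** d <= n:
        d += 1
    while d > 1 and base ** (d - 1) > n:
        d -= 1
    return d
```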
For more explanation take a look here:
<http://www.mathpath.org/concepts/Num/numdigits.htm> |
__enter__() takes exactly 3 arguments (1 given) | 29,848,620 | 2 | 2015-04-24T13:20:11Z | 29,848,663 | 10 | 2015-04-24T13:22:20Z | [
"python"
] | I have written a class like this:
```
class FooBar(object):
    # some methods
    # ...
    def __enter__(self, param1, param2):
        # do something here ...
        pass
```
I try to use my class like this (imported from module mymod):
```
with (mymod.FooBar("hello", 123)) as x:
    # do something here with instance of mymod.FooBar called x ...
    pass
```
when the block above gets executed, I get the error:
```
__enter__() takes exactly 3 arguments (1 given)
```
What am I doing wrong? | The `__enter__` method is never given any arguments, so beyond `self` your signature should not include any other parameters.
You should move those arguments to the `__init__` method instead:
```
class FooBar(object):
    def __init__(self, param1, param2):
        # do something here ...
        pass

    def __enter__(self):
        # something else, perhaps return self
        return self
```
Creating an instance of `FooBar()` is a *separate step*. `with` calls `__enter__` on the *result* of your `mymod.FooBar("hello", 123)` expression, the expression itself is not translated to an `__enter__` call.
If it was, you couldn't also use it like this, but you can:
```
cm = mymod.FooBar("hello", 123)
with cm as x:
# do something here with x, which is the return value of cm.__enter__()
```
Note that `x` is assigned whatever `cm.__enter__()` returned; you can return `self` from `__enter__` or you can return something entirely different.
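Putting it together, a runnable sketch of the corrected class (the attribute names and return values are illustrative choices, not requirements):

```python
class FooBar(object):
    def __init__(self, param1, param2):
        # Constructor arguments arrive here, not in __enter__.
        self.param1 = param1
        self.param2 = param2

    def __enter__(self):
        # Called with no extra arguments; returning self binds the
        # instance to the name after `as`.
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Returning a falsy value lets any exception propagate.
        return False


with FooBar("hello", 123) as x:
    result = (x.param1, x.param2)
```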
The expected methods `__enter__` and `__exit__` are documented in the [*With Statement Context Managers* section](https://docs.python.org/2/reference/datamodel.html#with-statement-context-managers) of the Python Data model documentation:
> `object.__enter__(self)`
>
> Enter the runtime context related to this object. The `with` statement will bind this methodâs return value to the target(s) specified in the `as` clause of the statement, if any.
as well as in the [*Context Manager Types* section](https://docs.python.org/2/library/stdtypes.html#typecontextmanager) of the Built-In Types documentation:
> `contextmanager.__enter__()`
>
> Enter the runtime context and return either this object or another object related to the runtime context. The value returned by this method is bound to the identifier in the `as` clause of `with` statements using this context manager.
>
> An example of a context manager that returns itself is a file object. File objects return themselves from `__enter__()` to allow `open()` to be used as the context expression in a `with` statement.
If you are interested in the exact interactions, see the original proposal: [PEP 343 -- The "with" Statement](https://www.python.org/dev/peps/pep-0343/); from the specification section you can see what `with EXPR as VAR: BLOCK` statement does under the hood:
```
mgr = (EXPR)
exit = type(mgr).__exit__  # Not calling it yet
value = type(mgr).__enter__(mgr)
exc = True
try:
    try:
        VAR = value  # Only if "as VAR" is present
        BLOCK
    except:
        # The exceptional case is handled here
        exc = False
        if not exit(mgr, *sys.exc_info()):
            raise
        # The exception is swallowed if exit() returns true
finally:
    # The normal and non-local-goto cases are handled here
    if exc:
        exit(mgr, None, None, None)
```
Note the `mgr = (EXPR)` part; in your case, `mymod.FooBar("hello", 123)` is that part. Also note that `(EXPR)`, `__enter__` and `__exit__` are not 'protected' by the `try..except` here, exceptions raised in the expression or when entering or exiting are *not* handled by the context manager! |
Error Installing any module using pip, but easy_install works | 29,849,892 | 3 | 2015-04-24T14:18:38Z | 29,853,308 | 7 | 2015-04-24T16:58:30Z | [
"python",
"sockets",
"pyopenssl"
] | I get this error whenever I try to install any module using pip, but easy\_install works perfectly. I have no proxies configured on my Ubuntu 12.04 machine.
Previously it was working fine; I just don't know how it suddenly stopped working.
This is error i get, while running `sudo pip install <any_package_name>`:
```
Exception:
Traceback (most recent call last):
File "/tmp/tmpOA61D3/pip.zip/pip/basecommand.py", line 246, in main
status = self.run(options, args)
File "/tmp/tmpOA61D3/pip.zip/pip/commands/install.py", line 342, in run
requirement_set.prepare_files(finder)
File "/tmp/tmpOA61D3/pip.zip/pip/req/req_set.py", line 345, in prepare_files
functools.partial(self._prepare_file, finder))
File "/tmp/tmpOA61D3/pip.zip/pip/req/req_set.py", line 290, in _walk_req_to_install
more_reqs = handler(req_to_install)
File "/tmp/tmpOA61D3/pip.zip/pip/req/req_set.py", line 415, in _prepare_file
req_to_install, finder)
File "/tmp/tmpOA61D3/pip.zip/pip/req/req_set.py", line 376, in _check_skip_installed
finder.find_requirement(req_to_install, self.upgrade)
File "/tmp/tmpOA61D3/pip.zip/pip/index.py", line 425, in find_requirement
all_versions = self._find_all_versions(req.name)
File "/tmp/tmpOA61D3/pip.zip/pip/index.py", line 349, in _find_all_versions
index_locations = self._get_index_urls_locations(project_name)
File "/tmp/tmpOA61D3/pip.zip/pip/index.py", line 323, in _get_index_urls_locations
page = self._get_page(main_index_url)
File "/tmp/tmpOA61D3/pip.zip/pip/index.py", line 789, in _get_page
return HTMLPage.get_page(link, session=self.session)
File "/tmp/tmpOA61D3/pip.zip/pip/index.py", line 878, in get_page
"Cache-Control": "max-age=600",
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/sessions.py", line 476, in get
return self.request('GET', url, **kwargs)
File "/tmp/tmpOA61D3/pip.zip/pip/download.py", line 367, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/sessions.py", line 464, in request
resp = self.send(prep, **send_kwargs)
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/cachecontrol/adapter.py", line 46, in send
resp = super(CacheControlAdapter, self).send(request, **kw)
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/adapters.py", line 370, in send
timeout=timeout
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 341, in _make_request
self._validate_conn(conn)
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 762, in _validate_conn
conn.connect()
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/packages/urllib3/connection.py", line 238, in connect
ssl_version=resolved_ssl_version)
File "/tmp/tmpOA61D3/pip.zip/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py", line 296, in ssl_wrap_socket
cnx.set_tlsext_host_name(server_hostname)
AttributeError: '_socketobject' object has no attribute 'set_tlsext_host_name'
``` | I found a potential solution [here](https://github.com/passslot/passslot-python-sdk/issues/1). Here's the relevant quote:
"That happend because Ubuntu 12.04 (that is my server's OS) has old `pyOpenSSL` library which not accept attribute 'set\_tlsext\_host\_name'.
For fix that, you need to add dependence `pyOpenSSL` >= 0.13.
On Ubuntu for update `pyOpenSSL` use `pip`, you also need to install `libffi-dev` and remove `python-openssl` by `apt`."
```
$ sudo apt-get purge python-openssl
$ sudo apt-get install libffi-dev
$ sudo pip install pyOpenSSL
```
Let me know if this is unclear or if it doesn't work for you. |
Can a Python function return only the second of two values? | 29,850,511 | 7 | 2015-04-24T14:45:05Z | 29,850,591 | 15 | 2015-04-24T14:48:25Z | [
"python",
"matlab",
"function",
"return"
] | I have a Python function that returns multiple values. As an example for this question, consider the function below, which returns two values.
```
def function():
    ...
    return x, y
```
I know this function can return both values `x, y = function()`. But is it possible for this function to only return the second value?
In MATLAB, for example, it would be possible to do something like this: `[~, y] = function()`. I have not found an equivalent approach in Python. | The pythonic idiom is just to ignore the first return value by assigning it to `_`:
```
_, y = function()
``` |
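A tiny runnable sketch of the idiom (the function body is a stand-in): `_` is an ordinary variable name, so this is only a convention signalling "unused"; indexing the returned tuple works just as well:

```python
def function():
    x, y = 12, 34   # stand-in for the real computation
    return x, y

_, y = function()        # keep only the second value
y_alt = function()[1]    # equivalent: index the returned tuple
```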
Combining two lists into a list of lists | 29,853,511 | 4 | 2015-04-24T17:10:29Z | 29,853,537 | 9 | 2015-04-24T17:11:35Z | [
"python",
"list"
] | I have two lists:
```
a = ['1', '2']
b = ['11', '22', '33', '44']
```
And I want to combine them to create a list like the one below:
```
op = [('1', '11'), ('2', '22'), ('', '33'), ('', '44')]
```
How could I achieve this? | You want [itertools.zip\_longest](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) with a `fillvalue` of an empty string:
```
a = ['1', '2']
b = ['11', '22', '33', '44']
from itertools import zip_longest # izip_longest for python2
print(list(zip_longest(a,b, fillvalue="")))
[('1', '11'), ('2', '22'), ('', '33'), ('', '44')]
```
For python2 it is **izip\_longest**:
```
from itertools import izip_longest
print(list(izip_longest(a,b, fillvalue="")))
[('1', '11'), ('2', '22'), ('', '33'), ('', '44')]
```
If you just want to use the values, you can iterate over the `izip_longest` object:
```
for i, j in izip_longest(a, b, fillvalue=""):
    # do whatever
```
Some timings vs using map:
```
In [51]: a = a * 10000
In [52]: b = b * 9000
In [53]: timeit list(izip_longest(a,b,fillvalue=""))
100 loops, best of 3: 1.91 ms per loop
In [54]: timeit [('', i[1]) if i[0] == None else i for i in map(None, a, b)]
100 loops, best of 3: 6.98 ms per loop
```
`map` also creates another list under Python 2, so for large lists or if you have memory restrictions it is best avoided. |
Convert a string to a list of length one | 29,854,130 | 2 | 2015-04-24T17:44:04Z | 29,854,161 | 8 | 2015-04-24T17:45:41Z | [
"python",
"string",
"list",
"split"
] | I created a method that requires a list in order to work properly. However, you can send in a list OR a simple string. I want to turn that string into a list that contains that entire string as an element. For example, if I have:
```
"I am a string"
```
I want to convert that to:
```
["I am a string"]
```
I am able to do it like so:
```
"I am a string".split("!@#$%^&*")
```
Because I will never have that combination of symbols, it will always convert it to a list without removing any characters. However, this doesn't seem like that great of a way to do it. Is there another way? | ```
>>> "abc"
'abc'
>>> ["abc"]
['abc']
>>> abc = "abc"
>>> abc
'abc'
>>> [abc]
['abc']
>>> "I am a string".split("!@#$%^&*") == ["I am a string"]
True
```
Putting the value in square brackets makes a list with one item, just as multiple values makes a list with multiple items. The only container which does not follow this pattern is the tuple, as the round brackets are also used for grouping. In that case, just add a comma after the single item:
```
>>> abc
'abc'
>>> (abc)
'abc'
>>> (abc,)
('abc',)
```
If you want your function to handle list and strings differently under the cover, code your function like:
```
def f(maybe_list):
    if not isinstance(maybe_list, list):
        maybe_list = [maybe_list]
    # carry on, you have a list.
``` |
Error message: "'chromedriver' executable needs to be available in the path" | 29,858,752 | 4 | 2015-04-24T22:46:16Z | 29,858,817 | 9 | 2015-04-24T22:52:25Z | [
"python",
"selenium",
"selenium-chromedriver"
] | I am using selenium with python and have downloaded the chromedriver for my windows computer from this site: <http://chromedriver.storage.googleapis.com/index.html?path=2.15/>
After downloading the zip file, I unpacked it to my downloads folder. Then I put the path to the executable binary (C:\Users\michael\Downloads\chromedriver\_win32) into the Environment Variable "Path".
However, when I run the following code:
```
from selenium import webdriver
driver = webdriver.Chrome()
```
... I keep getting the following error message:
```
WebDriverException: Message: 'chromedriver' executable needs to be available in the path. Please look at http://docs.seleniumhq.org/download/#thirdPartyDrivers and read up at http://code.google.com/p/selenium/wiki/ChromeDriver
```
But - as explained above - the executable is(!) in the path ... what is going on here? | You can test if it actually is in the PATH, if you open a cmd and type in `chromedriver` (assuming your chromedriver executable is still named like this) and hit Enter. If `Starting ChromeDriver 2.15.322448` is appearing, the PATH is set appropriately and there is something else going wrong.
Alternatively you can use a direct path to the chromedriver like this:
```
driver = webdriver.Chrome('/path/to/chromedriver')
```
So in your specific case:
```
driver = webdriver.Chrome("C:/Users/michael/Downloads/chromedriver_win32/chromedriver.exe")
``` |
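As a quick way to diagnose this class of problem from Python itself, the standard library can check whether an executable is visible on the current process's PATH (the helper name is my own wrapper; Selenium performs its own lookup internally):

```python
import shutil

def find_on_path(executable):
    """Return the full path to `executable` if the current process can
    see it on PATH, otherwise None."""
    return shutil.which(executable)

# Example: find_on_path("chromedriver") returns None when the directory
# containing chromedriver.exe is not on PATH for this process.
```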
Do numerical programming languages distinguish between a "largest finite number" and "infinity"? | 29,859,509 | 8 | 2015-04-25T00:11:42Z | 29,869,185 | 7 | 2015-04-25T18:42:58Z | [
"python",
"matlab",
"numpy",
"integer-overflow"
] | **Question motivation:**
In standard numerical languages of which I am aware (e.g. Matlab, Python numpy, etc.), if, for example, you take the exponential of a modestly large number, the output is infinity as the result of numerical overflow. If this is multiplied by 0, you get NaN. Separately, these steps are reasonable enough, but they reveal a logical mistake in the implementation of the math. The first number resulting from overflow is known to be finite and we clearly want the result of a multiplication by 0 with this large finite number to be 0.
Explicitly:
```
>>> import numpy as np
>>> np.exp(709)*0
0.0
>>> np.exp(710)*0
nan
```
I imagine we could here introduce a notion of the "largest finite value" (LFV) which would have the following properties:
* LFV would be the default for numerical overflow that would otherwise
round up to infinity
* LFV < infinity
* any explicit number < LFV (for example, if LEV stands for "largest explicit value", then LEV < LFV; MATLAB detail: realmax < LFV)
* LFV\*0 = 0
On the other hand, infinity should not simply be redefined in the way described for LFV. It does not make sense for 0 \* infinity to equal 0; appropriately, the current standard implementation of infinity yields NaN in this setting. Also, sometimes there arises a need to initialize numbers to infinity, and you'd want the result of any numerical operation, even one that yielded LFV, to be strictly less than the initialized value (this is convenient for some logical statements). I'm sure other situations exist where a proper infinity is necessary -- my point is simply that infinity should **not** simply be redefined to have some of the LFV properties above.
**The Question:**
I want to know if there is any language which uses a scheme like this and if there are any problems with such a scheme. This problem does not arise in proper math since there aren't these numerical limits on the size of numbers, but I think it is a real problem when implementing a consistent mathematics in programming languages. Essentially, by LFV, I think I want a shorthand for the open interval between the largest explicit value and infinity LFV = (LEV,infinity), but maybe this intuition is wrong.
*Update: In the comments, people seem to be objecting a little to the utility of the issue I'm bringing up. My question arises not because many related issues occur, but rather because the same issue frequently arises in many different settings. From talking to people who do data analysis, this is something that often enough contributes to runtime errors when training/fitting models. The question is basically why this isn't handled by numerical languages. From the comments, I'm essentially gathering that the people who write the languages don't see the utility of handling things this way. In my opinion, when certain specific issues occur frequently enough for people using a language, it might make sense to handle those exceptions in a principled way so each user doesn't have to.* | So... I got curious and dug around a little.
As I already mentioned in the comments, a "largest finite value" kind of exists in IEEE 754, if you consider the [exception status flags](https://en.wikipedia.org/wiki/IEEE_floating_point#Exception_handling). A value of infinity with the overflow flag set corresponds to your proposed LFV, with the difference that the flag is only available to be read out after the operation, instead of being stored as part of the value itself. Which means you have to manually check the flag and act if overflow occurs, instead of just having LFV\*0 = 0 built in.
There is quite an interesting [paper](http://grouper.ieee.org/groups/1788/email/pdfmPSi1DgZZf.pdf) on exception handling and its support in programming languages. Quote:
> The IEEE 754 model of setting a flag and returning an infinity or a quiet NaN assumes that the user tests the status frequently (or at least appropriately.) Diagnosing what the original problem was requires the user to check all results for exceptional values, which in turn assumes that they are percolated through all operations, so that erroneous data can be flagged. Given these assumptions, everything should work, but unfortunately they are not very realistic.
The paper also bemoans the poor support for floating point exception handling, especially in C99 and Java (I'm sure most other languages aren't better though). Given that, in spite of this, there are no major efforts to fix it or create a better standard, this seems to me to indicate that IEEE 754 and its support are, in a sense, "good enough" (more on that later).
---
Let me give a solution to your example problem to demonstrate something. I'm using numpy's [`seterr`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html) to make it raise an exception on overflow:
```
import numpy as np
def exp_then_mult_naive(a, b):
err = np.seterr(all='ignore')
x = np.exp(a) * b
np.seterr(**err)
return x
def exp_then_mult_check_zero(a, b):
err = np.seterr(all='ignore', over='raise')
try:
x = np.exp(a)
return x * b
except FloatingPointError:
if b == 0:
return 0
else:
return exp_then_mult_naive(a, b)
finally:
np.seterr(**err)
def exp_then_mult_scaling(a, b):
err = np.seterr(all='ignore', over='raise')
e = np.exp(1)
while abs(b) < 1:
try:
x = np.exp(a) * b
break
except FloatingPointError:
a -= 1
b *= e
else:
x = exp_then_mult_naive(a, b)
np.seterr(**err)
return x
large = np.float_(710)
tiny = np.float_(0.01)
zero = np.float_(0.0)
print('naive: e**710 * 0 = {}'.format(exp_then_mult_naive(large, zero)))
print('check zero: e**710 * 0 = {}'
.format(exp_then_mult_check_zero(large, zero)))
print('check zero: e**710 * 0.01 = {}'
.format(exp_then_mult_check_zero(large, tiny)))
print('scaling: e**710 * 0.01 = {}'.format(exp_then_mult_scaling(large, tiny)))
# output:
# naive: e**710 * 0 = nan
# check zero: e**710 * 0 = 0
# check zero: e**710 * 0.01 = inf
# scaling: e**710 * 0.01 = 2.233994766161711e+306
```
* `exp_then_mult_naive` does what you did: expression that will overflow multiplied by `0` and you get a `nan`.
* `exp_then_mult_check_zero` catches the overflow and returns `0` if the second argument is `0`, otherwise same as the naive version (note that `inf * 0 == nan` while `inf * positive_value == inf`). This is the best you could do if there were a LFV constant.
* `exp_then_mult_scaling` uses information about the problem to get results for inputs the other two couldn't deal with: if `b` is small, we can multiply it by e while decrementing `a` without changing the result. So if `np.exp(a) < np.inf` before `b >= 1`, the result fits. (I know I could check if it fits in one step instead of using the loop, but this was easier to write right now.)
So now you have a situation where a solution that doesn't require an LFV is able to provide correct results for more input pairs than one that does. The only advantage an LFV has here is using fewer lines of code while still giving a correct result in that one particular case.
By the way, I'm not sure about thread safety with `seterr`. So if you're using it in multiple threads with different settings in each thread, test it out before to avoid headache later.
---
Bonus factoid: the [original standard](http://homepages.math.uic.edu/~jan/mcs471/Lec3/ieee754.pdf) actually stipulated that you should be able to register a trap handler that would, on overflow, be given the result of the operation divided by a large number (see section 7.3). That would allow you to carry on the computation, as long as you keep in mind that the value is actually much larger. Although I guess it could become a minefield of WTF in a multithreaded environment, never mind that I didn't really find support for it.
---
To get back to the "good enough" point from above: In my understanding, IEEE 754 was designed as a general purpose format, usable for practically any application. When you say "the same issue frequently arises in many different settings", it is (or at least was) apparently not frequently enough to justify inflating the standard.
Let me quote from the [Wikipedia article](https://en.wikipedia.org/wiki/Floating_point#IEEE_754_design_rationale):
> [...] the more esoteric features of the IEEE 754 standard discussed here, such as extended formats, NaN, infinities, subnormals etc. [...] are designed to give safe robust defaults for numerically unsophisticated programmers, in addition to supporting sophisticated numerical libraries by experts.
Putting aside that, in my opinion, even having NaN as a special value is a bit of a dubious decision, adding an LFV isn't really going to make it easier or safer for the "numerically unsophisticated", and doesn't allow experts to do anything they couldn't already.
I guess the bottom line is that representing rational numbers is hard. IEEE 754 does a pretty good job of making it simple for a lot of applications. If yours isn't one of them, in the end you'll just have to deal with the hard stuff by either
* using a higher precision float, if available (ok, this one's pretty easy),
* carefully selecting the order of execution such that you don't get overflows in the first place,
* adding an offset to all your values if you know they're all going to be very large,
* using an arbitrary-precision representation that can't overflow (unless you run out of memory), or
* something else I can't think of right now. |
Create "The Economist" style graphs from python | 29,859,565 | 7 | 2015-04-25T00:20:28Z | 29,872,466 | 9 | 2015-04-26T00:54:10Z | [
"python",
"matplotlib",
"seaborn"
Using python and matplotlib and a tool like seaborn, I'd like to create a graph like this one from The Economist (because I think the style is great).

It's a time series graph and the key things I'd like to reproduce are the horizontal grid lines with labels matched with a lower horizontal axis with tick marks. The different colored labels at either end of the grid lines would be a bonus, with the corresponding titles (left and right justified). The annotations would be a double bonus.
I tried to make something like it using seaborn, but could not get to the first step. | Not perfect (I've haven't had long to play with it), but to give you an idea of the kind of Matplotlib methods you need to use to customize a plot the way you want, there's some code below.
Note that to fine-tune a plot like this it is hard to keep the content and presentation separate (You may have to manually set tick labels and the like, so it won't work automatically if you change the data). *The Economist*'s graphics people obviously do this because they seem to have got the top left hand tick label wrong (280 should be 260).
```
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.ticker as ticker
from datetime import datetime
# Load in some sample data
bond_yields = np.loadtxt('bond_yields.txt',
converters={0: mdates.strpdate2num('%Y-%m-%d')},
dtype = {'names': ('date', 'bond_yield'),
'formats': (datetime, float)})
bank_deposits = np.loadtxt('bank_deposits.txt',
converters={0: mdates.strpdate2num('%Y-%m-%d')},
dtype = {'names': ('date', 'bank_deposits'),
'formats': (datetime, float)})
# Bond yields line is in dark red, bank deposits line in light blue:
bond_yield_color = (0.424, 0.153, 0.090)
bank_deposits_color = (0.255, 0.627, 0.843)
# Set up a figure, and twin the x-axis so we can have two different y-axes
fig = plt.figure(figsize=(8, 4), frameon=False, facecolor='white')
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
# Make sure the gridlines don't end up on top of the plotted data
ax1.set_axisbelow(True)
ax2.set_axisbelow(True)
# The light gray, horizontal gridlines
ax1.yaxis.grid(True, color='0.65', ls='-', lw=1.5, zorder=0)
# Plot the data
l1, = ax1.plot(bank_deposits['date'], bank_deposits['bank_deposits'],
c=bank_deposits_color, lw=3.5)
l2, = ax2.plot(bond_yields['date'], bond_yields['bond_yield'],
c=bond_yield_color, lw=3.5)
# Set the y-tick ranges: chosen so that ax2 labels will match the ax1 gridlines
ax1.set_yticks(range(120,280,20))
ax2.set_yticks(range(0, 40, 5))
# Turn off spines left, top, bottom and right (do it twice because of the twinning)
ax1.spines['left'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax1.spines['bottom'].set_visible(False)
ax2.spines['bottom'].set_visible(False)
# We do want ticks on the bottom x-axis only
ax1.xaxis.set_ticks_position('bottom')
ax2.xaxis.set_ticks_position('bottom')
# Remove ticks from the y-axes
ax1.tick_params(axis='y', length=0)
ax2.tick_params(axis='y', length=0)
# Set tick-labels for the two y-axes in the appropriate colors
for tick_label in ax1.yaxis.get_ticklabels():
tick_label.set_fontsize(12)
tick_label.set_color(bank_deposits_color)
for tick_label in ax2.yaxis.get_ticklabels():
tick_label.set_fontsize(12)
tick_label.set_color(bond_yield_color)
# Set the x-axis tick marks to two-digit years
ax1.xaxis.set_major_locator(mdates.YearLocator())
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%y'))
# Tweak the x-axis tick label sizes
for tick in ax1.xaxis.get_major_ticks():
tick.label.set_fontsize(12)
tick.label.set_horizontalalignment('center')
# Lengthen the bottom x-ticks and set them to dark gray
ax1.tick_params(direction='in', axis='x', length=7, color='0.1')
# Add the line legends as annotations
ax1.annotate(u'private-sector bank deposits, €bn', xy=(0.09, 0.95),
xycoords='figure fraction', size=12, color=bank_deposits_color,
fontstyle='italic')
ax2.annotate(u'ten-year government bond yield, %', xy=(0.6, 0.95),
xycoords='figure fraction', size=12, color=bond_yield_color,
fontstyle='italic')
# Add an annotation at the date of the first bail-out. relpos=(0,0) ensures
# that the label lines up on the right of a vertical line
first_bailout_date = datetime.strptime('2010-05-02', '%Y-%m-%d')
xpos = mdates.date2num(first_bailout_date)
ax1.annotate(u'FIRST BAIL-OUT', xy=(xpos, 120), xytext=(xpos, 250), color='r',
arrowprops=dict(arrowstyle='-', edgecolor='r', ls='dashed',
relpos=(0,0)), fontsize=9, fontstyle='italic')
fig.savefig('fig.png', facecolor=fig.get_facecolor(), edgecolor='none')
```
 |
Memoization: Making change with coins | 29,862,046 | 2 | 2015-04-25T06:52:39Z | 29,862,260 | 7 | 2015-04-25T07:18:29Z | [
"python",
"dynamic-programming",
"memoization",
"python-decorators",
"coin-change"
] | I am working on the classic *making change with coins* problem with Python. This is my implementation.
```
def memo(fn):
    def helper(*args): # here, * indicates the fn takes an arbitrary number of arguments
d = {}
if args in d:
return d[args] # args is a tuple, immutable, hashable
else:
res = fn(*args) # here * expand a tuple as arguments
d[args] = res
return res
return helper
@memo
def change(options, n):
if n < 0 or options ==():
return 0
elif n == 0:
return 1
else:
return change(options, n- options[0]) + change(options[1:], n)
```
And it turns out, the memoized version is even slower than original version! Why? What is going wrong in my implementation?
This is without memoization:
```
In [172]: %timeit change((50, 25, 10, 5, 1), 100)
100 loops, best of 3: 7.12 ms per loop
```
This is with memoization:
```
In [170]: %timeit change((50, 25, 10, 5, 1), 100)
10 loops, best of 3: 21.2 ms per loop
``` | In your current code:
```
def memo(fn):
def helper(*args):
d = {}
```
you create a new *"cache"* dictionary `d` **every time the decorated function is called**. It's no wonder it's slower! The minimal fix is:
```
def memo(fn):
d = {}
def helper(*args):
```
but it could be neater generally. I use:
```
def memo(func):
def wrapper(*args):
if args not in wrapper.cache:
wrapper.cache[args] = func(*args)
return wrapper.cache[args]
wrapper.cache = {}
return wrapper
```
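For example, here's a quick sketch of the decorator in use (applied to a hypothetical naive `fib`, not the question's `change` function; the decorator is repeated so the snippet stands alone):

```python
def memo(func):
    def wrapper(*args):
        if args not in wrapper.cache:
            wrapper.cache[args] = func(*args)
        return wrapper.cache[args]
    wrapper.cache = {}
    return wrapper

@memo
def fib(n):
    # naive recursion -- exponential without the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))         # 832040
print(len(fib.cache))  # 31 entries: one per argument tuple for n = 0..30
```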
This makes it easier to access a decorated function's `cache` for bug fixing etc. |
Convert list to dict in Python | 29,862,081 | 3 | 2015-04-25T06:57:08Z | 29,862,096 | 19 | 2015-04-25T06:58:57Z | [
"python",
"list",
"dictionary"
] | How can I convert a list
```
my_list = ["a", "b", "c"]
```
into a dictionary
```
my_dict = {
1: "a",
2: "b",
3: "c"
}
```
The keys should just be the indexes + 1 as in my example. | A simple solution is:
```
dict(enumerate(my_list, 1))
```
For example:
```
>>> dict(enumerate(["a", "b", "c"], 1))
{1: 'a', 2: 'b', 3: 'c'}
``` |
How to write a function which takes a slice? | 29,862,628 | 11 | 2015-04-25T08:02:03Z | 29,862,837 | 10 | 2015-04-25T08:27:20Z | [
"python",
"numpy",
"slice"
] | I would like to write a function in Python which takes a slice as a parameter. Ideally a user would be to be able to call the function as follows:
```
foo(a:b:c)
```
Unfortunately, this syntax is not permitted by Python - the use of `a:b:c` is only allowed within `[]`, not `()`.
I therefore see three possibilities for my function:
1. Require the user to use a slice "constructor" (where `s_` acts like [the version provided by numpy](http://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html)):
```
foo(slice(a, b, c))
foo(s_[a:b:c])
```
2. Put the logic of my function into a `__getitem__` method:
```
foo[a:b:c]
```
3. Give up trying to take a slice and take start, stop and step individually:
```
foo(a, b, c)
```
Is there a way to get the original syntax to work? If not, which of the workaround syntaxes would be preferred? Or is there another, better option? | Don't surprise your users.
If you use the slicing syntax consistently with what a developer expects from slicing, that same developer will expect a square-brackets operation, i.e. a `__getitem__()` method.
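As a rough sketch (the class and method bodies here are illustrative, not from the question), the `__getitem__()` convention looks like:

```python
class Foo(object):
    def __getitem__(self, index):
        # index arrives as a slice object when called as foo[a:b:c]
        if isinstance(index, slice):
            return (index.start, index.stop, index.step)
        return index

foo = Foo()
print(foo[1:10:2])  # (1, 10, 2)
print(foo[7])       # 7
```

Omitted parts of the slice arrive as `None` (e.g. `foo[::3]` gives `(None, None, 3)`), so the method can fill in defaults as needed.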
If instead the returned object is not somehow a slice of the original object, people will be confused if you stick to a `__getitem__()` solution. Use a function call `foo(a, b, c)`, don't mention slices at all, and optionally assign default values if that makes sense. |
os.path.isfile() returns false for existing Windows file | 29,863,318 | 5 | 2015-04-25T09:19:53Z | 29,863,357 | 9 | 2015-04-25T09:23:52Z | [
"python"
] | For some reason **os.path.isfile()** occasionally returns **false** for some existing Windows files. At first, I assumed that spaces in the filename were causing a problem, but other file paths with spaces worked fine. Here's copy from the Python console that illustrates this issue:
```
>>> import os
>>> os.path.isfile("C:\Program Files\Internet Explorer\images\bing.ico")
False
>>> os.path.isfile("C:\Program Files\Internet Explorer\images\PinnedSiteLogo.contrast-black_scale-80.png")
True
```
How can I fix this problem? | `\b` in a string means backspace. If you want actual backslashes in a string, they need to be escaped with more backslashes (`\\` instead of `\`), or you need to use a raw string (`r"..."` instead of `"..."`). For file paths, I'd recommend using forward slashes. |
python asyncio run event loop once? | 29,868,372 | 7 | 2015-04-25T17:21:37Z | 29,868,627 | 8 | 2015-04-25T17:46:39Z | [
"python",
"sockets",
"asyncsocket",
"python-asyncio"
I am trying to understand the asyncio library, specifically with using sockets. I have written some code in an attempt to gain understanding.
I wanted to run sender and receiver sockets asynchronously. I got to the point where I get all data sent up till the last one, but then I have to run one more loop. Looking at how to do this, I found [this link from stackoverflow](http://stackoverflow.com/questions/29782377/is-it-possible-to-run-only-a-single-step-of-the-asyncio-event-loop), which I implemented below -- but what is going on here? Is there a better/more sane way to do this than to call `stop` followed by `run_forever`?
The documentation for `stop()` in the event loop is:
> Stop running the event loop.
>
> Every callback scheduled before stop() is called will run. Callbacks scheduled after stop() is called will not run. However, those callbacks will run if run\_forever() is called again later.
And `run_forever()`'s documentation is:
> Run until stop() is called.
Questions:
* why in the world is `run_forever` the only way to `run_once`? This doesn't even make sense
* Is there a better way to do this?
* Does my code look like a reasonable way to program with the asyncio library?
* Is there a better way to add tasks to the event loop besides `asyncio.async()`? `loop.create_task` gives an error on my Linux system.
<https://gist.github.com/cloudformdesign/b30e0860497f19bd6596> | The `stop(); run_forever()` trick works because of how `stop` is implemented:
```
def stop(self):
"""Stop running the event loop.
Every callback scheduled before stop() is called will run.
Callback scheduled after stop() is called won't. However,
those callbacks will run if run() is called again later.
"""
self.call_soon(_raise_stop_error)
def _raise_stop_error(*args):
raise _StopError
```
So, next time the event loop runs and executes pending callbacks, it's going to call `_raise_stop_error`, which raises `_StopError`. The `run_forever` loop will break only on that specific exception:
```
def run_forever(self):
"""Run until stop() is called."""
if self._running:
raise RuntimeError('Event loop is running.')
self._running = True
try:
while True:
try:
self._run_once()
except _StopError:
break
finally:
self._running = False
```
So, by scheduling a `stop()` and then calling `run_forever`, you end up running one iteration of the event loop, then stopping once it hits the `_raise_stop_error` callback. You may have also noticed that `_run_once` is defined and called by `run_forever`. You could call that directly, but that can sometimes block if there aren't any callbacks ready to run, which may not be desirable. I don't think there's a cleaner way to do this currently; that answer was provided by Andrew Svetlov, who is an `asyncio` contributor; he would probably know if there's a better option. :)
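To make the mechanics concrete, here's a minimal sketch of the `stop()`-then-`run_forever()` trick, using a plain callback instead of the question's sockets: everything already scheduled gets one chance to run, then the loop exits.

```python
import asyncio

loop = asyncio.new_event_loop()
ran = []
loop.call_soon(ran.append, "tick")  # scheduled before stop()
loop.stop()                         # schedules the internal stop machinery
loop.run_forever()                  # runs the ready callbacks once, then breaks
loop.close()
print(ran)  # ['tick']
```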
In general, your code looks reasonable, though I think that you shouldn't be using this `run_once` approach to begin with. It's not deterministic; if you had a longer list or a slower system, it might require more than two extra iterations to print everything. Instead, you should just send a sentinel that tells the receiver to shut down, and then wait on both the send and receive coroutines to finish:
```
import sys
import time
import socket
import asyncio
addr = ('127.0.0.1', 1064)
SENTINEL = b"_DONE_"
# ... (This stuff is the same)
@asyncio.coroutine
def sending(addr, dataiter):
loop = asyncio.get_event_loop()
for d in dataiter:
print("Sending:", d)
sock = socket.socket()
yield from send_close(loop, sock, addr, str(d).encode())
# Send a sentinel
sock = socket.socket()
yield from send_close(loop, sock, addr, SENTINEL)
@asyncio.coroutine
def receiving(addr):
loop = asyncio.get_event_loop()
sock = socket.socket()
try:
sock.setblocking(False)
sock.bind(addr)
sock.listen(5)
while True:
data = yield from accept_recv(loop, sock)
if data == SENTINEL: # Got a sentinel
return
            print("Received:", data)
finally: sock.close()
def main():
loop = asyncio.get_event_loop()
# add these items to the event loop
recv = asyncio.async(receiving(addr), loop=loop)
send = asyncio.async(sending(addr, range(10)), loop=loop)
loop.run_until_complete(asyncio.wait([recv, send]))
main()
```
Finally, `asyncio.async` is the right way to add tasks to the event loop. `create_task` was added in Python 3.4.2, so if you have an earlier version it won't exist. |
Counting consecutive alphabets and hyphens and encode them as run length | 29,869,057 | 7 | 2015-04-25T18:31:29Z | 29,869,126 | 11 | 2015-04-25T18:38:13Z | [
"python",
"python-2.7",
"collections",
"counter"
How do I encode my hyphenated fasta format string to group all consecutive Nucleotides and hyphens and [encode them as run lengths](http://en.wikipedia.org/wiki/Run-length_encoding)?
Consider my sequence as "ATGC----CGCTA-----G---". The string has a sequence of [Nucleotides](http://en.wikipedia.org/wiki/Nucleotide) followed by a sequence of hyphens. I am trying to group all consecutive Nucleotides as the letter `M` and consecutive hyphens as the letter `D`, prefixing each with the size of the sub-sequence.
The final result out of this encoding should be `4M4D5M5D1M3D`.
The following pictorial graphic explains it further
```
ATGC----CGCTA-----G---
| | | | | |
V V V V V V
4M 4D 5M 5D 1M 3D
```
When I use `Counter` or `list.count()`, I get `"M":10 "D":12`:
```
from collections import Counter
seq="ATGC----CGCTA-----G---"
M=0
D=0
cigar=[]
for char in seq:
if char.isalpha():
M+=1
cigar.append("M")
else:
D+=1
cigar.append("D")
print Counter(cigar)
``` | This problem is ideal for [itertools.groupby](https://docs.python.org/2/library/itertools.html#itertools.groupby)
**Implementation**
```
from itertools import groupby
''.join('{}{}'.format(len(list(g)), 'DM'[k])
for k, g in groupby(seq, key = str.isalpha))
```
**Output**
`'4M4D5M5D1M3D'`
**Explanation**
Notably, the key function is crucial here. Group the sequence based on whether each character is alphabetic or not. Once done, it is straightforward to count the size of each group and figure out the type of the group from the key element.
**Some explanation of code**
* `'DM'[k]`: This is just a nifty way of representing `"M" if k == True else "D"`
* `len(list(g))`: Determines the size of each group. Alternatively, it could have been written as `sum(1 for e in g)`
* `'{}{}'.format`: String formatting to create a concatenation of the consecutive frequency and the type
* `''.join(`: To join the list elements as a string sequence. |
Why is it valid to assign to an empty list but not to an empty tuple? | 29,870,019 | 26 | 2015-04-25T19:55:52Z | 29,870,228 | 12 | 2015-04-25T20:14:40Z | [
"python",
"iterable-unpacking"
] | This came up in [a recent PyCon talk](https://youtu.be/MCs5OvhV9S4?t=42m17s).
The statement
```
[] = []
```
does nothing meaningful, but it does not throw an exception either. I have the feeling this must be due to unpacking rules. You can do [tuple unpacking](http://openbookproject.net/thinkcs/python/english3e/tuples.html#tuple-assignment) with lists too, e.g.,
```
[a, b] = [1, 2]
```
does what you would expect. As logical consequence, this also should work, when the number of elements to unpack is 0, which would explain why assigning to an empty list is valid. This theory is further supported by what happens when you try to assign a non-empty list to an empty list:
```
>>> [] = [1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack
```
I would be happy with this explanation, if the same would also be true for tuples. If we can unpack to a list with 0 elements, we should also be able to unpack to a tuple with 0 elements, no? However:
```
>>> () = ()
File "<stdin>", line 1
SyntaxError: can't assign to ()
```
It seems like unpacking rules are not applied for tuples as they are for lists. I cannot think of any explanation for this inconsistency. Is there a reason for this behavior? | I decided to try to use `dis` to figure out what's going on here, when I tripped over something curious:
```
>>> def foo():
... [] = []
...
>>> dis.dis(foo)
2 0 BUILD_LIST 0
3 UNPACK_SEQUENCE 0
6 LOAD_CONST 0 (None)
9 RETURN_VALUE
>>> def bar():
... () = ()
...
File "<stdin>", line 2
SyntaxError: can't assign to ()
```
Somehow the Python *compiler* special-cases an empty tuple on the LHS. This behavior deviates from [the specification](https://docs.python.org/2/reference/simple_stmts.html#assignment-statements), which states:
> Assignment of an object to a single target is recursively defined as follows.
>
> ...
>
> * If the target is a target list enclosed in parentheses or in square brackets: The object must be an iterable with the same number of items as there are targets in the target list, and its items are assigned, from left to right, to the corresponding targets.
So it looks like you've found a legitimate, although ultimately inconsequential, bug in CPython (2.7.8 and 3.4.1 tested).
IronPython 2.6.1 exhibits the same difference, but Jython 2.7b3+ has a stranger behavior, with `() = ()` starting a statement with seemingly no way to end it. |
Why is it valid to assign to an empty list but not to an empty tuple? | 29,870,019 | 26 | 2015-04-25T19:55:52Z | 29,870,332 | 18 | 2015-04-25T20:23:37Z | [
"python",
"iterable-unpacking"
] | This came up in [a recent PyCon talk](https://youtu.be/MCs5OvhV9S4?t=42m17s).
The statement
```
[] = []
```
does nothing meaningful, but it does not throw an exception either. I have the feeling this must be due to unpacking rules. You can do [tuple unpacking](http://openbookproject.net/thinkcs/python/english3e/tuples.html#tuple-assignment) with lists too, e.g.,
```
[a, b] = [1, 2]
```
does what you would expect. As logical consequence, this also should work, when the number of elements to unpack is 0, which would explain why assigning to an empty list is valid. This theory is further supported by what happens when you try to assign a non-empty list to an empty list:
```
>>> [] = [1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack
```
I would be happy with this explanation, if the same would also be true for tuples. If we can unpack to a list with 0 elements, we should also be able to unpack to a tuple with 0 elements, no? However:
```
>>> () = ()
File "<stdin>", line 1
SyntaxError: can't assign to ()
```
It seems like unpacking rules are not applied for tuples as they are for lists. I cannot think of any explanation for this inconsistency. Is there a reason for this behavior? | The comment by @user2357112 that this seems to be coincidence appears to be correct. The relevant part of the Python source code is in [`Python/ast.c`](https://hg.python.org/cpython/file/0351b0cb31d6/Python/ast.c#l916):
```
switch (e->kind) {
# several cases snipped
case List_kind:
e->v.List.ctx = ctx;
s = e->v.List.elts;
break;
case Tuple_kind:
if (asdl_seq_LEN(e->v.Tuple.elts)) {
e->v.Tuple.ctx = ctx;
s = e->v.Tuple.elts;
}
else {
expr_name = "()";
}
break;
# several more cases snipped
}
/* Check for error string set by switch */
if (expr_name) {
char buf[300];
PyOS_snprintf(buf, sizeof(buf),
"can't %s %s",
ctx == Store ? "assign to" : "delete",
expr_name);
return ast_error(c, n, buf);
}
```
`tuple`s have an explicit check that the length is not zero and raise an error when it is. `list`s do not have any such check, so there's no exception raised.
I don't see any particular reason for allowing assignment to an empty list when it is an error to assign to an empty tuple, but perhaps there's some special case that I'm not considering. I'd suggest that this is probably a (trivial) bug and that the behaviors should be the same for both types. |
Python read a huge file and eliminate duplicate lines | 29,880,603 | 5 | 2015-04-26T16:57:39Z | 29,880,709 | 7 | 2015-04-26T17:07:12Z | [
"python",
"large-files"
] | I have a huge text file that has duplicate lines. The size would be about 150000000 lines. I'd like to find the most efficient way to read these lines in and eliminate duplicates. Some of the approaches I'm considering are as follows :-
1. Read the whole file in, do a list(set(lines)).
2. Read 10k lines in at a time, do a list(set(lines)) on what I have, read another 10k lines into the list, do a list(set(lines)). Repeat.
How would you approach this problem? Would any form of multiprocessing help? | Multiprocessing will not really help, because your bottleneck is memory. You will need to use hashes:
1. Read line
2. Calculate hash, e.g. md5, look it up in a set of all encountered hashes.
3. Output line if hash not found in set and add this hash to set.
Couple things to be mindful of:
* md5 takes 128 bits, so even without overhead it is more than 2G of ram.
* set and dict have large memory overhead.
So if you have 4+ gigs, it is doable. A more scalable solution would be to store encountered hashes in sorted file(s) on disk, and search through them every time. This will be (a lot!) slower, but you can have as low a memory footprint as you want.
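Here's a rough sketch of the in-memory variant described above (assuming text input and that the few gigabytes for the digest set is acceptable; `dedupe` and the sample data are illustrative, not from the question):

```python
import hashlib
import io

def dedupe(infile, outfile):
    # Track 16-byte md5 digests instead of whole lines to bound memory.
    seen = set()
    for line in infile:
        digest = hashlib.md5(line.encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            outfile.write(line)

src = io.StringIO("spam\neggs\nspam\nham\neggs\n")
dst = io.StringIO()
dedupe(src, dst)
print(dst.getvalue())  # spam, eggs, ham -- first occurrence of each line kept
```

The same function works on real file handles; only the first occurrence of each line is written, so the output order matches the input.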
Also if you don't care about line ordering in resulting file, you can split your file into smaller files based on some hash function (lines with md5 starting with a, lines with md5 starting with b etc). This will allow you to make them small enough to just `sort | uniq` them (or sort in-memory with python, if you wish) and concatenate results. |
How to mock asyncio coroutines? | 29,881,236 | 13 | 2015-04-26T17:51:07Z | 29,892,903 | 9 | 2015-04-27T10:13:54Z | [
"python",
"unit-testing",
"mocking",
"python-asyncio"
] | The following code fails with `TypeError: 'Mock' object is not iterable` in `ImBeingTested.i_call_other_coroutines` because I've replaced `ImGoingToBeMocked` by a Mock object.
How can I mock coroutines?
```
class ImGoingToBeMocked:
@asyncio.coroutine
def yeah_im_not_going_to_run(self):
yield from asyncio.sleep(1)
return "sup"
class ImBeingTested:
def __init__(self, hidude):
self.hidude = hidude
@asyncio.coroutine
def i_call_other_coroutines(self):
return (yield from self.hidude.yeah_im_not_going_to_run())
class TestImBeingTested(unittest.TestCase):
def test_i_call_other_coroutines(self):
mocked = Mock(ImGoingToBeMocked)
ibt = ImBeingTested(mocked)
ret = asyncio.get_event_loop().run_until_complete(ibt.i_call_other_coroutines())
``` | Since `mock` library doesn't support coroutines I create mocked coroutines manually and assign those to mock object. A bit more verbose but it works.
Your example may look like this:
```
import asyncio
import unittest
from unittest.mock import Mock
class ImGoingToBeMocked:
@asyncio.coroutine
def yeah_im_not_going_to_run(self):
yield from asyncio.sleep(1)
return "sup"
class ImBeingTested:
def __init__(self, hidude):
self.hidude = hidude
@asyncio.coroutine
def i_call_other_coroutines(self):
return (yield from self.hidude.yeah_im_not_going_to_run())
class TestImBeingTested(unittest.TestCase):
def test_i_call_other_coroutines(self):
mocked = Mock(ImGoingToBeMocked)
ibt = ImBeingTested(mocked)
@asyncio.coroutine
def mock_coro():
return "sup"
mocked.yeah_im_not_going_to_run = mock_coro
ret = asyncio.get_event_loop().run_until_complete(
ibt.i_call_other_coroutines())
self.assertEqual("sup", ret)
if __name__ == '__main__':
unittest.main()
``` |
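On Python 3.8 and later, `unittest.mock.AsyncMock` makes the manual wrapping above unnecessary. A sketch using the newer `async def` syntax (the class names simply mirror the question):

```python
import asyncio
import unittest
from unittest.mock import AsyncMock  # available since Python 3.8

class ImBeingTested:
    def __init__(self, hidude):
        self.hidude = hidude

    async def i_call_other_coroutines(self):
        return await self.hidude.yeah_im_not_going_to_run()

class TestImBeingTested(unittest.TestCase):
    def test_i_call_other_coroutines(self):
        mocked = AsyncMock()  # attribute calls return awaitables automatically
        mocked.yeah_im_not_going_to_run.return_value = "sup"
        ibt = ImBeingTested(mocked)
        ret = asyncio.run(ibt.i_call_other_coroutines())
        self.assertEqual("sup", ret)
```

`AsyncMock` also records calls, so `mocked.yeah_im_not_going_to_run.assert_awaited_once()` works as you would expect.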
How to mock asyncio coroutines? | 29,881,236 | 13 | 2015-04-26T17:51:07Z | 30,177,263 | 8 | 2015-05-11T20:41:16Z | [
"python",
"unit-testing",
"mocking",
"python-asyncio"
] | The following code fails with `TypeError: 'Mock' object is not iterable` in `ImBeingTested.i_call_other_coroutines` because I've replaced `ImGoingToBeMocked` by a Mock object.
How can I mock coroutines?
```
class ImGoingToBeMocked:
@asyncio.coroutine
def yeah_im_not_going_to_run(self):
yield from asyncio.sleep(1)
return "sup"
class ImBeingTested:
def __init__(self, hidude):
self.hidude = hidude
@asyncio.coroutine
def i_call_other_coroutines(self):
return (yield from self.hidude.yeah_im_not_going_to_run())
class TestImBeingTested(unittest.TestCase):
def test_i_call_other_coroutines(self):
mocked = Mock(ImGoingToBeMocked)
ibt = ImBeingTested(mocked)
ret = asyncio.get_event_loop().run_until_complete(ibt.i_call_other_coroutines())
``` | I am writing a wrapper around unittest which aims at cutting the boilerplate when writing tests for asyncio.
The code lives here: <https://github.com/Martiusweb/asynctest>
You can mock a coroutine with `asynctest.CoroutineMock`:
```
>>> mock = CoroutineMock(return_value='a result')
>>> asyncio.iscoroutinefunction(mock)
True
>>> asyncio.iscoroutine(mock())
True
>>> asyncio.get_event_loop().run_until_complete(mock())
'a result'
```
It also works with the `side_effect` attribute, and an `asynctest.Mock` with a `spec` can return CoroutineMock:
```
>>> asyncio.iscoroutinefunction(Foo().coroutine)
True
>>> asyncio.iscoroutinefunction(Foo().function)
False
>>> asynctest.Mock(spec=Foo()).coroutine
<class 'asynctest.mock.CoroutineMock'>
>>> asynctest.Mock(spec=Foo()).function
<class 'asynctest.mock.Mock'>
```
All the features of unittest.Mock are expected to work correctly (patch(), etc). |
String split in python | 29,887,425 | 3 | 2015-04-27T04:50:36Z | 29,887,448 | 7 | 2015-04-27T04:53:08Z | [
"python",
"for-loop",
"split"
] | In the loop below, `content` is a list containing an unknown number of strings. Each string contains a name followed by a set of numbers, each delimited by a space. I am trying to use `split` to put the name and each score into a variable, but I am having trouble because each name has a variable number of scores. How would I be able to do this without knowing how many scores each name will have?
```
for i in content:
name, score1, score2 = i.split()
print name, score1, score2
``` | You can use [slicing for assignment](https://docs.python.org/3.4/reference/expressions.html#slicings) :
```
for i in content:
s=i.split()
name,scores=s[0],s[1:]
```
At the end you'll have the name in the `name` variable and the list of scores in `scores`.
In Python 3 you can use `star expressions`:
```
for i in content:
name,*scores=i.split()
``` |
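A small sketch combining the starred assignment with converting the scores to numbers (the sample lines here are made up for illustration):

```python
content = ["alice 10 20 30", "bob 7 8"]  # hypothetical input lines

for line in content:
    name, *scores = line.split()       # name gets the first token,
    scores = [int(s) for s in scores]  # scores gets however many remain
    print(name, scores)
```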
subprocess.check_output(): OSError file not found in Python | 29,891,059 | 2 | 2015-04-27T08:47:44Z | 29,891,132 | 11 | 2015-04-27T08:51:42Z | [
"python"
] | Executing following command and its variations always results in an error, which I just cannot figure out:
```
command = "/bin/dd if=/dev/sda8 count=100 skip=$(expr 19868431049 / 512)"
print subprocess.check_output([command])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/subprocess.py", line 566, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
```
Which file is it referring to? Other commands like `ls` and `wc` run correctly, and the command also runs fine in a terminal, but not from the Python script. | Your `command` is a list with one element. Imagine if you tried to run this at the shell:
```
/bin/'dd if='/dev/'sda8 count=100 skip=$(expr 19868431049 '/' 512)'
```
That's effectively what you're doing. There's almost certainly no directory named `dd if=` in your `bin` directory, and there's even more almost certainly no `dev` directory under that with an `sda8 count=100 skip=$(expr 19868431049` directory with a program named `512` in it.
What you want is a list where each argument is its own element:
```
command = ['/bin/dd', 'if=/dev/sda8', 'count=100', 'skip=$(expr 19868431049 / 512)']
print subprocess.check_output(command) # notice no []
```
---
But that brings us to your second problem: `$(expr 19868431049 / 512)` isn't going to be parsed by Python or by `dd`; that's bash syntax. You can, of course, just do the same thing in Python instead of in bash:
```
command = ['/bin/dd', 'if=/dev/sda8', 'count=100',
'skip={}'.format(19868431049 // 512)]
print subprocess.check_output(command)
```
---
Or, if you really want to use bash for no good reason, pass a string, rather than a list, and use `shell=True`:
```
command = "/bin/dd if=/dev/sda8 count=100 skip=$(expr 19868431049 / 512)"
print subprocess.check_output(command, shell=True) # still no []
```
Although that still isn't going to work portably, because the default shell is `/bin/sh`, which may not know how to handle bashisms like `$(…)` (and `expr`, although I think POSIX requires that `expr` exist as a separate process…). So:
```
command = "/bin/dd if=/dev/sda8 count=100 skip=$(expr 19868431049 / 512)"
print subprocess.check_output(command, shell=True, executable='/bin/bash')
``` |
Nim equivalent of Python's list comprehension | 29,895,918 | 9 | 2015-04-27T12:33:50Z | 29,896,427 | 8 | 2015-04-27T12:55:20Z | [
"python",
"list-comprehension",
"nim",
"nimrod"
] | Since Nim shares a lot of features with Python, I would not be surprised if it implements [Python's list comprehension](http://python-3-patterns-idioms-test.readthedocs.org/en/latest/Comprehensions.html#list-comprehensions) too:
```
string = "Hello 12345 World"
numbers = [x for x in string if x.isdigit()]
# ['1', '2', '3', '4', '5']
```
Is this actually possible in Nim? If not, could it be implemented with templates/macros? | List comprehension is already implemented in Nim, but currently still lives in the `future` package (i.e., you have to `import future`). It is implemented as a macro called `lc` and allows you to write list comprehensions like this:
```
lc[x | (x <- 1..10, x mod 2 == 0), int]
lc[(x,y,z) | (x <- 1..n, y <- x..n, z <- y..n, x*x + y*y == z*z), tuple[a,b,c: int]]
```
Note that the macro requires you to specify the type of the elements. |
Run .py file until specified line number | 29,896,999 | 5 | 2015-04-27T13:20:14Z | 29,897,131 | 11 | 2015-04-27T13:26:53Z | [
"python",
"terminal"
] | In a linux terminal typing
```
python script.py
```
Will run `script.py` and exit the Python console, but what if I just want to run a part of the script and leave the console open? For example, run `script.py` until line 15 and leave the console open for further scripting. How would I do this?
Let's say that's possible; then, with the console still open and `script.py` run until line 15, can I then call line fragments from other .py files from inside the console?
...something like
```
python script.py 15 #(opens script and runs lines 1-15 and leaves console open)
```
Then having the console open, I would like to run lines 25-42 from `anotherscript.py`
```
>15 lines of python code run from script.py
> run('anotherscript.py', lines = 25-42)
> print "I'm so happy the console is still open so I can script some more")
I'm so happy the console is still open so I can script some more
>
``` | Your best bet might be `pdb`, the [Python debugger](https://docs.python.org/3/library/pdb.html). You can start your script under `pdb`, set a breakpoint on line 15, and then run your script.
```
python -m pdb script.py
b 15 # <-- Set breakpoint on line 15
c # "continue" -> run your program
# will break on line 15
```
You can then inspect your variables and call functions. Since Python 3.2, you can also use the `interact` command inside `pdb` to get a regular Python shell at the current execution point!
If that fits your bill and you also like IPython, you can check out [IPdb](https://pypi.python.org/pypi/ipdb), which is a bit nicer than normal pdb, and drops you into an IPython shell with `interact`. |
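If you want something like the hypothetical `run('anotherscript.py', lines=25-42)` from the question, a crude sketch with `exec` is possible. It is fragile (it only works when the chosen lines form complete statements), and `run` here is a made-up helper, not a standard function:

```python
def run(path, start, end, namespace=None):
    """Execute lines start..end (1-based, inclusive) of a Python file
    in the given namespace and return that namespace."""
    ns = namespace if namespace is not None else {}
    with open(path) as f:
        lines = f.readlines()
    exec("".join(lines[start - 1:end]), ns)
    return ns
```

Inside an interactive session you could then call `run('anotherscript.py', 25, 42, globals())` to execute just that slice against your current variables.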
Print the complete string of a pandas dataframe | 29,902,714 | 6 | 2015-04-27T17:54:00Z | 29,902,819 | 8 | 2015-04-27T17:59:42Z | [
"python",
"string",
"pandas"
] | I am struggling with a seemingly very simple thing. I have a pandas dataframe containing a very long string.
```
df = pd.DataFrame({'one' : ['one', 'two', 'This is very long string very long string very long string veryvery long string']})
```
Now when I try to print it, I do not see the full string; I see only part of it.
I tried the following options:
* using print(df.iloc[2])
* using to\_html
* using to\_string
* One of the Stack Overflow answers suggested increasing the column width
using the pandas display option; that did not work either.
* I also did not understand how set\_printoptions would help me.
Any ideas are appreciated. It looks very simple, but I am not able to get it! | You can use `options.display.max_colwidth` to specify that you want to see more in the default representation:
```
In [2]: df
Out[2]:
one
0 one
1 two
2 This is very long string very long string very...
In [3]: pd.options.display.max_colwidth
Out[3]: 50
In [4]: pd.options.display.max_colwidth = 100
In [5]: df
Out[5]:
one
0 one
1 two
2 This is very long string very long string very long string veryvery long string
```
And indeed, if you just want to inspect the one value, by accessing it (as a scalar, not as a row as `df.iloc[2]` does) you also see the full string:
```
In [7]: df.iloc[2,0] # or df.loc[2,'one']
Out[7]: 'This is very long string very long string very long string veryvery long string'
``` |
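If you only want the full width temporarily, `pandas.option_context` applies the setting inside a `with` block and restores the previous value afterwards; a sketch:

```python
import pandas as pd

df = pd.DataFrame({'one': ['one', 'two',
                           'This is very long string very long string very long '
                           'string veryvery long string']})

with pd.option_context('display.max_colwidth', 100):
    print(df)  # full strings while inside the block
print(df)      # back to the previous, truncated display
```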
Count most frequent 100 words from sentences in Dataframe Pandas | 29,903,025 | 3 | 2015-04-27T18:11:22Z | 29,903,102 | 7 | 2015-04-27T18:15:59Z | [
"python",
"pandas"
] | I have text reviews in one column of a Pandas dataframe, and I want to count the N most frequent words with their frequency counts (over the whole column, NOT within a single cell). One approach is counting the words using a Counter while iterating through each row. Is there a better alternative?
Representative data.
```
0 a heartening tale of small victories and endu
1 no sophomore slump for director sam mendes w
2 if you are an actor who can relate to the sea
3 it's this memory-as-identity obviation that g
4 boyd's screenplay ( co-written with guardian
``` | ```
Counter(" ".join(df["text"]).split()).most_common(100)
```
I'm pretty sure this would give you what you want (you might have to remove some non-words from the counter result before calling `most_common`). |
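A sketch of that cleanup step, lowercasing and keeping only word characters before counting (a plain list stands in for `df["text"]` here):

```python
import re
from collections import Counter

# A plain list standing in for df["text"]
reviews = [
    "A heartening tale of small victories and endurance.",
    "No sophomore slump for director Sam Mendes!",
]

words = re.findall(r"[a-z']+", " ".join(reviews).lower())
print(Counter(words).most_common(3))
```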
Django: How to automatically change a field's value at the time mentioned in the same object? | 29,903,134 | 10 | 2015-04-27T18:18:11Z | 31,628,267 | 10 | 2015-07-25T15:47:34Z | [
"python",
"django",
"django-models",
"celery",
"django-celery"
] | I am working on a django project for racing event in which a table in the database has three fields.
1) Boolean field to know whether the race is active or not
2) Race start time
3) Race end time
While creating an object, the start\_time and end\_time are specified. How do I change the value of the boolean field to True when the race starts and to False when it ends? How do I schedule these activities? | **To automatically update a model field after a specific time, you can use [Celery tasks](http://docs.celeryproject.org/en/latest/userguide/tasks.html).**
**Step-1: Create a Celery Task**
We will first create a celery task called `set_race_as_inactive` which will set the `is_active` flag of the `race_object` to `False` after the current date is greater than the `end_time` of the `race_object`.
This task will be executed by `Celery` only if the current time is greater than the race object's `end_time`.
```
@app.task
def set_race_as_inactive(race_object):
"""
This celery task sets the 'is_active' flag of the race object
to False in the database after the race end time has elapsed.
"""
race_object.is_active = False # set the race as not active
race_object.save() # save the race object
```
**Step-2: Call this celery task using [`eta`](http://ask.github.io/celery/userguide/executing.html#eta-and-countdown) argument**
After creating the celery task `set_race_as_inactive`, we need to call this celery task.
We will call this task whenever we save a new `race_object` into our database. So, whenever a new `race_object` is saved, a celery task will be fired which will execute only after the `end_time` of the `race_object`.
We will call the task using `apply_async()` and pass the `eta` argument as the `end_time` of the `race_object`.
As per [Celery docs,](http://celery.readthedocs.org/en/latest/userguide/calling.html#eta-and-countdown)
> The ETA (estimated time of arrival) lets you set a specific date and
> time that is the earliest time at which your task will be executed.
>
> The task is guaranteed to be executed at some time after the specified
> date and time, but not necessarily at that exact time.
```
from my_app.tasks import set_race_as_inactive
class RaceModel(models.Model):
...
def save(self, *args, **kwargs):
..
create_task = False # variable to know if celery task is to be created
if self.pk is None: # Check if instance has 'pk' attribute set
# Celery Task is to created in case of 'INSERT'
create_task = True # set the variable
super(RaceModel, self).save(*args, **kwargs) # Call the Django's "real" save() method.
if create_task: # check if task is to be created
# pass the current instance as 'args' and call the task with 'eta' argument
# to execute after the race `end_time`
set_race_as_inactive.apply_async(args=[self], eta=self.end_time) # task will be executed after 'race_end_time'
```
This checking of `self.pk` with `None` is done so that only in case of new objects creation, a celery task is created. If we don't do this, then for every `.save()` call (either `INSERT` or `UPDATE`) a celery task will be created which we don't want. This will lead to many unnecessary celery tasks waiting to be executed and will overload our celery queues.
The **benefit** of using Celery is that updating the `is_active` flag happens automatically in the background, asynchronously, without you needing to worry about updating it manually. Every time a new race object is created, a task will be fired and Celery will defer its execution until the `end_time` of the race. After the `end_time` has elapsed, Celery will execute that task. |
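If running Celery is overkill for your project, a simpler alternative is not to store the flag at all and compute it on access instead. A plain-Python sketch of the idea (in Django this would be a model `@property` comparing against `django.utils.timezone.now()`):

```python
from datetime import datetime, timedelta

class Race:
    """Minimal stand-in for the Django model, for illustration only."""
    def __init__(self, start_time, end_time):
        self.start_time = start_time
        self.end_time = end_time

    @property
    def is_active(self):
        # Derived on every access, so it can never go stale.
        return self.start_time <= datetime.now() <= self.end_time
```

The trade-off is that you can no longer filter on a plain database column; in a queryset you would filter with `start_time__lte=now, end_time__gte=now` instead.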
Operate on a list in a pythonic way when output depends on other elements | 29,903,211 | 11 | 2015-04-27T18:22:38Z | 29,903,458 | 7 | 2015-04-27T18:36:25Z | [
"python"
] | I have a task requiring an operation on every element of a list, with the outcome of the operation depending on other elements in the list.
For example, I might like to concatenate a list of strings conditional on them starting with a particular character:
This code solves the problem:
```
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
concat = []
for element in x:
if element.startswith('*'):
concat.append(element)
else:
concat[len(concat) - 1] += element
```
resulting in:
```
concat
Out[16]: ['*abc', '*de', '*f', '*g']
```
But this seems horribly un-Pythonic. How should one operate on the elements of a `list` when the outcome of the operation depends on previous outcomes? | You could use regex to accomplish this succinctly. This does, however, sort of circumvent your question regarding how to operate on dependent list elements. Credits to [mbomb007](http://stackoverflow.com/users/2415524/mbomb007) for improving the allowed character functionality.
```
import re
z = re.findall('\*[^*]+',"".join(x))
```
**Outputs:**
```
['*abc', '*de', '*f', '*g']
```
---
**Small benchmarking:**
[Donkey Kong's answer](http://stackoverflow.com/a/29903458/4686625):
```
import timeit
setup = '''
import re
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
y = ['*a', 'b', 'c', '*d', 'e', '*f', '*g'] * 100
'''
print (min(timeit.Timer('re.findall("\*[^\*]+","".join(x))', setup=setup).repeat(7, 1000)))
print (min(timeit.Timer('re.findall("\*[^\*]+","".join(y))', setup=setup).repeat(7, 1000)))
```
Returns `0.00226416693456`, and `0.06827958075`, respectively.
[Chepner's answer](http://stackoverflow.com/a/29903809/4686625):
```
setup = '''
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
y = ['*a', 'b', 'c', '*d', 'e', '*f', '*g'] * 100
def chepner(x):
partials = []
for element in x:
if element.startswith('*'):
partials.append([])
partials[-1].append(element)
concat = map("".join, partials)
return concat
'''
print (min(timeit.Timer('chepner(x)', setup=setup).repeat(7, 1000)))
print (min(timeit.Timer('chepner(y)', setup=setup).repeat(7, 1000)))
```
Returns `0.00456210269896` and `0.364635824689`, respectively.
[Saksham's answer](http://stackoverflow.com/a/29904042/4686625)
```
setup = '''
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
y = ['*a', 'b', 'c', '*d', 'e', '*f', '*g'] * 100
'''
print (min(timeit.Timer("['*'+item for item in ''.join(x).split('*') if item]", setup=setup).repeat(7, 1000)))
print (min(timeit.Timer("['*'+item for item in ''.join(y).split('*') if item]", setup=setup).repeat(7, 1000))))
```
Returns `0.00104848906006`, and `0.0556093171512` respectively.
**tl;dr** Saksham's is slightly faster than mine, then Chepner's follows both of ours. |
Operate on a list in a pythonic way when output depends on other elements | 29,903,211 | 11 | 2015-04-27T18:22:38Z | 29,903,809 | 13 | 2015-04-27T18:56:36Z | [
"python"
] | I have a task requiring an operation on every element of a list, with the outcome of the operation depending on other elements in the list.
For example, I might like to concatenate a list of strings conditional on them starting with a particular character:
This code solves the problem:
```
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
concat = []
for element in x:
if element.startswith('*'):
concat.append(element)
else:
concat[len(concat) - 1] += element
```
resulting in:
```
concat
Out[16]: ['*abc', '*de', '*f', '*g']
```
But this seems horribly un-Pythonic. How should one operate on the elements of a `list` when the outcome of the operation depends on previous outcomes? | A few relevant excerpts from `import this` (the arbiter of what is Pythonic):
* Simple is better than complex
* Readability counts
* Explicit is better than implicit.
I would just use code like this, and not worry about replacing the for loop with something "flatter".
```
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
partials = []
for element in x:
if element.startswith('*'):
partials.append([])
partials[-1].append(element)
concat = map("".join, partials)
``` |
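Another variant that keeps a single pass with no index bookkeeping uses `itertools.groupby` with a stateful key that increments at every `'*'` element (a sketch, not necessarily more readable than the loop above):

```python
from itertools import groupby

def concat_groups(items):
    """Join consecutive elements, starting a new group at each '*...' element."""
    group_id = 0
    def key(s):
        nonlocal group_id
        if s.startswith('*'):
            group_id += 1
        return group_id
    return ["".join(g) for _, g in groupby(items, key=key)]

x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
print(concat_groups(x))  # ['*abc', '*de', '*f', '*g']
```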
Why is my computation so much faster in C# than Python | 29,903,320 | 7 | 2015-04-27T18:28:05Z | 29,904,752 | 8 | 2015-04-27T19:53:51Z | [
"c#",
"python"
] | Below is a simple piece of process coded in `C#` and `Python` respectively (for those of you curious about the process, it's the solution for Problem No. 5 of [Project Euler](https://projecteuler.net/problem=5)).
My question is, the `C#` code below takes only 9 seconds to iterate, while completion of `Python` code takes 283 seconds (to be exact, 283 seconds on Python 3.4.3 - 64 bits and 329 seconds on Python 2.7.9 - 32 bits).
So far, I've coded similar processes both in `C#` and `Python` and the execution time differences were comparable. This time however, there is an extreme difference between the elapsed times.
I think, some part of this difference arise from the flexible variable type of python language (I suspect, python converts some part of variables into double) but this much is still hard to explain.
What am I doing wrong?
My system: Windows-7 64 bits,
C# - VS Express 2012 (9 seconds)
Python 3.4.3 64 bits (283 seconds)
Python 2.7.9 32 bits (329 seconds)
c-sharp code:
```
using System;
namespace bug_vcs {
class Program {
public static void Main(string[] args) {
DateTime t0 = DateTime.Now;
int maxNumber = 20;
bool found = false;
long start = maxNumber;
while (!found) {
found = true;
int i = 2;
while ((i < maxNumber + 1) && found) {
if (start % i != 0) {
found = false;
}
i++;
}
start++;
}
Console.WriteLine("{0:d}", start - 1);
Console.WriteLine("time elapsed = {0:f} sec.", (DateTime.Now - t0).Seconds);
Console.ReadLine();
}
}
}
```
and python code:
```
from datetime import datetime
t0 = datetime.now()
max_number = 20
found = False
start = max_number
while not found:
found = True
i = 2
while ((i < max_number + 1) and found):
if (start % i) != 0:
found = False
i += 1
start += 1
print("number {0:d}\n".format(start - 1))
print("time elapsed = {0:f} sec.\n".format((datetime.now() - t0).seconds))
``` | The answer is simply that Python deals with objects for everything and that it doesn't have [JIT](http://en.wikipedia.org/wiki/Just-in-time_compilation) by default. So rather than being very efficient by modifying a few bytes on the stack and optimizing the hot parts of the code (i.e., the iteration) â Python chugs along with rich objects representing numbers and no on-the-fly optimizations.
If you tried this in a variant of Python that has JIT (for example, PyPy) I guarantee you that you'll see a massive difference.
A general tip is to avoid standard Python for very computationally expensive operations (especially if this is for a backend serving requests from multiple clients). Java, C#, JavaScript, etc. with JIT are incomparably more efficient.
By the way, if you want to write your example in a more Pythonic manner, you could do it like this:
```
from datetime import datetime
start_time = datetime.now()
max_number = 20
x = max_number
while True:
i = 2
while i <= max_number:
if x % i: break
i += 1
else:
# x was not divisible by 2...20
break
x += 1
print('number: %d' % x)
print('time elapsed: %d seconds' % (datetime.now() - start_time).seconds)
```
The above executed in 90 seconds for me. The reason it's faster comes down to seemingly stupid things like `x` being shorter than `start`, that I'm not assigning variables as often, and that I'm relying on Python's own control structures rather than variable checking to jump in/out of loops. |
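For this particular Project Euler problem, the biggest win is algorithmic rather than language choice: the sought number is just the least common multiple of 1..20, which any language computes instantly. A sketch (`math.gcd` exists since Python 3.5):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    """Least common multiple of two integers."""
    return a * b // gcd(a, b)

print(reduce(lcm, range(1, 21)))  # 232792560
```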
How to find the points of intersection of a line and multiple curves in Python? | 29,904,423 | 8 | 2015-04-27T19:35:09Z | 29,905,260 | 14 | 2015-04-27T20:23:53Z | [
"python",
"numpy",
"scipy",
"equation"
] | I have data represented in the figure.

The curves were extrapolated and I have a line whose equation is known. The equation of curves are unknown. Now, how do I find the points of intersection of this line with each of the curves?
The reproducible code:
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
x = np.array([[0.12, 0.11, 0.1, 0.09, 0.08],
[0.13, 0.12, 0.11, 0.1, 0.09],
[0.15, 0.14, 0.12, 0.11, 0.1],
[0.17, 0.15, 0.14, 0.12, 0.11],
[0.19, 0.17, 0.16, 0.14, 0.12],
[0.22, 0.19, 0.17, 0.15, 0.13],
[0.24, 0.22, 0.19, 0.16, 0.14],
[0.27, 0.24, 0.21, 0.18, 0.15],
[0.29, 0.26, 0.22, 0.19, 0.16]])
y = np.array([[71.64, 78.52, 84.91, 89.35, 97.58],
[66.28, 73.67, 79.87, 85.36, 93.24],
[61.48, 69.31, 75.36, 81.87, 89.35],
[57.61, 65.75, 71.7, 79.1, 86.13],
[55.12, 63.34, 69.32, 77.29, 83.88],
[54.58, 62.54, 68.7, 76.72, 82.92],
[56.58, 63.87, 70.3, 77.69, 83.53],
[61.67, 67.79, 74.41, 80.43, 85.86],
[70.08, 74.62, 80.93, 85.06, 89.84]])
x1 = np.linspace(0, 0.4, 100)
y1 = -100 * x1 + 100
plt.figure(figsize = (5.15,5.15))
plt.subplot(111)
for i in range(5):
x_val = np.linspace(x[0, i] - 0.05, x[-1, i] + 0.05, 100)
x_int = np.interp(x_val, x[:, i], y[:, i])
poly = np.polyfit(x[:, i], y[:, i], deg=2)
y_int = np.polyval(poly, x_val)
plt.plot(x[:, i], y[:, i], linestyle = '', marker = 'o')
plt.plot(x_val, y_int, linestyle = ':', linewidth = 0.25, color = 'black')
plt.plot(x1, y1, linestyle = '-.', linewidth = 0.5, color = 'black')
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
``` | We *do* know the equations of the curves. They are of the form `a*x**2 + b*x + c`, where `a`,`b`, and `c` are the elements of the vector returned by `np.polyfit`. Then we just need to find the roots of a quadratic equation in order to find the intersections:
```
def quadratic_intersections(p, q):
"""Given two quadratics p and q, determines the points of intersection"""
x = np.roots(np.asarray(p) - np.asarray(q))
y = np.polyval(p, x)
return x, y
```
The above function isn't super robust: there doesn't *need* to be a real root, and it doesn't really check for that. You're free to make it better.
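One way to harden it is to keep only the (approximately) real roots, returning empty arrays when the quadratics don't intersect; a sketch (the function name and tolerance are my own, to keep it distinct from the version above):

```python
import numpy as np

def quadratic_intersections_real(p, q, tol=1e-9):
    """Real intersection points of two quadratics p and q (possibly none)."""
    roots = np.roots(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    x = roots[np.abs(roots.imag) < tol].real  # drop complex roots
    return x, np.polyval(p, x)
```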
Anyways, we give `quadratic_intersections` two quadratics, and it returns the two points of intersection. Putting it into your code, we have:
```
x1 = np.linspace(0, 0.4, 100)
y1 = -100 * x1 + 100
plt.figure(figsize = (7,7))
plt.subplot(111)
plt.plot(x1, y1, linestyle = '-.', linewidth = 0.5, color = 'black')
for i in range(5):
x_val = np.linspace(x[0, i] - 0.05, x[-1, i] + 0.05, 100)
poly = np.polyfit(x[:, i], y[:, i], deg=2)
y_int = np.polyval(poly, x_val)
plt.plot(x[:, i], y[:, i], linestyle = '', marker = 'o')
plt.plot(x_val, y_int, linestyle = ':', linewidth = 0.25, color = 'black')
ix = quadratic_intersections(poly, [0, -100, 100])
plt.scatter(*ix, marker='x', color='black', s=40, linewidth=2)
plt.xlabel('X')
plt.ylabel('Y')
plt.xlim([0,.35])
plt.ylim([40,110])
plt.show()
```
This makes the following figure:

Now, if you don't know that the functions you are dealing with are polynomials, you can use the optimization tools in [scipy.optimize](http://docs.scipy.org/doc/scipy/reference/optimize.html) to find the root. For example:
```
import scipy.optimize
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.style
matplotlib.style.use('fivethirtyeight')
%matplotlib inline
f = lambda x: np.cos(x) - x
g = np.sin
h = lambda x: f(x) - g(x)
x = np.linspace(0,3,100)
plt.plot(x, f(x), zorder=1)
plt.plot(x, g(x), zorder=1)
x_int = scipy.optimize.fsolve(h, 1.0)
y_int = f(x_int)
plt.scatter(x_int, y_int, marker='x', s=150, zorder=2,
linewidth=2, color='black')
plt.xlim([0,3])
plt.ylim([-4,2])
```
Which plots:
 |
Pip install -e packages don't appear in Docker | 29,905,909 | 4 | 2015-04-27T21:04:37Z | 30,135,576 | 9 | 2015-05-09T03:18:03Z | [
"python",
"docker",
"pip",
"docker-compose"
] | I have a `requirements.txt` file containing, amongst others:
```
Flask-RQ==0.2
-e git+https://token:[email protected]/user/repo.git#egg=repo
```
When I try to build a Docker container using Docker Compose, it downloads both packages, and install them both, but when I do a `pip freeze` there is no sign of the `-e` package. When I try to run the app, it looks as if this package hasn't been installed. Here's the relevant output from the build:
```
Collecting Flask-RQ==0.2 (from -r requirements.txt (line 3))
Downloading Flask-RQ-0.2.tar.gz
Obtaining repo from git+https://token:[email protected]/user/repo.git#egg=repo (from -r requirements.txt (line 4))
Cloning https://token:[email protected]/user/repo.git to ./src/repo
```
And here's my `Dockerfile`:
```
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt
COPY . /usr/src/app
```
I find this situation very strange and would appreciate any help. | I ran into a similar issue, and one possible way that the problem can appear is from:
```
WORKDIR /usr/src/app
```
being set before `pip install`. pip will create the `src/` directory (where the package is installed) inside of the WORKDIR. Now all of this shouldn't be an issue since your app files, when copied over, should not overwrite the `src/` directory.
However, you might be *mounting a volume* to `/usr/src/app`. When you do that, you'll overwrite the `/usr/src/app/src` directory and then your package will not be found.
So one fix is to move WORKDIR after the `pip install`. So your `Dockerfile` will look like:
```
FROM python:2.7
RUN mkdir -p /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r /usr/src/app/requirements.txt
COPY . /usr/src/app
WORKDIR /usr/src/app
```
This fixed it for me. Hopefully it'll work for you. |
Microsoft Visual C++ Compiler for Python 3.4 | 29,909,330 | 26 | 2015-04-28T02:38:09Z | 29,910,249 | 12 | 2015-04-28T04:19:00Z | [
"python",
"windows",
"python-3.x",
"compilation"
] | I know that there is a ["Microsoft Visual C++ Compiler for Python 2.7"](http://www.microsoft.com/en-gb/download/details.aspx?id=44266), but is there, currently or planned, a Microsoft Visual C++ Compiler for Python 3.4, or even a Microsoft Visual C++ Compiler for Python 3.x for that matter? It would be supremely beneficial if I didn't have to install a different version of Visual Studio on my entire lab. | Unfortunately, to be able to use extension modules provided by others, you'll be forced to use the official compiler that was used to compile Python. These are:
* Visual Studio 2008 for Python 2.7.
See: <https://docs.python.org/2.7/using/windows.html#compiling-python-on-windows>
* Visual Studio 2010 for Python 3.4.
See: <https://docs.python.org/3.4/using/windows.html#compiling-python-on-windows>
Alternatively, you can use MinGW to compile extensions in a way that doesn't depend on the Visual Studio version.
See: <https://docs.python.org/2/install/#gnu-c-cygwin-MinGW> or <https://docs.python.org/3.4/install/#gnu-c-cygwin-mingw>
This allows you to have one compiler to build your extensions for both versions of Python, Python 2.x and Python 3.x. |
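If you go the MinGW route, you can tell distutils to use it per build (`python setup.py build_ext --compiler=mingw32`) or permanently via a `setup.cfg` next to your `setup.py`; a minimal sketch:

```
[build]
compiler = mingw32
```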
Reading first n lines of a CSV into a dictionary | 29,911,507 | 3 | 2015-04-28T06:04:50Z | 29,911,540 | 8 | 2015-04-28T06:07:11Z | [
"python",
"csv",
"python-3.x",
"dictionary"
] | I have a CSV file I'd like to read into a dictionary for subsequent insertion into a MongoDB collection entitled projects.
I accomplished this with the following:
```
with open('opendata_projects.csv') as f:
records = csv.DictReader(f)
projects.insert(records)
```
However, I found my poor sandbox account couldn't hold all the data. In turn, I'd like to read in the first *n* lines so I can play around with the data and get used to working with MongoDB.
First I checked the docs for the `csv.DictReader` function:
> class csv.DictReader(csvfile, fieldnames=None, restkey=None, restval=None, dialect='excel', \*args, \*\*kwds)
But the function doesn't seem to allow for entering in the number of rows I'd like as a parameter.
So I attempted to do so by writing the following code:
```
with open('opendata_projects.csv') as f:
records = csv.DictReader(f)
for i in records:
if i <= 100:
projects.insert(i)
```
Which was followed by the error:
```
TypeError: unorderable types: dict() <= int()
```
This prompted me to look into dictionaries further, and I found they are unordered. Nevertheless, it seems an example from the Python [csv docs](https://docs.python.org/2/library/csv.html#csv-fmt-params) suggests I can iterate with `csv.DictReader`:
```
with open('names.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
print(row['first_name'], row['last_name'])
```
Is there a way to accomplish what I'd like to do by using these functions? | You can use [`itertools.islice`](https://docs.python.org/3/library/itertools.html#itertools.islice), like this
```
import csv, itertools
with open('names.csv') as csvfile:
for row in itertools.islice(csv.DictReader(csvfile), 100):
print(row['first_name'], row['last_name'])
```
`islice` will create an iterator from the iterable object you pass, and it will let you iterate up to the limit you pass as the second parameter.
---
Apart from that, if you want to count yourself, you can use the [`enumerate`](https://docs.python.org/3/library/functions.html#enumerate) function, like this
```
for index, row in enumerate(csv.DictReader(csvfile)):
if index >= 100:
break
print(row['first_name'], row['last_name'])
``` |
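Applied back to the original goal, you can wrap the slice in a small helper and hand the rows to `insert_many` (`projects` below is the hypothetical pymongo collection from the question):

```python
import csv
import itertools

def first_n_records(path, n):
    """Read at most the first n data rows of a CSV file as dicts."""
    with open(path) as f:
        return list(itertools.islice(csv.DictReader(f), n))

# records = first_n_records('opendata_projects.csv', 100)
# projects.insert_many(records)  # 'projects' = hypothetical pymongo collection
```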
How to get meaningful network interface names instead of GUIDs with netifaces under Windows? | 29,913,516 | 6 | 2015-04-28T07:53:08Z | 29,918,755 | 10 | 2015-04-28T11:54:26Z | [
"python",
"python-2.7"
] | I use the `netifaces` module.
```
import netifaces
print netifaces.interfaces()
```
but this shows the result below:
```
['{CDC97813-CC28-4260-BA1E-F0CE3081DEC7}',
'{846EE342-7039-11DE-9D20-806E6F6E6963}',
'{A51BA5F0-738B-4405-975F-44E67383513F}',
'{A646FA85-2EC6-4E57-996E-96E1B1C5CD59}',
'{B5DC7787-26DC-4540-8424-A1D5598DC175}']
```
I want to get a "friendly" interface name like "Local Area Connection" in Windows.
How can I get that? | It looks like `netifaces` leaves it up to us to pull the information out of the Windows Registry. The following functions work for me under Python 3.4 on Windows 8.1.
To get the connection name ...
```
import netifaces as ni
import winreg as wr
from pprint import pprint
def get_connection_name_from_guid(iface_guids):
iface_names = ['(unknown)' for i in range(len(iface_guids))]
reg = wr.ConnectRegistry(None, wr.HKEY_LOCAL_MACHINE)
reg_key = wr.OpenKey(reg, r'SYSTEM\CurrentControlSet\Control\Network\{4d36e972-e325-11ce-bfc1-08002be10318}')
for i in range(len(iface_guids)):
try:
reg_subkey = wr.OpenKey(reg_key, iface_guids[i] + r'\Connection')
iface_names[i] = wr.QueryValueEx(reg_subkey, 'Name')[0]
except FileNotFoundError:
pass
return iface_names
x = ni.interfaces()
pprint(get_connection_name_from_guid(x))
```
.. which on my machine produces:
```
['Local Area Connection* 12',
'Bluetooth Network Connection',
'Wi-Fi',
'Ethernet',
'VirtualBox Host-Only Network',
'(unknown)',
'isatap.{4E4150B0-643B-42EA-AEEA-A14FBD6B1844}',
'isatap.{BB05D283-4CBF-4514-B76C-7B7EBB2FC85B}']
```
To get the driver name ...
```
import netifaces as ni
import winreg as wr
from pprint import pprint
def get_driver_name_from_guid(iface_guids):
iface_names = ['(unknown)' for i in range(len(iface_guids))]
reg = wr.ConnectRegistry(None, wr.HKEY_LOCAL_MACHINE)
reg_key = wr.OpenKey(reg, r'SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}')
for i in range(wr.QueryInfoKey(reg_key)[0]):
subkey_name = wr.EnumKey(reg_key, i)
try:
reg_subkey = wr.OpenKey(reg_key, subkey_name)
guid = wr.QueryValueEx(reg_subkey, 'NetCfgInstanceId')[0]
try:
idx = iface_guids.index(guid)
iface_names[idx] = wr.QueryValueEx(reg_subkey, 'DriverDesc')[0]
except ValueError:
pass
except PermissionError:
pass
return iface_names
x = ni.interfaces()
pprint(get_driver_name_from_guid(x))
```
... which gives me:
```
['Microsoft Wi-Fi Direct Virtual Adapter',
'Bluetooth Device (Personal Area Network)',
'Dell Wireless 1395 WLAN Mini-Card',
'Broadcom 440x 10/100 Integrated Controller',
'VirtualBox Host-Only Ethernet Adapter',
'(unknown)',
'Microsoft ISATAP Adapter',
'Microsoft ISATAP Adapter']
``` |
Upgrading from Django 1.6 (with south) to 1.8 doesn't modify 'last_login' on the user table | 29,913,612 | 24 | 2015-04-28T07:57:40Z | 29,921,518 | 20 | 2015-04-28T13:52:36Z | [
"python",
"mysql",
"django",
"django-south",
"django-1.8"
] | I have upgraded from Django 1.6.5 (with south migrations) to Django 1.8. I have followed the instructions here:
<https://docs.djangoproject.com/en/1.8/topics/migrations/#upgrading-from-south>
So, I remove South, delete my previous migrations and run `python manage.py makemigrations` which makes a new migration file. Then I run `python manage.py migrate --fake-initial` to fake the initial migration. Then I run `python manage.py migrate`.
It all runs fine with no errors.
I have a custom user model which inherits `AbstractBaseUser`. In Django 1.8 it seems there is a change to the `last_login` field where it is now able to accept a `null` value (<https://docs.djangoproject.com/fr/1.8/ref/contrib/auth/#django.contrib.auth.models.User.last_login>).
The problem I have, is that the migration does not change the `last_login` field in the database table to accept null values. As a result, when I try and save I get a `IntegrityError` saying `(1048, "Column 'last_login' cannot be null")`.
How can I fix this? Do I need to manually change the field in the database, or is there a way to fix it with migrations?
**Edit**
When I run `python manage.py migrate --fake-initial` the output is this:
Running migrations:
```
Rendering model states... DONE
Applying contenttypes.0001_initial... FAKED
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... FAKED
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying core.0001_initial... FAKED
Applying admin.0001_initial... FAKED
Applying sessions.0001_initial... FAKED
```
However, when I look in the database I still see `user_last_login` as not-null. | As noted in the [1.8 release notes](https://docs.djangoproject.com/en/1.8/releases/1.8/#abstractuser-last-login-allows-null-values):
> If you are using a custom user model that inherits from `AbstractUser`, you'll need to run `makemigrations` and generate a migration for your app that contains that model. |
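The generated migration will contain an `AlterField` along these lines — a sketch only; the app label, dependency, and model name here are illustrative, not from the question:

```python
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),  # hypothetical app label / migration name
    ]

    operations = [
        migrations.AlterField(
            model_name='myuser',  # hypothetical custom user model
            name='last_login',
            field=models.DateTimeField(blank=True, null=True,
                                       verbose_name='last login'),
        ),
    ]
```

A subsequent `python manage.py migrate` should then issue the `ALTER TABLE` that makes `last_login` nullable in MySQL.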
How to write a list to a file in python | 29,915,158 | 2 | 2015-04-28T09:13:08Z | 29,915,199 | 10 | 2015-04-28T09:14:52Z | [
"python",
"list"
] | I have a program that encrypts the contents of a file into cipher text. I want the program to write the ciphertext, that Is in a list, to a file.
The part of my code I need help with is:
```
for char in encryptFile:
cipherTextList = []
if char == (" "):
print(" ",end=" ")
else:
cipherText = (ord(char)) + offsetFactor
if cipherText > 126:
cipherText = cipherText - 94
cipherText = (chr(cipherText))
cipherTextList.append(cipherText)
for cipherText in cipherTextList:
print (cipherText,end=" ")
with open ("newCipherFile.txt","w") as cFile:
cFile.writelines(cipherTextList)
```
The whole program runs smoothly, however the file that is called "newCipherFile.txt" only has one character in it.
I think this has something to do with the location of the empty list "cipherTextList = []", however I have tried moving this list out of the for loop, into the function, but when I print it the part that prints the ciphertext is in an infinite loop and prints the normal text over and over again.
Any help would be lovely. | You keep overwriting opening the file with `w` so you only ever see the very last values, use `a` to append:
```
with open("newCipherFile.txt","a") as cFile:
```
Or a better idea so to open it outside the loop once:
```
with open("newCipherFile.txt","w") as cFile:
for char in encryptFile:
cipherTextList = []
............
``` |
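Putting it together, a minimal sketch — the `offsetFactor` value and input string are made up for illustration — that also builds the list once, outside the loop, so `writelines` sees every character:

```python
offsetFactor = 3               # hypothetical shift value
encryptFile = "hello world"    # stands in for the file contents

cipherTextList = []            # created once, before the loop
for char in encryptFile:
    if char == " ":
        cipherTextList.append(" ")
    else:
        code = ord(char) + offsetFactor
        if code > 126:         # wrap around printable ASCII
            code -= 94
        cipherTextList.append(chr(code))

with open("newCipherFile.txt", "w") as cFile:  # opened once, after the loop
    cFile.writelines(cipherTextList)
```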
Change user agent for selenium driver | 29,916,054 | 12 | 2015-04-28T09:52:22Z | 29,966,769 | 33 | 2015-04-30T11:40:17Z | [
"python",
"selenium",
"http-headers",
"user-agent"
] | I have the following code in `Python`:
```
from selenium.webdriver import Firefox
from contextlib import closing
with closing(Firefox()) as browser:
browser.get(url)
```
I would like to print the user-agent HTTP header and
possibly change it. Is it possible? | There is no way in Selenium to read the request or response headers. You could do it by instructing your browser to connect through a proxy that records this kind of information.
### Setting the User Agent in Firefox
The usual way to change the user agent for Firefox is to set the variable `"general.useragent.override"` in your Firefox profile. Note that this is independent from Selenium.
You can direct Selenium to use a profile different from the default one, like this:
```
from selenium import webdriver
profile = webdriver.FirefoxProfile()
profile.set_preference("general.useragent.override", "whatever you want")
driver = webdriver.Firefox(profile)
```
### Setting the User Agent in Chrome
With Chrome, what you want to do is use the `user-agent` command line option. Again, this is not a Selenium thing. You can invoke Chrome at the command line with `chrome --user-agent=foo` to set the agent to the value `foo`.
With Selenium you set it like this:
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
opts = Options()
opts.add_argument("user-agent=whatever you want")
driver = webdriver.Chrome(chrome_options=opts)
```
Both methods above were tested and found to work. I don't know about other browsers.
### Getting the User Agent
Selenium does not have methods to query the user agent from an instance of `WebDriver`. Even in the case of Firefox, you cannot discover the default user agent by checking what `general.useragent.override` would be if not set to a custom value. (This setting does not *exist* before it is set to some value.)
Once the browser is started, however, you can get the user agent by executing:
```
agent = driver.execute_script("return navigator.userAgent")
```
The `agent` variable will contain the user agent. |
Why does Python allow abstract methods to have code? | 29,917,870 | 2 | 2015-04-28T11:13:52Z | 29,918,032 | 7 | 2015-04-28T11:19:56Z | [
"python",
"abstract"
] | Why does python allow one to have code inside an abstract method? I know we can invoke that code through super, but I am not able to think of reasons why would I want my abstract method to have code in it. | This is explained in the [`abc` module docs](https://docs.python.org/3/library/abc.html#abc.abstractmethod):
> Note: Unlike Java abstract methods, these abstract methods may have an implementation. This implementation can be called via the `super()` mechanism from the class that overrides it. **This could be useful as an end-point for a super-call in a framework that uses cooperative multiple-inheritance.**
(Emphasis added.)
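A small sketch of the pattern the docs describe — the abstract method carries shared logic that subclasses reach through `super()` (the class and method names here are invented for illustration):

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    @abstractmethod
    def close(self):
        # shared cleanup that every concrete subclass can reuse
        return ["base cleanup"]

class DiskStorage(Storage):
    def close(self):
        # do subclass-specific work, then end the super-call chain
        # in the abstract method's body
        return ["flush to disk"] + super().close()

print(DiskStorage().close())   # ['flush to disk', 'base cleanup']
```

`Storage` itself still can't be instantiated — having a body doesn't make the method any less abstract.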
It's also explained in the PEP that introduced ABCs, and you can follow links from the PEP to python-dev and -ideas threads that discussed the decision in more detail. (That's one of the advantages of a language being developed out in the wide-open; for anything added after about 2.2, you don't have to wonder about questions like this, you can just look up the whole discussion.) |
Find the column name which has maximum value for each row [pandas] | 29,919,306 | 12 | 2015-04-28T12:18:57Z | 29,919,489 | 21 | 2015-04-28T12:25:57Z | [
"python",
"pandas",
"max"
] | I have a dataframe like this one:
```
In [7]:
frame.head()
Out[7]:
Communications and Search Business General Lifestyle
0 0.745763 0.050847 0.118644 0.084746
0 0.333333 0.000000 0.583333 0.083333
0 0.617021 0.042553 0.297872 0.042553
0 0.435897 0.000000 0.410256 0.153846
0 0.358974 0.076923 0.410256 0.153846
```
In here, I want to ask how to get column name which has maximum value for each row, the desired output is like this:
```
In [7]:
frame.head()
Out[7]:
Communications and Search Business General Lifestyle Max
0 0.745763 0.050847 0.118644 0.084746 Communications
0 0.333333 0.000000 0.583333 0.083333 Business
0 0.617021 0.042553 0.297872 0.042553 Communications
0 0.435897 0.000000 0.410256 0.153846 Communications
0 0.358974 0.076923 0.410256 0.153846 Business
``` | You can use [`idxmax()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html) to find the column with the greatest value on each row:
```
>>> df.idxmax(axis=1)
0 Communications
1 Business
2 Communications
3 Communications
4 Business
dtype: object
```
To create the new column use `df['Max'] = df.idxmax(axis=1)`. |
Function decorated using functools.wraps raises TypeError with the name of the wrapper. Why? How to avoid? | 29,919,804 | 11 | 2015-04-28T12:40:57Z | 29,919,965 | 7 | 2015-04-28T12:47:37Z | [
"python",
"python-decorators",
"functools"
] | ```
def decorated(f):
@functools.wraps(f)
def wrapper():
return f()
return wrapper
@decorated
def g():
pass
```
`functools.wraps` does its job at preserving the name of `g`:
```
>>> g.__name__
'g'
```
But if I pass an argument to `g`, I get a `TypeError` containing the name of the wrapper:
```
>>> g(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: wrapper() takes no arguments (1 given)
```
Where does this name come from? Where is it preserved? And is there a way to make the exception look like `g() takes no arguments`? | The name comes from the code object; both the function and the code object (containing the bytecode to be executed, among others) contain that name:
```
>>> g.__name__
'g'
>>> g.__code__.co_name
'wrapper'
```
The attribute on the code object is read-only:
```
>>> g.__code__.co_name = 'g'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: readonly attribute
```
You'd have to create a whole new code object to rename that, see [a previous answer of mine](http://stackoverflow.com/questions/29488327/changing-the-name-of-a-generator/29488561#29488561) where I defined a function to do that; using the `rename_code_object()` function on your decorated function:
```
>>> g = rename_code_object(g, 'g')
>>> g(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: g() takes no arguments (1 given)
```
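On Python 3.8+ the same renaming is much easier, because code objects grew a `replace()` method and a function's `__code__` attribute is writable. A sketch (Python 3.11 additionally uses `co_qualname` in these error messages, so we replace that too when it exists):

```python
import functools

def decorated(f):
    @functools.wraps(f)
    def wrapper():
        return f()
    attrs = {'co_name': f.__name__}
    if hasattr(wrapper.__code__, 'co_qualname'):   # Python 3.11+
        attrs['co_qualname'] = f.__qualname__
    # swap in a code object carrying the wrapped function's name
    wrapper.__code__ = wrapper.__code__.replace(**attrs)
    return wrapper

@decorated
def g():
    pass

print(g.__code__.co_name)   # g
```

Calling `g(1)` now raises `TypeError: g() takes 0 positional arguments but 1 was given`.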
Note, however, that this will entirely mask what code was being run! You generally want to see that a decorator wrapper was involved; it is the wrapper that throws the exception, not the original function, after all. |
How to gauss-filter (blur) a floating point numpy array | 29,920,114 | 6 | 2015-04-28T12:53:58Z | 29,920,953 | 10 | 2015-04-28T13:29:52Z | [
"python",
"numpy",
"filtering",
"python-imaging-library"
] | I have got a numpy array `a` of type `float64`. How can I blur this data with a Gauss filter?
I have tried
```
from PIL import Image, ImageFilter
image = Image.fromarray(a)
filtered = image.filter(ImageFilter.GaussianBlur(radius=7))
```
, but this yields `ValueError: 'image has wrong mode'`. (It has mode `F`.)
I could create an image of suitable mode by multiplying `a` with some constant, then rounding to integer. That should work, but I would like to have a more direct way.
(I am using Pillow 2.7.0.) | If you have a two-dimensional numpy array `a`, you can use a Gaussian filter on it directly without using Pillow to convert it to an image first. scipy has a function [`gaussian_filter`](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.filters.gaussian_filter.html) that does the same.
```
from scipy.ndimage.filters import gaussian_filter
blurred = gaussian_filter(a, sigma=7)
``` |
When can a Python object be pickled | 29,922,373 | 10 | 2015-04-28T14:28:46Z | 29,922,428 | 7 | 2015-04-28T14:33:05Z | [
"python",
"multiprocessing",
"pickle"
] | I'm doing a fair amount of parallel processing in Python using the multiprocessing module. I know certain objects CAN be pickle (thus passed as arguments in multi-p) and others can't. E.g.
```
class abc():
pass
a=abc()
pickle.dumps(a)
'ccopy_reg\n_reconstructor\np1\n(c__main__\nabc\np2\nc__builtin__\nobject\np3\nNtRp4\n.'
```
But I have some larger classes in my code (a dozen methods, or so), and this happens:
```
a=myBigClass()
pickle.dumps(a)
Traceback (innermost last):
File "<stdin>", line 1, in <module>
File "/usr/apps/Python279/python-2.7.9-rhel5-x86_64/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle file objects
```
It's not a file object, but at other times, I'll get other messages that say basically: "I can't pickle this".
So what's the rule? Number of bytes? Depth of hierarchy? Phase of the moon? | From the [docs](https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled):
> The following types can be pickled:
>
> * None, True, and False
> * integers, long integers, floating point numbers, complex numbers
> * normal and Unicode strings
> * tuples, lists, sets, and dictionaries containing only picklable objects
> * functions defined at the top level of a module
> + built-in functions defined at the top level of a module
> * classes that are defined at the top level of a module
> * instances of such classes whose `__dict__` or the result of calling `__getstate__()` is picklable (see section The pickle protocol for details).
>
> Attempts to pickle unpicklable objects will raise the PicklingError
> exception; when this happens, an unspecified number of bytes may have
> already been written to the underlying file. Trying to pickle a highly
> recursive data structure may exceed the maximum recursion depth, a
> RuntimeError will be raised in this case. You can carefully raise this
> limit with sys.setrecursionlimit(). |
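The last bullet is the escape hatch for your `file` problem: teach the class to drop the unpicklable handle before pickling and rebuild it afterwards. A sketch (the class and attribute names are invented):

```python
import os
import pickle
import tempfile

class LogHolder:
    """Holds an open file handle -- normally unpicklable."""
    def __init__(self, path):
        self.path = path
        self.f = open(path, 'a')

    def __getstate__(self):
        state = self.__dict__.copy()
        del state['f']                   # drop the file object before pickling
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.f = open(self.path, 'a')    # recreate it after unpickling

path = os.path.join(tempfile.mkdtemp(), 'log.txt')
clone = pickle.loads(pickle.dumps(LogHolder(path)))
print(clone.path == path, clone.f.closed)   # True False
```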
How to read in IRAF multispec spectra? | 29,923,315 | 4 | 2015-04-28T15:17:00Z | 29,923,316 | 8 | 2015-04-28T15:17:00Z | [
"python",
"spectrum",
"astropy"
] | I have a spectrum in a fits file that I generated with Iraf. The wavelength axis is encoded in the header as:
```
WAT0_001= 'system=multispec'
WAT1_001= 'wtype=multispec label=Wavelength units=angstroms'
WAT2_001= 'wtype=multispec spec1 = "1 1 2 1. 2.1919422441886 4200 0. 452.53 471'
WAT3_001= 'wtype=linear'
WAT2_002= '.60 1. 0. 3 3 1. 4200.00000000001 1313.88904209266 1365.65012876239 '
WAT2_003= '1422.67911152069 1479.0560707956 1535.24082980747 1584.94609332243'
```
Is there an easy way to load this into python? | I have been using [this](https://github.com/kgullikson88/General/blob/master/readmultispec.py) code, which was given to me by Rick White. However, the [specutils](http://specutils.readthedocs.org/en/latest/specutils/read_fits.html) package is probably the better way to do it:
```
from specutils.io import read_fits
spectra_list = read_fits.read_fits_spectrum1d('mymultispec.fits')
print spectra_list[0]
Spectrum1D([ 338.06109619, 395.59234619, 326.0012207 , ...,
660.0098877 , 686.54498291, 689.58374023])
print spectra_list[1].dispersion
<Quantity [ 8293.44875263, 8293.40459999, 8293.36044556,...,
8166.53073537, 8166.48250242, 8166.43426803] Angstrom>
``` |
Why variable = object doesn't work like variable = number | 29,926,485 | 17 | 2015-04-28T17:49:20Z | 29,926,513 | 16 | 2015-04-28T17:51:12Z | [
"python",
"python-2.7",
"variables",
"object",
"python-3.x"
] | These variable assignments work as I expect:
```
>>> a = 3
>>> b = a
>>> print(a, b)
(3, 3)
>>> b=4
>>> print(a, b)
(3, 4)
```
However, these assignments behave differently:
```
>>> class number():
... def __init__(self, name, number):
... self.name = name
... self.number = number
...
>>> c = number("one", 1)
>>> d = c
>>> print(c.number, d.number)
(1, 1)
>>> d.number = 2
>>> print(c.number, d.number)
(2, 2)
```
Why is `c` is same as `d`, unlike in `(a, b)` example? How can I do something like in `(a, b)` in `(c, d)` classes example? That is, copy the object and then change one part of it (that won't affect the object that I borrowed properties from)? | These lines:
```
c = number("one", 1)
d = c
```
...are effectively:
* Create a new instance of `number` and assign it to `c`
* Assign the existing reference called `c` to a new variable `d`
You haven't changed or modified anything about `c`; `d` is another name that points to the same instance.
Without cloning the instance or creating a new instance, you can't do anything similar to how the primitive int is behaving.
---
To correct a bit of information, the explanation above is [rather simplified](http://www.jeffknupp.com/blog/2012/11/13/is-python-callbyvalue-or-callbyreference-neither/) and a bit incomplete [in its nature](http://stackoverflow.com/q/986006/1079354), although it *mostly* describes what's going on at 10,000 feet.
For a closer look, we have to realize a few things about Python's variables, or "names", and how they interact with this program.
As mentioned above, [you have the notion of "names" and "bindings"](http://www.jeffknupp.com/blog/2013/02/14/drastically-improve-your-python-understanding-pythons-execution-model/), which are pretty straightforward to reason at:
```
a = 3
b = a
```
In this context, `a` is a name, and `b` is a binding to `a`. We haven't modified or changed anything about `a`.
As noted before, there are two types of data in Python: mutable and immutable. A name that points to immutable data, such as primitives and tuples, can be reassigned without any ill effect to any other bindings present on it, because no state is changing with respect to the binding.
This is why this reassignment does what we would expect it to:
```
print(a, b)
b = 4
print(a, b)
```
The result of `b = 4` is that `b` is now pointing at a new copy of an integer, the value 4.
Recall that I did mention tuples as immutable data. You can't change the binding of a particular entity in your tuple...
```
t = ('foo', 'bar')
t[0] = 'baz' # illegal
```
...but you *can* have mutable data structures as part of those bindings.
```
t = ([1, 2, 3], 'bar')
t[0].append([4, 5, 6]) # ([1, 2, 3, [4, 5, 6]], 'bar')
```
---
So where does that leave our example?
```
c = number("one", 1)
d = c
```
`number` is a **mutable** type which is named as `c`, and its values can be changed at will between multiple different bindings to `c`.
Effectively, we've got a name and a binding to a name:
* We have a new instance of `number` and refer to it by the name `c`.
* Bind the reference `c` to another name `d`.
Again, nothing's changed about `c`, but it can be referenced through another name.
Unlike with the immutable data, when we reassign the value of `d.number`, we're reassigning the same binding that `c` is aware of:
```
>>> id(d.number)
36696408
>>> id(c.number)
36696408
```
This is why you **require** either a new instance or a copy. You have to refer to a different instance of `number`. With this simple binding, you're not going to accomplish that.
```
from copy import copy
c = number("one", 1)
d = copy(c)
id(c) # 140539175695784
id(d) # 140539175695856
``` |
Why variable = object doesn't work like variable = number | 29,926,485 | 17 | 2015-04-28T17:49:20Z | 29,926,592 | 7 | 2015-04-28T17:55:46Z | [
"python",
"python-2.7",
"variables",
"object",
"python-3.x"
] | These variable assignments work as I expect:
```
>>> a = 3
>>> b = a
>>> print(a, b)
(3, 3)
>>> b=4
>>> print(a, b)
(3, 4)
```
However, these assignments behave differently:
```
>>> class number():
... def __init__(self, name, number):
... self.name = name
... self.number = number
...
>>> c = number("one", 1)
>>> d = c
>>> print(c.number, d.number)
(1, 1)
>>> d.number = 2
>>> print(c.number, d.number)
(2, 2)
```
Why is `c` is same as `d`, unlike in `(a, b)` example? How can I do something like in `(a, b)` in `(c, d)` classes example? That is, copy the object and then change one part of it (that won't affect the object that I borrowed properties from)? | You are focusing on the fact that these two pairs of lines are the same (both use plain `=`):
```
# one
a = 3
b = a
#two
c = number("one", 1)
d = c
```
What you're missing is that these two lines are different:
```
# one
b = 4
# two
d.number = 2
```
The reason they aren't the same is that `d.number` has a dot in it, and `b` does not.
Setting `d = c` *does* have the same effect as setting `b = a`. The difference is that doing `d.number = 2` is *not* the same as doing `b = 4`. When you do `b = 4`, you assign a new object to the name `b`. When you do `d.number = 2`, you modify the object that is already referred to by the name `d`, without assigning a new object. If you change your second example to `d = 2`, using plain assignment instead of attribute assignment, you will see that `c` is unaffected, just as `a` is unaffected in your first example.
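The contrast can be seen directly with the question's own class:

```python
class number:
    def __init__(self, name, num):
        self.name = name
        self.number = num

c = number("one", 1)
d = c

d = number("one", 2)   # bare-name assignment: rebinds d only
print(c.number)        # 1  -- c is untouched

d = c
d.number = 2           # attribute assignment: mutates the shared object
print(c.number)        # 2
```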
Although it can be confusing, `=` does not always mean the same thing in all contexts in Python. Assigning to a bare name (`blah = ...`) is not the same as assigning to an attribute (`blah.attr = ...`) or an item (`blah[item] = ...`). To understand what an `=` means, you need to look at the left-hand side (the assignment target) to see whether it is a bare name or some sort of expression.
As for how to get the effect of your a/b example in your c/d example, it depends on exactly what effect you want. If you want to wind up with `d` and `c` pointing to different objects that have different `number` attributes, you could do:
```
d = number("one", 2)
```
Notice that this is now parallel to `b = 4` (because it uses assignment to a bare name). There are also other more complex solutions involving making a copy of the existing `d` object; have a look at the [copy](https://docs.python.org/2/library/copy.html) module in the standard library. Exactly what to do depends on what you're trying to accomplish with the code. |
Why variable = object doesn't work like variable = number | 29,926,485 | 17 | 2015-04-28T17:49:20Z | 29,926,864 | 9 | 2015-04-28T18:10:29Z | [
"python",
"python-2.7",
"variables",
"object",
"python-3.x"
] | These variable assignments work as I expect:
```
>>> a = 3
>>> b = a
>>> print(a, b)
(3, 3)
>>> b=4
>>> print(a, b)
(3, 4)
```
However, these assignments behave differently:
```
>>> class number():
... def __init__(self, name, number):
... self.name = name
... self.number = number
...
>>> c = number("one", 1)
>>> d = c
>>> print(c.number, d.number)
(1, 1)
>>> d.number = 2
>>> print(c.number, d.number)
(2, 2)
```
Why is `c` is same as `d`, unlike in `(a, b)` example? How can I do something like in `(a, b)` in `(c, d)` classes example? That is, copy the object and then change one part of it (that won't affect the object that I borrowed properties from)? | I didn't see that anyone provided details on how to make these two cases work the same by copying the object instead of just assigning a new reference to the same object.
```
import copy
c = number("one", 1)
d = c
e = copy.copy(c)
print(c.number, d.number, e.number)
d.number = 2
e.number = 5
print(c.number, d.number, e.number)
```
This will give you:
```
1 1 1
2 2 5
``` |
Why variable = object doesn't work like variable = number | 29,926,485 | 17 | 2015-04-28T17:49:20Z | 29,930,723 | 10 | 2015-04-28T22:00:56Z | [
"python",
"python-2.7",
"variables",
"object",
"python-3.x"
] | These variable assignments work as I expect:
```
>>> a = 3
>>> b = a
>>> print(a, b)
(3, 3)
>>> b=4
>>> print(a, b)
(3, 4)
```
However, these assignments behave differently:
```
>>> class number():
... def __init__(self, name, number):
... self.name = name
... self.number = number
...
>>> c = number("one", 1)
>>> d = c
>>> print(c.number, d.number)
(1, 1)
>>> d.number = 2
>>> print(c.number, d.number)
(2, 2)
```
Why is `c` is same as `d`, unlike in `(a, b)` example? How can I do something like in `(a, b)` in `(c, d)` classes example? That is, copy the object and then change one part of it (that won't affect the object that I borrowed properties from)? | ## A picture worth a thousand words
```
a = 3
b = a
c = number("one", 1)
d = c
```

---
Step 2…
```
b = 4
d.number = 2
```

You can see why changing `d.number` would also affect `c`.
---
If, before Step 2, you do
```
import copy
d = copy.copy(c)
```
… then `c` and `d` are independent. Changing `d.number` will not affect `c`.
 |
Calculating the averages for each KEY in a Pairwise (K,V) RDD in Spark with Python | 29,930,110 | 14 | 2015-04-28T21:18:11Z | 29,930,162 | 19 | 2015-04-28T21:21:22Z | [
"python",
"apache-spark",
"aggregate",
"average",
"rdd"
] | I want to share this particular Apache Spark with Python solution because documentation for it is quite poor.
I wanted to calculate the average value of K/V pairs (stored in a Pairwise RDD), by KEY. Here is what the sample data looks like:
```
>>> rdd1.take(10) # Show a small sample.
[(u'2013-10-09', 7.60117302052786),
(u'2013-10-10', 9.322709163346612),
(u'2013-10-10', 28.264462809917358),
(u'2013-10-07', 9.664429530201343),
(u'2013-10-07', 12.461538461538463),
(u'2013-10-09', 20.76923076923077),
(u'2013-10-08', 11.842105263157894),
(u'2013-10-13', 32.32514177693762),
(u'2013-10-13', 26.249999999999996),
(u'2013-10-13', 10.693069306930692)]
```
Now the following code sequence is a **less than optimal** way to do it, but it does work. It is what I was doing before I figured out a better solution. It's not terrible but -- as you'll see in the answer section -- there is a more concise, efficient way.
```
>>> import operator
>>> countsByKey = sc.broadcast(rdd1.countByKey()) # SAMPLE OUTPUT of countsByKey.value: {u'2013-09-09': 215, u'2013-09-08': 69, ... snip ...}
>>> rdd1 = rdd1.reduceByKey(operator.add) # Calculate the numerators (i.e. the SUMs).
>>> rdd1 = rdd1.map(lambda x: (x[0], x[1]/countsByKey.value[x[0]])) # Divide each SUM by it's denominator (i.e. COUNT)
>>> print(rdd1.collect())
[(u'2013-10-09', 11.235365503035176),
(u'2013-10-07', 23.39500642456595),
... snip ...
]
``` | Now a much better way to do this is to use the rdd.aggregateByKey() method. Because that method is so poorly documented in the Apache Spark with Python documentation (which is why I'm writing this), until recently I had been using the above code sequence. But again, it's less efficient, **so don't do it that way unless you need to**.
Here's how to do the same using the rdd.aggregateByKey() method (**recommended**) ...
By KEY, simultaneously calculate the SUM (numerator for the average we want to compute), and COUNT (denominator for the average we want to compute).
```
>>> rdd1 = rdd1.aggregateByKey((0,0), lambda a,b: (a[0] + b, a[1] + 1),
lambda a,b: (a[0] + b[0], a[1] + b[1]))
```
Where the following is true about the meaning of each 'a' and 'b' pair above (just so you can visualize what's happening):
```
First lambda expression for Within-Partition Reduction Step::
a: is a TUPLE that holds: (runningSum, runningCount).
b: is a SCALAR that holds the next Value
Second lambda expression for Cross-Partition Reduction Step::
a: is a TUPLE that holds: (runningSum, runningCount).
b: is a TUPLE that holds: (nextPartitionsSum, nextPartitionsCount).
```
Finally, calculate the average for each KEY, and collect results.
```
>>> finalResult = rdd1.mapValues(lambda v: v[0]/v[1]).collect()
>>> print(finalResult)
[(u'2013-09-09', 11.235365503035176),
(u'2013-09-01', 23.39500642456595),
(u'2013-09-03', 13.53240060820617),
(u'2013-09-05', 13.141148418977687),
... snip ...
]
```
I hope this aggregateByKey() illustration will help others. |
Why is `if` so much faster when checked before a statement than after a statement? | 29,931,471 | 11 | 2015-04-28T23:05:26Z | 29,931,705 | 7 | 2015-04-28T23:30:08Z | [
"python",
"performance",
"python-3.x",
"performance-testing"
] | Here's an example of what I mean:
```
s = """
if x > 10:
x -= 10
else:
x = 0
"""
import timeit
print(timeit.timeit(s, setup="x=5", number=99999999))
```
Outputs approximately 3 seconds on my computer, regardless of the setup (`x=5` vs `x=15`, no difference)
---
If I were to use much shorter code, one that first decreases `x -= 10` and only then checks if `x < 0`, I will get much worse results:
```
s = """
x -= 10
if x < 0:
x = 0
"""
import timeit
print(timeit.timeit(s, setup="x=5", number=99999999))
```
It outputs around 6 seconds, again regardless whether the initial value of `x` was `5` or `15`.
---
I understand that it would be slower when `x < 10` since we'd first call `x -= 10` and then set `x = 0` instead of simply setting `x` once.
The thing is, 99% of the time the `x`'s initial value in my program is set to a number much higher than 10, so I thought I'd use the shorter version since most of the time I should see no difference in performance.
However, there's a huge difference in performance even when `x > 10`, why's this? | Your premise is wrong. `setup` only gets run once for the entire `timeit`. If you make sure that `x` stays above `10` then the symptoms disappear:
```
>>> s1 = """
... if x > 10:
... x -= 10
... else:
... x = 0
... """
>>> s2 = """
... x -= 10
... if x < 0:
... x = 0
... """
>>> import timeit
>>> print(timeit.timeit(s1, setup="x=1000000000", number=99999999))
8.934118068675566
>>> print(timeit.timeit(s2, setup="x=1000000000", number=99999999))
8.744505329313448
``` |
Reduce by key in python | 29,933,189 | 4 | 2015-04-29T02:15:23Z | 29,933,270 | 7 | 2015-04-29T02:25:46Z | [
"python",
"reduce"
] | I'm trying to think through the most efficient way to do this in python.
Suppose I have a list of tuples:
```
[('dog',12,2), ('cat',15,1), ('dog',11,1), ('cat',15,2), ('dog',10,3), ('cat',16,3)]
```
And suppose I have a function which takes two of these tuples and combines them:
```
def my_reduce(obj1, obj2):
return (obj1[0],max(obj1[1],obj2[1]),min(obj1[2],obj2[2]))
```
How do I perform an efficient reduce by 'key' where the key here could be the first value, so the final result would be something like:
```
[('dog',12,1), ('cat',16,1)]
``` | I don't think `reduce` is a good tool for this job, because you will have to first use itertools or similar to group the list by the key. Otherwise you will be comparing `cats` and `dogs` and all hell will break loose!
Instead just a simple loop is fine:
```
>>> my_list = [('dog',12,2), ('cat',15,1), ('dog',11,1), ('cat',15,2)]
>>> output = {}
>>> for animal, high, low in my_list:
... try:
... prev_high, prev_low = output[animal]
... except KeyError:
... output[animal] = high, low
... else:
... output[animal] = max(prev_high, high), min(prev_low, low)
```
Then if you want the original format back:
```
>>> output = [(k,) + v for k, v in output.items()]
>>> output
[('dog', 12, 1), ('cat', 15, 1)]
```
Note this will destroy the ordering from the original list. If you want to preserve the order the keys first appear in, initialise output with an `OrderedDict` instead. |
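For completeness, if you do want to reuse the question's `my_reduce` with `reduce`, here is a sketch of the group-first approach alluded to above (note that `sorted` changes the order the keys appear in):

```python
from functools import reduce
from itertools import groupby
from operator import itemgetter

def my_reduce(obj1, obj2):
    # combine two records with the same key: max of field 1, min of field 2
    return (obj1[0], max(obj1[1], obj2[1]), min(obj1[2], obj2[2]))

data = [('dog', 12, 2), ('cat', 15, 1), ('dog', 11, 1),
        ('cat', 15, 2), ('dog', 10, 3), ('cat', 16, 3)]

# groupby only groups *consecutive* items, so sort by key first
grouped = [reduce(my_reduce, group)
           for _, group in groupby(sorted(data), key=itemgetter(0))]
print(grouped)  # [('cat', 16, 1), ('dog', 12, 1)]
```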
Reduce by key in python | 29,933,189 | 4 | 2015-04-29T02:15:23Z | 29,933,308 | 7 | 2015-04-29T02:30:24Z | [
"python",
"reduce"
] | I'm trying to think through the most efficient way to do this in python.
Suppose I have a list of tuples:
```
[('dog',12,2), ('cat',15,1), ('dog',11,1), ('cat',15,2), ('dog',10,3), ('cat',16,3)]
```
And suppose I have a function which takes two of these tuples and combines them:
```
def my_reduce(obj1, obj2):
return (obj1[0],max(obj1[1],obj2[1]),min(obj1[2],obj2[2]))
```
How do I perform an efficient reduce by 'key' where the key here could be the first value, so the final result would be something like:
```
[('dog',12,1), ('cat',16,1)]
``` | Alternatively, if you have **pandas** installed:
```
import pandas as pd
l = [('dog',12,2), ('cat',15,1), ('dog',11,1), ('cat',15,2), ('dog',10,3), ('cat',16,3)]
pd.DataFrame(data=l, columns=['animal', 'm', 'n']).groupby('animal').agg({'m':'max', 'n':'min'})
Out[6]:
m n
animal
cat 16 1
dog 12 1
```
To get the original format:
```
zip(df.index, *df.values.T) # df is the result above
Out[14]: [('cat', 16, 1), ('dog', 12, 1)]
``` |
Virtualenv - Python 3 - Ubuntu 14.04 64 bit | 29,934,032 | 6 | 2015-04-29T03:54:33Z | 35,024,841 | 9 | 2016-01-26T21:58:56Z | [
"python",
"python-3.x",
"pip",
"virtualenv",
"ubuntu-14.04"
] | I am trying to install virtualenv for Python 3 on Ubuntu 64bit 14.04.
I have installed pip for Python3 using:
```
pip3 install virtualenv
```
and everything works fine. Now, though, when I try to use the virtualenv command to actually create the environment, I get an error that it is not installed (I guess because I haven't installed it for Python 2, and that is what it is trying to use)
How do I use the virtualenv for Python 3? I have searched the documentation but can't see where it says what to do. | I had the same issue coming from development environments on OS X where I could create Python 3 virtual environments by simply invoking `virtualenv` and the path to the target directory. You should be able to create a Python 3.x virtual environment in one of two ways:
1. Install `virtualenv` from PyPI as you've done (`$ pip3 install virtualenv`), then by calling it as a module from the command line:
`$ python3 -m virtualenv /path/to/directory`
2. Use the [venv](https://docs.python.org/3.4/library/venv.html) module, which you can install through `apt-get`. (Note that Python 3.3 is when this module was introduced, so this answer assumes you're working with at least that):
`$ sudo apt-get install python3.4-venv`
Then you can set up your virtual environment with
`$ pyvenv-3.4 /path/to/directory`
and activate the environment with
`$ source /path/to/directory/bin/activate`
You might also look at [this post](http://stackoverflow.com/questions/12971443/is-python-package-virtualenv-necessary-when-i-use-python-3-3), which discusses differences between the `venv` module and `virtualenv`. Best of luck! |
Linear Regression on Pandas DataFrame using Sci-kit Learn | 29,934,083 | 4 | 2015-04-29T03:58:18Z | 29,937,049 | 9 | 2015-04-29T07:28:43Z | [
"python",
"pandas",
"scikit-learn",
"dataframe",
"linear-regression"
] | I'm new to Python and trying to perform linear regression using sklearn on a pandas dataframe. This is what I did:
```
data = pd.read_csv('xxxx.csv')
```
After that I got a DataFrame of two columns, let's call them 'c1', 'c2'. Now I want to do linear regression on the set of (c1,c2) so I entered
```
X=data['c1'].values
Y=data['c2'].values
linear_model.LinearRegression().fit(X,Y)
```
which resulted in the following error
```
IndexError: tuple index out of range
```
What's wrong here? Also, I'd like to know how to:
1. visualize the result
2. make predictions based on the result?
I've searched and browsed a large number of sites but none of them seemed to instruct beginners on the proper syntax. Perhaps what's obvious to experts is not so obvious to a novice like myself.
Can you please help? Thank you very much for your time.
PS: I have noticed that a large number of beginner questions were down-voted in stackoverflow. Kindly take into account the fact that things that seem obvious to an expert user may take a beginner days to figure out. Please use discretion when pressing the down arrow lest you'd harm the vibrancy of this discussion community. | Let's assume your csv looks something like:
```
c1,c2
0.000000,0.968012
1.000000,2.712641
2.000000,11.958873
3.000000,10.889784
...
```
I generated the data as such:
```
import numpy as np
from sklearn import datasets, linear_model
import matplotlib.pyplot as plt
length = 10
x = np.arange(length, dtype=float).reshape((length, 1))
y = x + (np.random.rand(length)*10).reshape((length, 1))
```
This data is saved to test.csv (just so you know where it came from, obviously you'll use your own).
```
data = pd.read_csv('test.csv', index_col=False, header=0)
x = data.c1.values
y = data.c2.values
print x # prints: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
```
You need to take a look at the shape of the data you are feeding into `.fit()`.
Here `x.shape = (10,)` but we need it to be `(10, 1)`, see [sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html). Same goes for `y`. So we reshape:
```
x = x.reshape(length, 1)
y = y.reshape(length, 1)
```
Now we create the regression object and then call `fit()`:
```
regr = linear_model.LinearRegression()
regr.fit(x, y)
# plot it as in the example at http://scikit-learn.org/
plt.scatter(x, y, color='black')
plt.plot(x, regr.predict(x), color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
```
See sklearn linear regression [example](http://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html).
 |
In Python, is there a way to sort a list made of lists and tuples, consistently? | 29,934,300 | 3 | 2015-04-29T04:17:54Z | 29,934,338 | 12 | 2015-04-29T04:22:39Z | [
"python"
] | The sorts of `a` and `b` are as expected to me, so why is `c` different? Is there a way to make it consistent with `a` and `b`, without converting everything to either lists or tuples?
```
>>> a = [(1, 0), (0, 0)]
>>> a.sort()
>>> print a
[(0, 0), (1, 0)]
>>>
>>> b = [[1], (0)]
>>> b.sort()
>>> print b
[0, [1]]
>>>
>>> c = [[1, 0], (0, 0)]
>>> c.sort()
>>> print c
[[1, 0], (0, 0)]
>>>
``` | It's possible to convert them only for the purpose of sorting:
```
>>> c = [[1, 0], (0, 0)]
>>> c.sort(key=tuple)
>>> c
[(0, 0), [1, 0]]
```
That being said, a list containing a mix of lists and tuples is a code smell. |
django.db.utils.OperationalError Could not connect to server | 29,937,378 | 5 | 2015-04-29T07:44:59Z | 29,937,718 | 14 | 2015-04-29T08:01:27Z | [
"python",
"django",
"postgresql"
] | I am not sure how to fix this issue
I have no idea why I am getting this error when I try to `runserver`:
```
Performing system checks...
System check identified no issues (0 silenced).
Unhandled exception in thread started by <function wrapper at 0x1085589b0>
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/django/utils/autoreload.py", line 222, in wrapper
fn(*args, **kwargs)
File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 107, in inner_run
self.check_migrations()
File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 159, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/Library/Python/2.7/site-packages/django/db/migrations/executor.py", line 17, in __init__
self.loader = MigrationLoader(self.connection)
File "/Library/Python/2.7/site-packages/django/db/migrations/loader.py", line 49, in __init__
self.build_graph()
File "/Library/Python/2.7/site-packages/django/db/migrations/loader.py", line 184, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/Library/Python/2.7/site-packages/django/db/migrations/recorder.py", line 59, in applied_migrations
self.ensure_schema()
File "/Library/Python/2.7/site-packages/django/db/migrations/recorder.py", line 49, in ensure_schema
if self.Migration._meta.db_table in self.connection.introspection.get_table_list(self.connection.cursor()):
File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 165, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 138, in _cursor
self.ensure_connection()
File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 133, in ensure_connection
self.connect()
File "/Library/Python/2.7/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 133, in ensure_connection
self.connect()
File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 122, in connect
self.connection = self.get_new_connection(conn_params)
File "/Library/Python/2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 134, in get_new_connection
return Database.connect(**conn_params)
File "/Library/Python/2.7/site-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
```
When I try to connect to postgres:
```
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
```
settings.py:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'beerad',
'USER': 'bli1',
'PASSWORD': '',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
``` | It can be some issues:
1. PostgreSQL is not running. Check it with `sudo service postgresql status`
2. Your PostgreSQL server is not running on port 5432. You can check it by typing `sudo netstat -nl | grep postgres`
3. Something is wrong in how you are trying to connect to your db: the username, the password, or the database name. Check that they match what Postgres expects and that the db\_name is the database you actually want to access.
4. Problems with postmaster.pid in Postgres. This can happen after an improper shutdown, which leaves behind a stale PID file that prevents the server from starting. To fix it you have to:
```
* rm /usr/local/var/postgres/postmaster.pid
* pg_resetxlog -f /usr/local/var/postgres
```
After this, Postgres should start properly and `runserver` should work
Help in Mac OSX: [how to start postgresql server on mac os x](http://stackoverflow.com/questions/7975556/how-to-start-postgresql-server-on-mac-os-x) |
What's wrong with my Python code containing recursive function? | 29,943,875 | 4 | 2015-04-29T12:32:56Z | 29,943,948 | 8 | 2015-04-29T12:36:30Z | [
"python",
"recursion"
] | I am using recursion to find a [happy number](https://github.com/karan/Projects).
The following is my Python code:
```
deepth = 0
def is_happy_number(number):
astring = str(number)
global deepth
digits = [int(char) for char in astring]
sum_digit = sum([digit**2 for digit in digits])
if sum_digit == 1:
deepth = 0
return True
else:
deepth += 1
if deepth >800:
return False
return is_happy_number(sum_digit)
print '7',is_happy_number(7)
for number in range(1,11):
print number,is_happy_number(number)
```
The results are:
```
7 True
1 True
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 True
```
When I test number 7 alone, it returns 'True'. But when I run the last two rows of code, number 7 returns 'False'.
I don't know which part is wrong.
After a few minutes, I found the wrong part in my Python code.
And I added:
```
deepth = 0
```
after:
```
if deepth > 800:
```
With the reminder from @Will, I found another solution to this problem.
Code modified is as following:
```
def is_happy_number(number, deepth=0):
astring = str(number)
digits = [int(char) for char in astring]
sum_digit = sum([digit**2 for digit in digits])
if sum_digit == 1:
return True
else:
deepth += 1
if deepth >800:
return False
return is_happy_number(sum_digit,deepth)
print '7',is_happy_number(7,0)
for number in range(1,10):
if is_happy_number(number,0):
print number,
``` | You're failing to reset the global variable `depth`. A better way to deal with this is to pass the depth into the recursive call.
Something like this:
```
def is_happy_number(number, depth=0):
# ... as before ...
return is_happy_number(sum_digit, depth)
``` |
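A complete, self-contained sketch of that approach (I've lowered the 800 cut-off from the question to 100, which is still far more than needed, since unhappy numbers fall into a cycle of length 8):

```python
def is_happy_number(number, depth=0):
    # Sum of squared digits; thread depth through the recursion
    # instead of mutating a global counter.
    sum_digit = sum(int(c) ** 2 for c in str(number))
    if sum_digit == 1:
        return True
    if depth > 100:  # unhappy numbers cycle with period 8, so 100 is plenty
        return False
    return is_happy_number(sum_digit, depth + 1)

print([n for n in range(1, 11) if is_happy_number(n)])  # [1, 7, 10]
```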
Opposite of set.intersection in python? | 29,947,844 | 7 | 2015-04-29T15:13:54Z | 29,947,893 | 18 | 2015-04-29T15:15:44Z | [
"python",
"set"
] | In Python you can use `a.intersection(b)` to find the items common to both sets.
Is there a way to do the *disjoint* opposite version of this? Items that are not common to both `a` and `b`; the unique items in `a` unioned with the unique items in `b`? | You are looking for the *symmetric difference*; all elements that appear only in set a or in set b, but not both:
```
a.symmetric_difference(b)
```
From the [`set.symmetric_difference()` method documentation](https://docs.python.org/2/library/stdtypes.html#set.symmetric_difference):
> Return a new set with elements in either the set or *other* but not both.
You can use the `^` operator too, if both `a` and `b` are sets:
```
a ^ b
```
while `set.symmetric_difference()` takes any iterable for the *other* argument.
The output is the equivalent of `(a | b) - (a & b)`, the union of both sets minus the intersection of both sets. |
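A quick sketch showing the three equivalent spellings, plus the iterable-accepting method form:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

print(a.symmetric_difference(b))  # {1, 2, 5, 6}
print(a ^ b)                      # {1, 2, 5, 6}
print((a | b) - (a & b))          # {1, 2, 5, 6}

# symmetric_difference() accepts any iterable; ^ requires both operands to be sets
print(a.symmetric_difference([3, 4, 5, 6]))  # {1, 2, 5, 6}
```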
Python anaconda conda issue: updating anaconda package impossible because processes are running | 29,949,087 | 8 | 2015-04-29T16:08:39Z | 30,039,122 | 8 | 2015-05-04T20:12:28Z | [
"python",
"anaconda",
"conda"
] | I have an issue with updating Anaconda's packages with conda. When I run `conda update --all` there is an error which says:
```
Error: Unable to remove files for package: cryptography
Please close all processes running code from cryptography and try again.
```
However, no processes are running; I have just the cmd window open.
Same story when I want to update dateutil, for instance.
It is as if conda is using some packages and therefore I cannot update them?
Does somebody know a method to close or remove those packages in order to reinstall them?
For information:
```
C:\Anaconda3\Scripts>conda info -a
Current conda install:
platform : win-64
conda version : 3.11.0
conda-build version : 1.11.0
python version : 3.4.3.final.0
requests version : 2.6.2
root environment : C:\Anaconda3 (writable)
default environment : C:\Anaconda3
envs directories : C:\Anaconda3\envs
package cache : C:\Anaconda3\pkgs
channel URLs : https://conda.binstar.org/juanlu001/win-64/
https://conda.binstar.org/juanlu001/noarch/
https://repo.continuum.io/pkgs/free/win-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/win-64/
https://repo.continuum.io/pkgs/pro/noarch/
config file : C:\Users\maxime.condarc
is foreign system : False
``` | This situation will be improved in the next version of conda, but for now, you can use `conda install -f cryptography` to force conda to update cryptography. |
What is the relationship between virtualenv and pyenv? | 29,950,300 | 53 | 2015-04-29T17:13:23Z | 29,950,604 | 60 | 2015-04-29T17:30:11Z | [
"python",
"virtualenv",
"virtualenvwrapper",
"pyenv"
] | I recently learned how to use virtualenv and virtualenvwrapper in my workflow but I've seen pyenv mentioned in a few guides but I can't seem to get an understanding of what pyenv is and how it is different/similar to virtualenv. Is pyenv a better/newer replacement for virtualenv or a complimentary tool? If the latter what does it do differently and how do the two (and virtualenvwrapper if applicable) work together? | **Pyenv** and **virtualenv** are very different tools that work in different ways to do different things:
* **Pyenv** is a bash extension - will not work on Windows - that intercepts your calls to python, pip, etc., to direct them to one of several of the system python tool-chains. So you always have all the libraries that you have installed in the selected python version available - as such it is good for users who have to switch between different versions of python.
* **VirtualEnv** is pure python, so it works everywhere. It makes a copy of, *optionally a specific version of,* python and pip local to the activated environment, which may or may not include links to the current system tool-chain; if it does not, you can install just a known subset of libraries into that environment. As such it is almost certainly much better for testing and deployment, as you know **exactly** which libraries, at which versions, are used, and a global change will not impact your module.
## Additional Tools
There are a number of tools that it is worth mentioning, and considering, as they can help with the use of one or more of the above:
* [VirtualEnvWrapper](https://pypi.python.org/pypi/virtualenvwrapper) Manage and simplify the use and management of VirtualEnv - **Cross Platform**.
* [pyenv-virtualenv](https://github.com/yyuu/pyenv-virtualenv), *installed by [pyenv-installer](https://github.com/yyuu/pyenv-installer)*, which gives PyEnv tools for managing and interfacing to VirtualEnv - with this you can have a base installation that includes more than one version of python and create isolated environments within each of them - **Linux Only**. *Suggested by Johann Visagie*
* [PyInstaller](http://www.pyinstaller.org/) can take your python code, possibly developed & tested under VirtualEnv, and bundle it up so that it can run on platforms that do not have *your version of* python installed - note that it is not a cross compiler: you will need a Windows (virtual-)machine to build Windows installs, etc., but it can be handy even where you can be sure that python will be installed but cannot be sure that the version of python and all the libraries will be compatible with your code.
Get last element of type string in a list | 29,950,493 | 3 | 2015-04-29T17:24:20Z | 29,950,558 | 12 | 2015-04-29T17:27:33Z | [
"python",
"arrays",
"list",
"loops"
] | Suppose I have a list of different types:
i.e.
```
[7, 'string 1', 'string 2', [a, b, c], 'string 3', 0, (1, 2, 3)]
```
Is there a Pythonic way to return 'string 3' ? | If you have a given type, you can use several kinds of comprehensions to get what you need.
```
[el for el in lst if isinstance(el, given_type)][-1]
# Gives the last element if there is one, else IndexError
```
or
```
next((el for el in reversed(lst) if isinstance(el, given_type)), None)
# Gives the last element if there is one, else None
```
If this is something you're doing often enough, you can factor it into a function:
```
def get_last_of_type(type_, iterable):
for el in reversed(iterable):
if isinstance(el, type_):
return el
return None
``` |
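A quick usage sketch of the helper above on the question's example list (restated here so the snippet is self-contained; `a`, `b`, `c` are assumed to be strings for illustration):

```python
def get_last_of_type(type_, iterable):
    # Walk the list backwards and return the first match found.
    for el in reversed(iterable):
        if isinstance(el, type_):
            return el
    return None

lst = [7, 'string 1', 'string 2', ['a', 'b', 'c'], 'string 3', 0, (1, 2, 3)]
print(get_last_of_type(str, lst))   # string 3
print(get_last_of_type(dict, lst))  # None
```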
Ignore divide by 0 warning in python | 29,950,557 | 2 | 2015-04-29T17:27:31Z | 29,950,752 | 7 | 2015-04-29T17:38:13Z | [
"python",
"numpy",
"suppress-warnings",
"divide-by-zero"
] | I have a function for statistic issues:
```
import numpy as np
from scipy.special import gamma as Gamma
def Foo(xdata):
...
return x1 * (
( #R is a numpy vector
( ((R - x2)/beta) ** (x3 -1) ) *
( np.exp( - ((R - x2) / x4) ) ) /
( x4 * Gamma(x3))
).real
)
```
Sometimes I get from the shell the following warning:
```
RuntimeWarning: divide by zero encountered in...
```
I use the numpy `isinf` function to correct the results of the function in other files where I need to do. so I do not need to be warning.
There is a way to ignore the message?
In other words, I do not want that the shell print this message.
I do not want to disable all the python warning, just this one. | You can disable the warning with [`numpy.seterr`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html). Put this before the possible division by zero:
```
np.seterr(divide='ignore')
```
That'll disable zero division warnings globally. If you just want to disable them for a little bit, you can use [`numpy.errstate`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.errstate.html) in a `with` clause:
```
with np.errstate(divide='ignore'):
# some code here
``` |
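A minimal sketch of the `with`-clause variant, combined with the `isinf` clean-up the question mentions:

```python
import numpy as np

r = np.array([0.0, 1.0, 2.0])
with np.errstate(divide='ignore'):
    out = 1.0 / r              # no RuntimeWarning emitted here
out[np.isinf(out)] = 0.0       # patch the infinities afterwards
print(out)                     # [0.  1.  0.5]
```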
DRF: Simple foreign key assignment with nested serializers? | 29,950,956 | 14 | 2015-04-29T17:50:33Z | 29,953,188 | 15 | 2015-04-29T19:49:20Z | [
"python",
"django",
"django-rest-framework"
] | With Django REST Framework, a standard ModelSerializer will allow ForeignKey model relationships to be assigned or changed by POSTing an ID as an Integer.
What's the *simplest* way to get this behavior out of a nested serializer?
Note, I am only talking about assigning existing database objects, *not* nested creation.
I have hacked away around this in the past with additional 'id' fields in the serializer and with custom `create` and `update` methods, but this is such a seemingly simple and frequent issue for me that I'm curious to know the best way.
```
class Child(models.Model):
name = CharField(max_length=20)
class Parent(models.Model):
name = CharField(max_length=20)
phone_number = models.ForeignKey(PhoneNumber)
child = models.ForeignKey(Child)
class ChildSerializer(ModelSerializer):
class Meta:
model = Child
class ParentSerializer(ModelSerializer):
# phone_number relation is automatic and will accept ID integers
children = ChildSerializer() # this one will not
class Meta:
model = Parent
``` | The best solution here is to use two different fields: one for reading and the other for writing. Without doing some *heavy* lifting, it is difficult to get what you are looking for *in a single field*.
The read-only field would be your nested serializer (`ChildSerializer` in this case) and it will allow you to get the same nested representation that you are expecting. Most people define this as just `child`, because they already have their front-end written by this point and changing it would cause problems.
The write-only field would be a [`PrimaryKeyRelatedField`](http://www.django-rest-framework.org/api-guide/relations/#primarykeyrelatedfield), which is what you would typically use for assigning objects based on their primary key. This does not have to be write-only, especially if you are trying to go for symmetry between what is received and what is sent, but it sounds like that might suit you best. This field should have [a `source`](http://www.django-rest-framework.org/api-guide/fields/#source) set to the foreign key field (`child` in this example) so it assigns it properly on creation and updating.
---
This has been brought up on the discussion group a few times, and I think this is still the best solution. Thanks to [Sven Maurer for pointing it out](https://groups.google.com/d/msg/django-rest-framework/XBJztipJsYE/5_JFFzrHcJQJ). |
DRF: Simple foreign key assignment with nested serializers? | 29,950,956 | 14 | 2015-04-29T17:50:33Z | 33,048,798 | 10 | 2015-10-10T00:01:29Z | [
"python",
"django",
"django-rest-framework"
] | With Django REST Framework, a standard ModelSerializer will allow ForeignKey model relationships to be assigned or changed by POSTing an ID as an Integer.
What's the *simplest* way to get this behavior out of a nested serializer?
Note, I am only talking about assigning existing database objects, *not* nested creation.
I have hacked away around this in the past with additional 'id' fields in the serializer and with custom `create` and `update` methods, but this is such a seemingly simple and frequent issue for me that I'm curious to know the best way.
```
class Child(models.Model):
name = CharField(max_length=20)
class Parent(models.Model):
name = CharField(max_length=20)
phone_number = models.ForeignKey(PhoneNumber)
child = models.ForeignKey(Child)
class ChildSerializer(ModelSerializer):
class Meta:
model = Child
class ParentSerializer(ModelSerializer):
# phone_number relation is automatic and will accept ID integers
children = ChildSerializer() # this one will not
class Meta:
model = Parent
``` | Here's an example of what Kevin's answer is talking about, if you want to take that approach and use 2 separate fields.
In your models.py...
```
class Child(models.Model):
name = CharField(max_length=20)
class Parent(models.Model):
name = CharField(max_length=20)
phone_number = models.ForeignKey(PhoneNumber)
child = models.ForeignKey(Child)
```
then serializers.py...
```
class ChildSerializer(ModelSerializer):
class Meta:
model = Child
class ParentSerializer(ModelSerializer):
# if child is required
child = ChildSerializer(read_only=True)
# if child is a required field and you want write to child properties through parent
# child = ChildSerializer(required=False)
# otherwise the following should work (untested)
# child = ChildSerializer()
child_id = serializers.PrimaryKeyRelatedField(
queryset=Child.objects.all(), source='child', write_only=True)
class Meta:
model = Parent
```
Setting `source=child` lets child\_id act as child would by default had it not be overridden (our desired behavior). `write_only=True` makes child\_id available to write to, but keeps it from showing up in the response since the id already shows up in the ChildSerializer |
Why does this empty dict break shared references? | 29,952,200 | 2 | 2015-04-29T18:55:02Z | 29,952,257 | 8 | 2015-04-29T18:57:50Z | [
"python",
"dictionary",
"pass-by-reference"
] | I have found some Python behavior that confuses me.
```
>>> A = {1:1}
>>> B = A
>>> A[2] = 2
>>> A
{1: 1, 2: 2}
>>> B
{1: 1, 2: 2}
```
So far, everything is behaving as expected. A and B both reference the same, mutable, dictionary and altering one alters the other.
```
>>> A = {}
>>> A
{} # As expected
>>> B
{1: 1, 2: 2} # Why is this not an empty dict?
```
Why do `A` and `B` no longer reference the same object?
I have seen this question: [Python empty dict not being passed by reference?](http://stackoverflow.com/questions/24048802/python-empty-dict-not-being-passed-by-reference) and it verifies this behavior, but the answers explain how to fix the provided script, not why this behavior occurs. | Here is a pictorial representation \*:
```
A = {1: 1}
# A -> {1: 1}
B = A
# A -> {1: 1} <- B
A[2] = 2
# A -> {1: 1, 2: 2} <- B
A = {}
# {1: 1, 2: 2} <- B
# A -> {}
```
`A = {}` creates a **completely new object** and reassigns the identifier `A` to it, but **does not** affect `B` or the dictionary `A` previously referenced. You should read [this article](http://nedbatchelder.com/text/names.html), it covers this sort of thing pretty well.
---
Note that, as an alternative, you can use the [`dict.clear`](https://docs.python.org/2/library/stdtypes.html#dict.clear) method to empty the dictionary *in-place*:
```
>>> A = {1: 1}
>>> B = A
>>> A[2] = 2
>>> A.clear()
>>> B
{}
```
As `A` and `B` are still references to the same object, both now "see" the empty version.
---
\* *To a first approximation - similar referencing behaviour is going on **within** the dictionary too, but as the values are immutable it's less relevant.* |
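A tiny sketch making the rebinding visible with the `is` operator:

```python
A = {1: 1}
B = A
print(A is B)  # True -- two names, one dict

A = {}         # rebinds the *name* A to a brand-new dict
print(A is B)  # False
print(B)       # {1: 1} -- the original object is untouched
```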
Python generate all n-permutations of n lists | 29,952,531 | 3 | 2015-04-29T19:11:44Z | 29,952,567 | 13 | 2015-04-29T19:13:46Z | [
"python",
"algorithm",
"permutation"
] | I have n lists of different lengths of wich I want to create all possible permutations.
so e.g. if `a=[1,2]` and `b=[3,4,5]` then I would love to obtain `res=[[1,3],[1,4],[1,5],[2,3],[2,4],[2,5]]`
I've been trying to achieve this using a recursive function, which turned out to be neither very efficient nor very pythonic.
How would an experienced python programmer tackle the problem? | It's called the [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) of two sequences.
This is already available in Python as a library function: [**`itertools.product`**](https://docs.python.org/3/library/itertools.html#itertools.product).
Example:
```
>>> import itertools
>>> a = [1, 2]
>>> b = [3, 4, 5]
>>> list(itertools.product(a, b))
[(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)]
``` |
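And since the question mentions *n* lists, `product` generalises by unpacking them with `*` (a quick sketch):

```python
import itertools

lists = [[1, 2], [3, 4, 5], ['x', 'y']]
# product accepts any number of iterables; unpack the list of lists
res = [list(t) for t in itertools.product(*lists)]
print(len(res))  # 12 == 2 * 3 * 2
print(res[0])    # [1, 3, 'x']
```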
How can memoized functions be tested? | 29,953,764 | 6 | 2015-04-29T20:20:23Z | 29,954,160 | 7 | 2015-04-29T20:44:35Z | [
"python",
"unit-testing",
"python-decorators"
] | I have a simple memoizer which I'm using to save some time around expensive network calls. Roughly, my code looks like this:
```
# mem.py
import functools
import time
def memoize(fn):
"""
Decorate a function so that it results are cached in memory.
>>> import random
>>> random.seed(0)
>>> f = lambda x: random.randint(0, 10)
>>> [f(1) for _ in range(10)]
[9, 8, 4, 2, 5, 4, 8, 3, 5, 6]
>>> [f(2) for _ in range(10)]
[9, 5, 3, 8, 6, 2, 10, 10, 8, 9]
>>> g = memoize(f)
>>> [g(1) for _ in range(10)]
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
>>> [g(2) for _ in range(10)]
[8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
"""
cache = {}
@functools.wraps(fn)
def wrapped(*args, **kwargs):
key = args, tuple(sorted(kwargs))
try:
return cache[key]
except KeyError:
cache[key] = fn(*args, **kwargs)
return cache[key]
return wrapped
def network_call(user_id):
time.sleep(1)
return 1
@memoize
def search(user_id):
response = network_call(user_id)
# do stuff to response
return response
```
And I have tests for this code, where I mock out different return values of `network_call()` to make sure some modifications I do in `search()` work as expected.
```
import mock
import mem
@mock.patch('mem.network_call')
def test_search(mock_network_call):
mock_network_call.return_value = 2
assert mem.search(1) == 2
@mock.patch('mem.network_call')
def test_search_2(mock_network_call):
mock_network_call.return_value = 3
assert mem.search(1) == 3
```
However, when I run these tests, I get a failure because `search()` returns a cached result.
```
CAESAR-BAUTISTA:~ caesarbautista$ py.test test_mem.py
============================= test session starts ==============================
platform darwin -- Python 2.7.8 -- py-1.4.26 -- pytest-2.6.4
collected 2 items
test_mem.py .F
=================================== FAILURES ===================================
________________________________ test_search_2 _________________________________
args = (<MagicMock name='network_call' id='4438999312'>,), keywargs = {}
extra_args = [<MagicMock name='network_call' id='4438999312'>]
entered_patchers = [<mock._patch object at 0x108913dd0>]
exc_info = (<class '_pytest.assertion.reinterpret.AssertionError'>, AssertionError(u'assert 2 == 3\n + where 2 = <function search at 0x10893f848>(1)\n + where <function search at 0x10893f848> = mem.search',), <traceback object at 0x1089502d8>)
patching = <mock._patch object at 0x108913dd0>
arg = <MagicMock name='network_call' id='4438999312'>
@wraps(func)
def patched(*args, **keywargs):
# don't use a with here (backwards compatability with Python 2.4)
extra_args = []
entered_patchers = []
# can't use try...except...finally because of Python 2.4
# compatibility
exc_info = tuple()
try:
try:
for patching in patched.patchings:
arg = patching.__enter__()
entered_patchers.append(patching)
if patching.attribute_name is not None:
keywargs.update(arg)
elif patching.new is DEFAULT:
extra_args.append(arg)
args += tuple(extra_args)
> return func(*args, **keywargs)
/opt/boxen/homebrew/lib/python2.7/site-packages/mock.py:1201:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
mock_network_call = <MagicMock name='network_call' id='4438999312'>
@mock.patch('mem.network_call')
def test_search_2(mock_network_call):
mock_network_call.return_value = 3
> assert mem.search(1) == 3
E assert 2 == 3
E + where 2 = <function search at 0x10893f848>(1)
E + where <function search at 0x10893f848> = mem.search
test_mem.py:15: AssertionError
====================== 1 failed, 1 passed in 0.03 seconds ======================
```
Is there a way to test memoized functions? I've considered some alternatives but they each have drawbacks.
One solution is to mock `memoize()`. I am reluctant to do this because it leaks implementation details to the tests. Theoretically, I should be able to memoize and unmemoize functions without the rest of the system, including tests, noticing from a functional standpoint.
Another solution is to rewrite the code to expose the decorated function. That is, I could do something like this:
```
def _search(user_id):
    return network_call(user_id)

search = memoize(_search)
```
However, this runs into the same problems as above, although it's arguably worse because it will not work for recursive functions. | Is it really desirable for your memoization to be defined at the function level?
This effectively makes the memoized data a *global variable* (just like the function, whose scope it shares).
Incidentally, that's why you're having difficulty testing it!
So, how about wrapping this into an object?
```
import functools
import time


def memoize(meth):
    @functools.wraps(meth)
    def wrapped(self, *args, **kwargs):
        # Prepare and get reference to cache
        attr = "_memo_{0}".format(meth.__name__)
        if not hasattr(self, attr):
            setattr(self, attr, {})
        cache = getattr(self, attr)

        # Actual caching
        key = args, tuple(sorted(kwargs.items()))  # include kwarg values, not just names
        try:
            return cache[key]
        except KeyError:
            cache[key] = meth(self, *args, **kwargs)
        return cache[key]
    return wrapped


def network_call(user_id):
    print "Was called with: %s" % user_id
    return 1


class NetworkEngine(object):
    @memoize
    def search(self, user_id):
        return network_call(user_id)


if __name__ == "__main__":
    e = NetworkEngine()
    for v in [1, 1, 2]:
        e.search(v)
    NetworkEngine().search(1)
```
Yields:
```
Was called with: 1
Was called with: 2
Was called with: 1
```
In other words, each instance of `NetworkEngine` gets its own cache. Just reuse the same one to share a cache, or instantiate a new one to get a new cache.
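As an aside: on Python 3 (or Python 2 with the `functools32` backport), the stdlib `functools.lru_cache` gives you per-function caching plus a `cache_clear()` hook, which you can call between tests to reset state without mocking the decorator itself. A minimal sketch, with a `calls` list standing in for the mocked network call:

```python
import functools

calls = []

@functools.lru_cache(maxsize=None)
def search(user_id):
    calls.append(user_id)   # stand-in for the real network call
    return user_id * 2

assert search(1) == 2
assert search(1) == 2       # served from the cache
assert calls == [1]         # the "network" was only hit once

search.cache_clear()        # reset between tests, e.g. in a fixture
assert search(1) == 2
assert calls == [1, 1]      # fresh cache -> a second real call
```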
---
In your test code, you'd use:
```
@mock.patch('mem.network_call')
def test_search(mock_network_call):
    mock_network_call.return_value = 2
    assert mem.NetworkEngine().search(1) == 2
``` |
What does the term "broadcasting" mean in Pandas documentation? | 29,954,263 | 9 | 2015-04-29T20:50:04Z | 29,955,358 | 18 | 2015-04-29T22:00:53Z | [
"python",
"numpy",
"pandas"
] | I'm reading through the Pandas documentation, and the term "broadcasting" is [used extensively](http://pandas.pydata.org/pandas-docs/stable/basics.html#flexible-binary-operations), but never really defined or explained.
What does it mean? | So the term *broadcasting* comes from [numpy](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html), simply put it explains the rules of the output that will result when you perform operations between n-dimensional arrays (could be panels, dataframes, series) or scalar values.
## Broadcasting using a scalar value
So the simplest case is just multiplying by a scalar value:
```
In [4]:
s = pd.Series(np.arange(5))
s
Out[4]:
0 0
1 1
2 2
3 3
4 4
dtype: int32
In [5]:
s * 10
Out[5]:
0 0
1 10
2 20
3 30
4 40
dtype: int32
```
and we get the same expected results with a dataframe:
```
In [6]:
df = pd.DataFrame({'a':np.random.randn(4), 'b':np.random.randn(4)})
df
Out[6]:
a b
0 0.216920 0.652193
1 0.968969 0.033369
2 0.637784 0.856836
3 -2.303556 0.426238
In [7]:
df * 10
Out[7]:
a b
0 2.169204 6.521925
1 9.689690 0.333695
2 6.377839 8.568362
3 -23.035557 4.262381
```
So what is technically happening here is that the scalar value has been *broadcast* along the same dimensions as the Series and DataFrame above.
## Broadcasting using a 1-D array
Say we have a 2-D dataframe of shape 4 x 3 (4 rows x 3 columns). We can perform an operation along the x-axis by using a 1-D Series that is the same length as a row (i.e. the number of columns):
```
In [8]:
df = pd.DataFrame({'a':np.random.randn(4), 'b':np.random.randn(4), 'c':np.random.randn(4)})
df
Out[8]:
a b c
0 0.122073 -1.178127 -1.531254
1 0.011346 -0.747583 -1.967079
2 -0.019716 -0.235676 1.419547
3 0.215847 1.112350 0.659432
In [26]:
df.iloc[0]
Out[26]:
a 0.122073
b -1.178127
c -1.531254
Name: 0, dtype: float64
In [27]:
df + df.iloc[0]
Out[27]:
a b c
0 0.244146 -2.356254 -3.062507
1 0.133419 -1.925710 -3.498333
2 0.102357 -1.413803 -0.111707
3 0.337920 -0.065777 -0.871822
```
The above looks funny at first until you understand what is happening: I took the first row of values and added it row-wise to the df. It can be visualised using this pic (sourced from [`scipy`](http://wiki.scipy.org/EricsBroadcastingDoc)):

The general rule is this:
> In order to broadcast, the size of the trailing axes for both arrays
> in an operation must either be the same size or one of them must be
> one.
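To see that rule in raw numpy terms, before pandas label alignment gets involved, here is a sketch with made-up arrays:

```python
import numpy as np

a = np.zeros((4, 3))

# Trailing axes match: (4, 3) op (3,) -> result shape (4, 3)
assert (a + np.arange(3)).shape == (4, 3)

# Trailing axes differ and neither is 1: (4, 3) op (4,) -> ValueError
try:
    a + np.arange(4)
    raise AssertionError("expected a broadcast error")
except ValueError:
    pass  # numpy refuses to broadcast these shapes

# A trailing axis of 1 is stretched: (4, 1) op (3,) -> (4, 3)
assert (np.arange(4).reshape(4, 1) + np.arange(3)).shape == (4, 3)
```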
So if I tried to add a 1-D array whose length didn't match, say one with 4 elements, then unlike numpy, which would raise a `ValueError`, in Pandas you'll get a df full of `NaN` values (pandas aligns on labels, and none of the Series' integer labels match the df's column names):
```
In [30]:
df + pd.Series(np.arange(4))
Out[30]:
a b c 0 1 2 3
0 NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN
```
Now, one of the great things about pandas is that it will try to align using existing column names and row labels. However, this can get in the way of trying to perform a fancier broadcast like this:

```
In [55]:
df[['a']] + df.iloc[0]
Out[55]:
a b c
0 0.244146 NaN NaN
1 0.133419 NaN NaN
2 0.102357 NaN NaN
3 0.337920 NaN NaN
```
In the above I use double subscripting to force the shape to (4,1), but we see a problem when trying to broadcast using the first row: column alignment only matches on the first column. To get the same form of broadcasting as the diagram above shows, we have to decompose to numpy arrays, which then become anonymous data:
```
In [56]:
df[['a']].values + df.iloc[0].values
Out[56]:
array([[ 0.24414608, -1.05605392, -1.4091805 ],
[ 0.13341899, -1.166781 , -1.51990758],
[ 0.10235701, -1.19784299, -1.55096957],
[ 0.33792013, -0.96227987, -1.31540645]])
```
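As an aside (a sketch with made-up numbers): when you want a 1-D operand to line up with the *index* rather than the columns, the flexible arithmetic methods (`add`, `sub`, `mul`, `div`) take an explicit `axis` argument, which avoids decomposing to anonymous numpy arrays for that particular case:

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, 3.0, 4.0],
                   'b': [10.0, 20.0, 30.0, 40.0],
                   'c': [100.0, 200.0, 300.0, 400.0]})

# Broadcast column 'a' down the rows of every column,
# keeping labels intact instead of dropping to .values
out = df.sub(df['a'], axis=0)
assert (out['a'] == 0).all()
assert list(out['b']) == [9.0, 18.0, 27.0, 36.0]
```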
It's also possible to broadcast in 3 dimensions, but I don't go near that stuff often; the numpy, scipy and pandas documentation have examples that show how it works.
Generally speaking, the thing to remember is that, aside from scalar values (which are simple), for n-D arrays the minor/trailing axis lengths must match, or one of them must be 1. |
Finding substring (nonconsecutive) | 29,954,748 | 11 | 2015-04-29T21:17:15Z | 29,954,829 | 7 | 2015-04-29T21:22:10Z | [
"python",
"string"
] | In Python if I have string `needle` and I want to see if it exists (contiguously) as a substring in `haystack`, I only need to check `if needle in haystack`.
What happens in the case of a subsequence?
Example:
```
haystack = "qabcdzzzefgyyyh"
needle = "acgh"
```
"acgh" is a subsequence of `qabcdzzzefgyyyh` -- does not exist contiguously in `haystack`, but it does non-contiguously. The `c` appears after the `a`, the `g` appears after the `c`, and the `h` appears after the `g`. | I don't know if there's builtin function, but it is rather simple to do manually
```
def exists(a, b):
    """checks if b exists in a as a subsequence"""
    pos = 0
    for ch in a:
        if pos < len(b) and ch == b[pos]:
            pos += 1
    return pos == len(b)
```
```
>>> exists("moo", "mo")
True
>>> exists("moo", "oo")
True
>>> exists("moo", "ooo")
False
>>> exists("haystack", "hack")
True
>>> exists("haystack", "hach")
False
>>>
``` |
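For what it's worth, the same check can be written more tersely with an iterator: membership testing (`ch in it`) advances the iterator past the match, so the current position is carried implicitly. A sketch (`exists_iter` is my name for this variant):

```python
def exists_iter(haystack, needle):
    """True if needle is a (possibly non-contiguous) subsequence of haystack."""
    it = iter(haystack)
    # each `ch in it` consumes the iterator up to and including the match,
    # so later characters can only be found at later positions
    return all(ch in it for ch in needle)

assert exists_iter("qabcdzzzefgyyyh", "acgh")
assert not exists_iter("haystack", "hach")
assert exists_iter("anything", "")
```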