title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Multiply high order matrices with numpy | 30,816,747 | 9 | 2015-06-13T08:43:49Z | 30,817,037 | 13 | 2015-06-13T09:22:53Z | [
"python",
"numpy",
"matrix",
"scipy",
"linear-algebra"
] | I created this toy problem that reflects my much bigger problem:
```
import numpy as np
ind = np.ones((3,2,4)) # shape=(3L, 2L, 4L)
dist = np.array([[0.1,0.3],[1,2],[0,1]]) # shape=(3L, 2L)
ans = np.array([np.dot(dist[i],ind[i]) for i in xrange(dist.shape[0])]) # shape=(3L, 4L)
print ans
""" prints:
[[ 0.4 0.4 0.4 0.4]
[ 3. 3. 3. 3. ]
[ 1. 1. 1. 1. ]]
"""
```
I want to do it as fast as possible, so using numpy's functions to calculate `ans` should be the best approach, since this operation is heavy and my matrices are quite big.
I saw [this post](http://stackoverflow.com/questions/4490961/numpy-multiplying-a-matrix-with-a-3d-tensor-suggestion), but the shapes are different and I cannot understand which `axes` I should use for this problem. However, I'm certain that [tensordot](http://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html) should have the answer. Any suggestions?
EDIT: I accepted [@ajcr's answer](http://stackoverflow.com/a/30817037/3523490), but please read my own answer as well, it may help others... | You could use `np.einsum` to do the operation since it allows for very careful control over which axes are multiplied and which are summed:
```
>>> np.einsum('ijk,ij->ik', ind, dist)
array([[ 0.4, 0.4, 0.4, 0.4],
[ 3. , 3. , 3. , 3. ],
[ 1. , 1. , 1. , 1. ]])
```
The function multiplies the entries in the first axis of `ind` with the entries in the first axis of `dist` (subscript `'i'`). Ditto for the second axis of each array (subscript `'j'`). Instead of returning a 3D array, we tell einsum to sum the values along axis `'j'` by omitting it from the output subscripts, thereby returning a 2D array.
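The subscript string can be sanity-checked against the explicit loop from the question (a sketch using random data, since the all-ones toy array would hide indexing mistakes):

```python
import numpy as np

rng = np.random.RandomState(0)
ind = rng.rand(3, 2, 4)
dist = rng.rand(3, 2)

# the explicit per-row loop from the question
loop = np.array([np.dot(dist[i], ind[i]) for i in range(dist.shape[0])])
# the einsum one-liner: multiply over i and j, sum out j
fast = np.einsum('ijk,ij->ik', ind, dist)
assert np.allclose(loop, fast)
```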
---
`np.tensordot` is more difficult to apply to this problem. It automatically sums the products of axes. However, we want *two* sets of products but to sum only *one* of them.
Writing `np.tensordot(ind, dist, axes=[1, 1])` (as in the answer you linked to) computes the correct values for you, but returns a 3D array with shape `(3, 4, 3)`. If you can afford the memory cost of a larger array, you could use:
```
np.tensordot(ind, dist, axes=[1, 1])[0].T
```
This gives you the correct result, but because `tensordot` creates a much larger-than-necessary array first, `einsum` seems to be a better option. |
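A caveat on the `[0].T` trick: it happens to give the right numbers here only because every `ind[i]` in the all-ones toy array is identical. With general data, the desired entries lie on the diagonal of the first and last axes of the `tensordot` result (a sketch with random data):

```python
import numpy as np

rng = np.random.RandomState(0)
ind = rng.rand(3, 2, 4)
dist = rng.rand(3, 2)

want = np.einsum('ijk,ij->ik', ind, dist)

t = np.tensordot(ind, dist, axes=[1, 1])  # shape (3, 4, 3)
# pick t[i, k, i]: the diagonal over the first and last axes, then transpose
got = t.diagonal(axis1=0, axis2=2).T
assert np.allclose(want, got)

# [0].T only matches when every ind[i] is the same block; here it differs
assert not np.allclose(want, t[0].T)
```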
Multiply high order matrices with numpy | 30,816,747 | 9 | 2015-06-13T08:43:49Z | 30,819,011 | 13 | 2015-06-13T13:07:21Z | [
"python",
"numpy",
"matrix",
"scipy",
"linear-algebra"
] | I created this toy problem that reflects my much bigger problem:
```
import numpy as np
ind = np.ones((3,2,4)) # shape=(3L, 2L, 4L)
dist = np.array([[0.1,0.3],[1,2],[0,1]]) # shape=(3L, 2L)
ans = np.array([np.dot(dist[i],ind[i]) for i in xrange(dist.shape[0])]) # shape=(3L, 4L)
print ans
""" prints:
[[ 0.4 0.4 0.4 0.4]
[ 3. 3. 3. 3. ]
[ 1. 1. 1. 1. ]]
"""
```
I want to do it as fast as possible, so using numpy's functions to calculate `ans` should be the best approach, since this operation is heavy and my matrices are quite big.
I saw [this post](http://stackoverflow.com/questions/4490961/numpy-multiplying-a-matrix-with-a-3d-tensor-suggestion), but the shapes are different and I cannot understand which `axes` I should use for this problem. However, I'm certain that [tensordot](http://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html) should have the answer. Any suggestions?
EDIT: I accepted [@ajcr's answer](http://stackoverflow.com/a/30817037/3523490), but please read my own answer as well, it may help others... | Following [@ajcr's great answer](http://stackoverflow.com/a/30817037/3523490), I wanted to determine which method is the fastest, so I used `timeit`:
```
import timeit
setup_code = """
import numpy as np
i,j,k = (300,200,400)
ind = np.ones((i,j,k)) #shape=(3L, 2L, 4L)
dist = np.random.rand(i,j) #shape=(3L, 2L)
"""
basic ="np.array([np.dot(dist[l],ind[l]) for l in xrange(dist.shape[0])])"
einsum = "np.einsum('ijk,ij->ik', ind, dist)"
tensor= "np.tensordot(ind, dist, axes=[1, 1])[0].T"
print "tensor - total time:", min(timeit.repeat(stmt=tensor,setup=setup_code,number=10,repeat=3))
print "basic - total time:", min(timeit.repeat(stmt=basic,setup=setup_code,number=10,repeat=3))
print "einsum - total time:", min(timeit.repeat(stmt=einsum,setup=setup_code,number=10,repeat=3))
```
The surprising results were:
```
tensor - total time: 6.59519493952
basic - total time: 0.159871203461
einsum - total time: 0.263569731028
```
So obviously using `tensordot` was the wrong way to do it (not to mention the `MemoryError` in bigger examples, just as @ajcr stated).
Since this example was small, I changed the matrix sizes to `i,j,k = (3000,200,400)`, flipped the order of the tests just to be sure it has no effect, and set up another test with a higher number of repetitions:
```
print "einsum - total time:", min(timeit.repeat(stmt=einsum,setup=setup_code,number=50,repeat=3))
print "basic - total time:", min(timeit.repeat(stmt=basic,setup=setup_code,number=50,repeat=3))
```
The results were consistent with the first run:
```
einsum - total time: 13.3184077671
basic - total time: 8.44810031351
```
However, testing another type of size growth - `i,j,k = (30000,20,40)` - led to the following results:
```
einsum - total time: 0.325594117768
basic - total time: 0.926416766397
```
See the comments for explanations of these results.
The moral is: when looking for the fastest solution to a specific problem, try to generate data that is as similar to the original data as possible, in terms of types and shapes. In my case `i` is much smaller than `j,k`, so I stayed with the ugly version, which is also the fastest in this case. |
Python 2.7.9 Mac OS 10.10.3 Message "setCanCycle: is deprecated. Please use setCollectionBehavior instead" | 30,818,222 | 4 | 2015-06-13T11:37:51Z | 30,855,388 | 7 | 2015-06-15T21:37:45Z | [
"python",
"osx",
"tkinter",
"anaconda",
"spyder"
This is my first message, and I hope you can help me solve my problem.
When I launch a Python script I get this message:
> 2015-06-10 23:15:44.146 python[1044:19431] setCanCycle: is deprecated.Please use setCollectionBehavior instead
>
> 2015-06-10 23:15:44.155 python[1044:19431] setCanCycle: is deprecated.Please use setCollectionBehavior instead
Below is my script:
```
from Tkinter import *
root = Tk()
root.geometry("450x600+10+10")
root.title("Booleanv1.0")
Cadre_1 = Frame(root, width=400, height=100)
Cadre_1.pack(side='top')
fileA = Label(Cadre_1, text="File A")
fileA.grid(row=0,column=0)
entA = Entry(Cadre_1, width=40)
entA.grid(row=0,column=1, pady=10)
open_fileA = Button(Cadre_1, text='SELECT', width=10, height=1, command = root.destroy)
open_fileA.grid(row=0, column=2)
fileB = Label(Cadre_1, text="File B")
fileB.grid(row=1,column=0)
entB = Entry(Cadre_1, width=40)
entB.grid(row=1,column=1, pady=10)
open_fileB = Button(Cadre_1, text='SELECT', width=10, height=1, command = root.destroy)
open_fileB.grid(row=1, column=2)
root.mainloop()
```
Can anyone explain this message?
How can I remove it?
PS : I use Anaconda 3.10.0 and Spyder IDE, but I have the same problem when I launch my script with the terminal.
regards. | The version of the Tkinter library which Anaconda has installed was compiled on an older version of OS X. The warnings you're seeing aren't actually a problem; they will go away once a version of the library compiled on a more recent version of OS X is added to the Anaconda repository.
<https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/y1UWpFHsDyQ> |
Test if dict contained in dict | 30,818,694 | 18 | 2015-06-13T12:30:35Z | 30,818,799 | 33 | 2015-06-13T12:41:49Z | [
"python",
"dictionary"
] | Testing for equality works fine like this for python dicts:
```
first = {"one":"un", "two":"deux", "three":"trois"}
second = {"one":"un", "two":"deux", "three":"trois"}
print(first == second) # Result: True
```
But now my second dict contains some additional keys I want to ignore:
```
first = {"one":"un", "two":"deux", "three":"trois"}
second = {"one":"un", "two":"deux", "three":"trois", "foo":"bar"}
```
**Is there a simple way to test if the first dict is part of the second dict, with all its keys and values?**
**EDIT 1:**
This question is suspected to be a duplicate of [How to test if a dictionary contains certain *keys*](http://stackoverflow.com/questions/3415347/how-to-test-if-a-dictionary-contains-certain-keys), but I'm interested in testing keys *and their values*. Just containing the same keys does not make two dicts equal.
**EDIT 2:**
OK, I got some answers now using four different methods, and proved all of them working. As I need a fast process, I tested each for execution time. I created three identical dicts with 1000 items, keys and values were random strings of length 10. The `second` and `third` got some extra key-value pairs, and the last non-extra key of the `third` got a new value. So, `first` is a subset of `second`, but not of `third`. Using module `timeit` with 10000 repetitions, I got:
```
Method Time [s]
first.viewitems() <=second.viewitems() 0.9
set(first.items()).issubset(second.items()) 7.3
len(set(first.items()) & set(second.items())) == len(first) 8.5
all(first[key] == second.get(key, sentinel) for key in first) 6.0
```
I guessed the last method would be the slowest, but it came in second place.
Method 1, however, beats them all.
Thanks for your answers! | You can use a [dictionary view](https://docs.python.org/2/library/stdtypes.html#dictionary-view-objects):
```
# Python 2
if first.viewitems() <= second.viewitems():
# true only if `first` is a subset of `second`
# Python 3
if first.items() <= second.items():
# true only if `first` is a subset of `second`
```
Dictionary views are the [standard in Python 3](https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects), in Python 2 you need to prefix the standard methods with `view`. They act like sets, and `<=` tests if one of those is a subset of (or is equal to) another.
Demo in Python 3:
```
>>> first = {"one":"un", "two":"deux", "three":"trois"}
>>> second = {"one":"un", "two":"deux", "three":"trois", "foo":"bar"}
>>> first.items() <= second.items()
True
>>> first['four'] = 'quatre'
>>> first.items() <= second.items()
False
```
This works for *non-hashable values too*, as the keys make the key-value pairs unique already. The documentation is a little confusing on this point, but even with mutable values (say, lists) this works:
```
>>> first_mutable = {'one': ['un', 'een', 'einz'], 'two': ['deux', 'twee', 'zwei']}
>>> second_mutable = {'one': ['un', 'een', 'einz'], 'two': ['deux', 'twee', 'zwei'], 'three': ['trois', 'drie', 'drei']}
>>> first_mutable.items() <= second_mutable.items()
True
>>> first_mutable['one'].append('ichi')
>>> first_mutable.items() <= second_mutable.items()
False
```
You could also use the [`all()` function](https://docs.python.org/2/library/functions.html#all) with a generator expression; use `object()` as a sentinel to detect missing values concisely:
```
sentinel = object()
if all(first[key] == second.get(key, sentinel) for key in first):
# true only if `first` is a subset of `second`
```
but this isn't as readable and expressive as using dictionary views. |
How to label and change the scale of Seaborn kdeplot's axes | 30,819,056 | 6 | 2015-06-13T13:11:53Z | 30,846,758 | 7 | 2015-06-15T13:43:05Z | [
"python",
"matplotlib",
"seaborn"
] | Here's my code
```
import numpy as np
from numpy.random import randn
import pandas as pd
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
fig = sns.kdeplot(treze, shade=True, color=c1,cut =0, clip=(0,2000))
fig = sns.kdeplot(cjjardim, shade=True, color=c2,cut =0, clip=(0,2000))
fig.figure.suptitle("Plot", fontsize = 24)
plt.xlabel('Purchase amount', fontsize=18)
plt.ylabel('Distribution', fontsize=16)
```
, which results in the following plot:

I want to do two things:
1) Change the scale of the y-axis by multiplying its values by 10000 and, if it's possible, add a % sign to the numbers. In other words, I want the y-axis values shown in the above plot to be 0%, 5%, 10%, 15%, 20%, 25%, and 30%.
2) Add more values to the x-axis. I'm particularly interested in showing the data in intervals of 200. In other words, I want the x-axis values shown in the plot to be 0, 200, 400, 600,... and so on. | 1) what you are looking for is most probably some combination of get\_yticks() and set\_yticks:
```
plt.yticks(fig.get_yticks(), fig.get_yticks() * 100)
plt.ylabel('Distribution [%]', fontsize=16)
```
Note: as mwaskom is commenting times 10000 and a % sign is mathematically incorrect.
2) you can specify where you want your ticks via the xticks function. Then you have more ticks and data get easier to read. You do not get more data that way.
```
plt.xticks([0, 200, 400, 600])
plt.xlabel('Purchase amount', fontsize=18)
```
Note: if you wanted to limit the view to your specified x-values you might even have a glimpse at plt.xlim() and reduce the figure to the interesting range. |
Convert string date into date format in python? | 30,819,423 | 3 | 2015-06-13T13:52:59Z | 30,819,460 | 8 | 2015-06-13T13:58:02Z | [
"python",
"date",
"datetime"
] | How to convert the below string date into date format in python.
```
input:
date='15-MARCH-2015'
expected output:
2015-03-15
```
I tried to use `datetime.strftime` and `datetime.strptime`. it is not accepting this format. | You can use `datetime.strptime` with a proper format :
```
>>> datetime.strptime('15-MARCH-2015','%d-%B-%Y')
datetime.datetime(2015, 3, 15, 0, 0)
```
Read more about `datetime.strptime` and date formatting: <https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior> |
Dump a Python dictionary with array value inside to CSV | 30,819,738 | 2 | 2015-06-13T14:26:51Z | 30,819,788 | 8 | 2015-06-13T14:30:44Z | [
"python"
] | I got something like this:
```
dict = {}
dict["p1"] = [.1,.2,.3,.4]
dict["p2"] = [.4,.3,.2,.1]
dict["p3"] = [.5,.6,.7,.8]
```
How I can dump this dictionary into csv like this structure? :
```
.1 .4 .5
.2 .3 .6
.3 .2 .7
.4 .1 .8
```
Really appreciated ! | `dict`s have no order so you would need an [`OrderedDict`](https://docs.python.org/2/library/collections.html#collections.OrderedDict) and to transpose the values:
```
import csv
from collections import OrderedDict
d = OrderedDict()
d["p1"] = [.1,.2,.3,.4]
d["p2"] = [.4,.3,.2,.1]
d["p3"] = [.5,.6,.7,.8]
with open("out.csv","w") as f:
wr = csv.writer(f)
wr.writerow(list(d))
wr.writerows(zip(*d.values()))
```
Output:
```
p1,p2,p3
0.1,0.4,0.5
0.2,0.3,0.6
0.3,0.2,0.7
0.4,0.1,0.8
```
Also best to avoid shadowing builtin functions names like [`dict`](https://docs.python.org/3/library/functions.html#func-dict). |
Python NLTK pos_tag not returning the correct part-of-speech tag | 30,821,188 | 7 | 2015-06-13T16:52:28Z | 30,823,202 | 27 | 2015-06-13T20:24:08Z | [
"python",
"machine-learning",
"nlp",
"nltk",
"pos-tagger"
] | Having this:
```
text = word_tokenize("The quick brown fox jumps over the lazy dog")
```
And running:
```
nltk.pos_tag(text)
```
I get:
```
[('The', 'DT'), ('quick', 'NN'), ('brown', 'NN'), ('fox', 'NN'), ('jumps', 'NNS'), ('over', 'IN'), ('the', 'DT'), ('lazy', 'NN'), ('dog', 'NN')]
```
This is incorrect. The tags for `quick brown lazy` in the sentence should be:
```
('quick', 'JJ'), ('brown', 'JJ') , ('lazy', 'JJ')
```
Testing this through their [online tool](http://nlp.stanford.edu:8080/corenlp/process) gives the same result; `quick`, `brown` and `fox` should be adjectives not nouns. | **In short**:
> NLTK is not perfect. In fact, no model is perfect.
**Note:**
As of NLTK version 3.1, default `pos_tag` function is no longer the [old MaxEnt English pickle](http://stackoverflow.com/questions/31386224/what-created-maxent-treebank-pos-tagger-english-pickle).
It is now the **perceptron tagger** from [@Honnibal's implementation](https://github.com/nltk/nltk/blob/develop/nltk/tag/perceptron.py), see [nltk.tag.pos\_tag](https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py#L87)
```
>>> import inspect
>>> print inspect.getsource(pos_tag)
def pos_tag(tokens, tagset=None):
tagger = PerceptronTagger()
return _pos_tag(tokens, tagset, tagger)
```
Still it's better but not perfect:
```
>>> from nltk import pos_tag
>>> pos_tag("The quick brown fox jumps over the lazy dog".split())
[('The', 'DT'), ('quick', 'JJ'), ('brown', 'NN'), ('fox', 'NN'), ('jumps', 'VBZ'), ('over', 'IN'), ('the', 'DT'), ('lazy', 'JJ'), ('dog', 'NN')]
```
At some point, if someone wants `TL;DR` solutions, see <https://github.com/alvations/nltk_cli>
---
**In long**:
**Try using other tagger (see <https://github.com/nltk/nltk/tree/develop/nltk/tag>) , e.g.**:
* HunPos
* Stanford POS
* Senna
**Using default MaxEnt POS tagger from NLTK, i.e. `nltk.pos_tag`**:
```
>>> from nltk import word_tokenize, pos_tag
>>> text = "The quick brown fox jumps over the lazy dog"
>>> pos_tag(word_tokenize(text))
[('The', 'DT'), ('quick', 'NN'), ('brown', 'NN'), ('fox', 'NN'), ('jumps', 'NNS'), ('over', 'IN'), ('the', 'DT'), ('lazy', 'NN'), ('dog', 'NN')]
```
**Using Stanford POS tagger**:
```
$ cd ~
$ wget http://nlp.stanford.edu/software/stanford-postagger-2015-04-20.zip
$ unzip stanford-postagger-2015-04-20.zip
$ mv stanford-postagger-2015-04-20 stanford-postagger
$ python
>>> from os.path import expanduser
>>> home = expanduser("~")
>>> from nltk.tag.stanford import POSTagger
>>> _path_to_model = home + '/stanford-postagger/models/english-bidirectional-distsim.tagger'
>>> _path_to_jar = home + '/stanford-postagger/stanford-postagger.jar'
>>> st = POSTagger(path_to_model=_path_to_model, path_to_jar=_path_to_jar)
>>> text = "The quick brown fox jumps over the lazy dog"
>>> st.tag(text.split())
[(u'The', u'DT'), (u'quick', u'JJ'), (u'brown', u'JJ'), (u'fox', u'NN'), (u'jumps', u'VBZ'), (u'over', u'IN'), (u'the', u'DT'), (u'lazy', u'JJ'), (u'dog', u'NN')]
```
**Using HunPOS** (NOTE: the default encoding is ISO-8859-1 not UTF8):
```
$ cd ~
$ wget https://hunpos.googlecode.com/files/hunpos-1.0-linux.tgz
$ tar zxvf hunpos-1.0-linux.tgz
$ wget https://hunpos.googlecode.com/files/en_wsj.model.gz
$ gzip -d en_wsj.model.gz
$ mv en_wsj.model hunpos-1.0-linux/
$ python
>>> from os.path import expanduser
>>> home = expanduser("~")
>>> from nltk.tag.hunpos import HunposTagger
>>> _path_to_bin = home + '/hunpos-1.0-linux/hunpos-tag'
>>> _path_to_model = home + '/hunpos-1.0-linux/en_wsj.model'
>>> ht = HunposTagger(path_to_model=_path_to_model, path_to_bin=_path_to_bin)
>>> text = "The quick brown fox jumps over the lazy dog"
>>> ht.tag(text.split())
[('The', 'DT'), ('quick', 'JJ'), ('brown', 'JJ'), ('fox', 'NN'), ('jumps', 'NNS'), ('over', 'IN'), ('the', 'DT'), ('lazy', 'JJ'), ('dog', 'NN')]
```
**Using Senna** (Make sure you've the latest version of NLTK, there were some changes made to the API):
```
$ cd ~
$ wget http://ronan.collobert.com/senna/senna-v3.0.tgz
$ tar zxvf senna-v3.0.tgz
$ python
>>> from os.path import expanduser
>>> home = expanduser("~")
>>> from nltk.tag.senna import SennaTagger
>>> st = SennaTagger(home+'/senna')
>>> text = "The quick brown fox jumps over the lazy dog"
>>> st.tag(text.split())
[('The', u'DT'), ('quick', u'JJ'), ('brown', u'JJ'), ('fox', u'NN'), ('jumps', u'VBZ'), ('over', u'IN'), ('the', u'DT'), ('lazy', u'JJ'), ('dog', u'NN')]
```
---
**Or try building a better POS tagger**:
* Ngram Tagger: <http://streamhacker.com/2008/11/03/part-of-speech-tagging-with-nltk-part-1/>
* Affix/Regex Tagger: <http://streamhacker.com/2008/11/10/part-of-speech-tagging-with-nltk-part-2/>
* Build Your Own Brill (Read the code it's a pretty fun tagger, <http://www.nltk.org/_modules/nltk/tag/brill.html>), see <http://streamhacker.com/2008/12/03/part-of-speech-tagging-with-nltk-part-3/>
* Perceptron Tagger: <https://honnibal.wordpress.com/2013/09/11/a-good-part-of-speechpos-tagger-in-about-200-lines-of-python/>
* LDA Tagger: <http://scm.io/blog/hack/2015/02/lda-intentions/>
---
**Complains about `pos_tag` accuracy on stackoverflow includes**:
* [POS tagging - NLTK thinks noun is adjective](http://stackoverflow.com/questions/13529945/pos-tagging-nltk-thinks-noun-is-adjective)
* [python NLTK POS tagger not behaving as expected](http://stackoverflow.com/questions/21786257/python-nltk-pos-tagger-not-behaving-as-expected)
* [How to obtain better results using NLTK pos tag](http://stackoverflow.com/questions/8146748/how-to-obtain-better-results-using-nltk-pos-tag)
* [pos\_tag in NLTK does not tag sentences correctly](http://stackoverflow.com/questions/8365557/pos-tag-in-nltk-does-not-tag-sentences-correctly)
-
**Issues about NLTK HunPos includes**:
* [how to i tag textfiles with hunpos in nltk?](http://stackoverflow.com/questions/5088448/how-to-i-tag-textfiles-with-hunpos-in-nltk)
* [Does anyone know how to configure the hunpos wrapper class on nltk?](http://stackoverflow.com/questions/5091389/does-anyone-know-how-to-configure-the-hunpos-wrapper-class-on-nltk)
**Issues with NLTK and Stanford POS tagger includes**:
* [trouble importing stanford pos tagger into nltk](http://stackoverflow.com/questions/7344916/trouble-importing-stanford-pos-tagger-into-nltk)
* [Java Command Fails in NLTK Stanford POS Tagger](http://stackoverflow.com/questions/27116495/java-command-fails-in-nltk-stanford-pos-tagger)
* [Error using Stanford POS Tagger in NLTK Python](http://stackoverflow.com/questions/22930328/error-using-stanford-pos-tagger-in-nltk-python)
* [How to improve speed with Stanford NLP Tagger and NLTK](http://stackoverflow.com/questions/23322674/how-to-improve-speed-with-stanford-nlp-tagger-and-nltk)
* [Nltk stanford pos tagger error : Java command failed](http://stackoverflow.com/questions/27171298/nltk-stanford-pos-tagger-error-java-command-failed)
* [Instantiating and using StanfordTagger within NLTK](http://stackoverflow.com/questions/8555312/instantiating-and-using-stanfordtagger-within-nltk)
* [Running Stanford POS tagger in NLTK leads to "not a valid Win32 application" on Windows](http://stackoverflow.com/questions/26647253/running-stanford-pos-tagger-in-nltk-leads-to-not-a-valid-win32-application-on) |
What to set `SPARK_HOME` to? | 30,824,818 | 10 | 2015-06-14T00:12:30Z | 30,825,179 | 17 | 2015-06-14T01:29:26Z | [
"python",
"apache-spark",
"pythonpath",
"pyspark",
"apache-zeppelin"
] | Installed apache-maven-3.3.3, scala 2.11.6, then ran:
```
$ git clone git://github.com/apache/spark.git -b branch-1.4
$ cd spark
$ build/mvn -DskipTests clean package
```
Finally:
```
$ git clone https://github.com/apache/incubator-zeppelin
$ cd incubator-zeppelin/
$ mvn install -DskipTests
```
Then ran the server:
```
$ bin/zeppelin-daemon.sh start
```
Running a simple notebook beginning with `%pyspark`, I got an error about `py4j` not being found. Just did `pip install py4j` ([ref](http://stackoverflow.com/q/26533169)).
Now I'm getting this error:
```
pyspark is not responding Traceback (most recent call last):
File "/tmp/zeppelin_pyspark.py", line 22, in <module>
from pyspark.conf import SparkConf
ImportError: No module named pyspark.conf
```
I've tried setting my `SPARK_HOME` to: `/spark/python:/spark/python/lib`. No change. | Two environment variables are required:
```
SPARK_HOME=/spark
PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-VERSION-src.zip:$PYTHONPATH
``` |
What does this: s[s[1:] == s[:-1]] do in numpy? | 30,831,084 | 17 | 2015-06-14T15:20:13Z | 30,831,133 | 19 | 2015-06-14T15:25:17Z | [
"python",
"numpy"
] | I've been looking for a way to efficiently check for duplicates in a numpy array and stumbled upon a question that contained an answer using this code.
What does this line mean in numpy?
```
s[s[1:] == s[:-1]]
```
Would like to understand the code before applying it. Looked in the Numpy doc but had trouble finding this information. | The slices `[1:]` and `[:-1]` mean *all but the first* and *all but the last* elements of the array:
```
>>> import numpy as np
>>> s = np.array((1, 2, 2, 3)) # four element array
>>> s[1:]
array([2, 2, 3]) #Â last three elements
>>> s[:-1]
array([1, 2, 2]) # first three elements
```
therefore the comparison generates an array of boolean comparisons between each element `s[x]` and its *"neighbour"* `s[x+1]`, which will be one shorter than the original array (as the last element has no neighbour):
```
>>> s[1:] == s[:-1]
array([False, True, False], dtype=bool)
```
and using that array to index the original array gets you the elements where the comparison is `True`, i.e. the elements that are the same as their neighbour:
```
>>> s[s[1:] == s[:-1]]
array([2])
```
Note that this only identifies **adjacent** duplicate values. |
Multiprocessing IOError: bad message length | 30,834,132 | 16 | 2015-06-14T20:23:15Z | 31,794,365 | 10 | 2015-08-03T18:33:08Z | [
"python",
"numpy",
"multiprocessing",
"pool",
"ioerror"
] | I get an `IOError: bad message length` when passing large arguments to the `map` function. How can I avoid this?
The error occurs when I set `N=1500` or bigger.
The code is:
```
import numpy as np
import multiprocessing
def func(args):
i=args[0]
images=args[1]
print i
return 0
N=1500 #N=1000 works fine
images=[]
for i in np.arange(N):
images.append(np.random.random_integers(1,100,size=(500,500)))
iter_args=[]
for i in range(0,1):
iter_args.append([i,images])
pool=multiprocessing.Pool()
print pool
pool.map(func,iter_args)
```
In the docs of `multiprocessing` there is the function `recv_bytes` that raises an IOError. Could it be because of this? (<https://python.readthedocs.org/en/v2.7.2/library/multiprocessing.html>)
**EDIT**
If I use `images` as a numpy array instead of a list, I get a different error: `SystemError: NULL result without error in PyObject_Call`.
A bit different code:
```
import numpy as np
import multiprocessing
def func(args):
i=args[0]
images=args[1]
print i
return 0
N=1500 #N=1000 works fine
images=[]
for i in np.arange(N):
images.append(np.random.random_integers(1,100,size=(500,500)))
images=np.array(images) #new
iter_args=[]
for i in range(0,1):
iter_args.append([i,images])
pool=multiprocessing.Pool()
print pool
pool.map(func,iter_args)
```
**EDIT2** The actual function that I use is:
```
def func(args):
i=args[0]
images=args[1]
image=np.mean(images,axis=0)
np.savetxt("image%d.txt"%(i),image)
return 0
```
Additionally, the `iter_args` do not contain the same set of images:
```
iter_args=[]
for i in range(0,1):
rand_ind=np.random.random_integers(0,N-1,N)
iter_args.append([i,images[rand_ind]])
``` | You're creating a pool and sending all the images at once to func(). If you can get away with working on a single image at once, try something like this, which runs to completion with N=10000 in 35s with Python 2.7.10 for me:
```
import numpy as np
import multiprocessing
def func(args):
i = args[0]
img = args[1]
print "{}: {} {}".format(i, img.shape, img.sum())
return 0
N=10000
images = ((i, np.random.random_integers(1,100,size=(500,500))) for i in xrange(N))
pool=multiprocessing.Pool(4)
pool.imap(func, images)
pool.close()
pool.join()
```
The key here is to use iterators so you don't have to hold all the data in memory at once. For instance I converted images from an array holding all the data to a generator expression to create the image only when needed. You could modify this to load your images from disk or whatever. I also used pool.imap instead of pool.map.
If you can, try to load the image data in the worker function. Right now you have to serialize all the data and ship it across to another process. If your image data is larger, this might be a bottleneck.
[update now that we know func has to handle all images at once]
You could do an iterative mean on your images. Here's a solution without using multiprocessing. To use multiprocessing, you could divide your images into chunks, and farm those chunks out to the pool.
```
import numpy as np
N=10000
shape = (500,500)
def func(images):
average = np.full(shape, 0)
for i, img in images:
average += img / N
return average
images = ((i, np.full(shape,i)) for i in range(N))
print func(images)
``` |
AttributeError: FileInput instance has no attribute '__exit__' | 30,835,090 | 7 | 2015-06-14T22:13:03Z | 30,835,248 | 13 | 2015-06-14T22:32:43Z | [
"python"
] | I am trying to read from multiple input files and print the second row from each file next to each other as a table
```
import sys
import fileinput
with fileinput.input(files=('cutflow_TTJets_1l.txt ', 'cutflow_TTJets_1l.txt ')) as f:
for line in f:
proc(line)
def proc(line):
parts = line.split("&") # split line into parts
if "&" in line: # if at least 2 parts/columns
print parts[1] # print column 2
```
But I get a "AttributeError: FileInput instance has no attribute '`__exit__`'" | The problem is that as of python 2.7.10, the fileinput module does not support being used as a context manager, i.e. the `with` statement, so you have to handle closing the sequence yourself. The following should work:
```
f = fileinput.input(files=('cutflow_TTJets_1l.txt ', 'cutflow_TTJets_1l.txt '))
for line in f:
proc(line)
f.close()
```
Note that in recent versions of python 3, you can use this module as a context manager.
---
For the second part of the question, assuming that each file is similarly formatted with an equal number of data lines of the form `xxxxxx & xxxxx`, one can make a table of the data from the second column of each data as follows:
Start with an empty list to be a table where the rows will be lists of second column entries from each file:
```
table = []
```
Now iterate over all lines in the `fileinput` sequence, using the `fileinput.isfirstline()` to check if we are at a new file and make a new row:
```
for line in f:
if fileinput.isfirstline():
row = []
table.append(row)
parts = line.split('&')
if len(parts) > 1:
row.append(parts[1].strip())
f.close()
```
Now `table` will be the transpose of what you really want, which is each row containing the second column entries of a given line of each file. To transpose the list, one can use `zip` and then loop over rows the transposed table, using the `join` string method to print each row with a comma separator (or whatever separator you want):
```
for row in zip(*table):
print(', '.join(row))
``` |
How to get the index of an integer from a list if the list contains a boolean? | 30,843,103 | 14 | 2015-06-15T10:40:23Z | 30,843,199 | 13 | 2015-06-15T10:45:54Z | [
"python",
"list",
"indexing"
] | I am just starting with Python.
How to get index of integer `1` from a list if the list contains a boolean `True` object before the `1`?
```
>>> lst = [True, False, 1, 3]
>>> lst.index(1)
0
>>> lst.index(True)
0
>>> lst.index(0)
1
```
I think Python considers `0` as `False` and `1` as `True` in the argument of the `index` method. How can I get the index of integer `1` (i.e. `2`)?
Also what is the reasoning or logic behind treating boolean object this way in list?
As from the solutions, I can see it is not so straightforward. | The [documentation](https://docs.python.org/3/library/stdtypes.html#lists) says that
> Lists are mutable sequences, typically used to store collections of
> homogeneous items (where the precise degree of similarity will vary by
> application).
You shouldn't store heterogeneous data in lists.
The implementation of `list.index` only performs the comparison using `Py_EQ` (the `==` operator). In your case that comparison returns a truthy value because `True` and `False` have the integer values 1 and 0, respectively ([the bool class is a subclass of int](https://docs.python.org/3/library/functions.html#bool) after all).
However, you could use a generator expression and the [built-in `next` function](https://docs.python.org/3/library/functions.html#next) (to get the first value from the generator) like this:
```
In [4]: next(i for i, x in enumerate(lst) if not isinstance(x, bool) and x == 1)
Out[4]: 2
```
Here we check if `x` is an instance of `bool` *before* comparing `x` to 1.
Keep in mind that `next` can raise `StopIteration`, in that case it may be desired to (re-)raise `ValueError` (to mimic the behavior of `list.index`).
Wrapping this all in a function:
```
def index_same_type(it, val):
    gen = (i for i, x in enumerate(it) if type(x) is type(val) and x == val)
    try:
        return next(gen)
    except StopIteration:
        raise ValueError('{!r} is not in iterable'.format(val)) from None
```
Some examples:
```
In [34]: index_same_type(lst, 1)
Out[34]: 2
In [35]: index_same_type(lst, True)
Out[35]: 0
In [37]: index_same_type(lst, 42)
ValueError: 42 is not in iterable
``` |
How to get the index of an integer from a list if the list contains a boolean? | 30,843,103 | 14 | 2015-06-15T10:40:23Z | 30,843,812 | 7 | 2015-06-15T11:17:58Z | [
"python",
"list",
"indexing"
] | I am just starting with Python.
How to get index of integer `1` from a list if the list contains a boolean `True` object before the `1`?
```
>>> lst = [True, False, 1, 3]
>>> lst.index(1)
0
>>> lst.index(True)
0
>>> lst.index(0)
1
```
I think Python considers `0` as `False` and `1` as `True` in the argument of the `index` method. How can I get the index of integer `1` (i.e. `2`)?
Also, what is the reasoning or logic behind treating boolean objects this way in lists?
As the solutions show, it is not so straightforward. | Booleans **are** integers in Python, and this is why you can use them just like any integer:
```
>>> 1 + True
2
>>> [1][False]
1
```
[this doesn't mean you should :)]
This is due to the fact that `bool` is a subclass of `int`, and almost always a boolean will behave just like 0 or 1 (except when it is cast to string - you will get `"False"` and `"True"` instead).
Here is one more idea for how you can achieve what you want (however, try to rethink your logic taking into account the information above):
```
>>> class force_int(int):
...     def __eq__(self, other):
...         return int(self) == other and not isinstance(other, bool)
...
>>> force_int(1) == True
False
>>> lst.index(force_int(1))
2
```
This code overrides the `__eq__` method of `int`, which the `index` method uses to compare elements, so that it ignores booleans.
Linear programming with scipy.optimize.linprog | 30,849,883 | 3 | 2015-06-15T16:06:24Z | 30,850,123 | 7 | 2015-06-15T16:20:13Z | [
"python",
"numpy",
"scipy"
] | I've just checked a simple linear programming problem with scipy.optimize.linprog:
```
1*x[1] + 2*x[2] -> max
1*x[1] + 0*x[2] <= 5
0*x[1] + 1*x[2] <= 5
1*x[1] + 0*x[2] >= 1
0*x[1] + 1*x[2] >= 1
1*x[1] + 1*x[2] <= 6
```
And got a very strange result; I expected that x[1] would be 1 and x[2] would be 5, but:
```
>>> print optimize.linprog([1, 2], A_ub=[[1, 1]], b_ub=[6], bounds=(1, 5), method='simplex')
status: 0
slack: array([ 4., 4., 4., 0., 0.])
success: True
fun: 3.0
x: array([ 1., 1.])
message: 'Optimization terminated successfully.'
nit: 2
```
Can anyone explain why I got this strange result? | `optimize.linprog` always minimizes your target function. If you want to maximize instead, you can use the fact that `max(f(x)) == -min(-f(x))`
```
from scipy import optimize

optimize.linprog(
    c=[-1, -2],
    A_ub=[[1, 1]],
    b_ub=[6],
    bounds=(1, 5),
    method='simplex'
)
```
This will give you your expected result, with the value `-f(x) = -11.0`
```
slack: array([ 0., 4., 0., 4., 0.])
message: 'Optimization terminated successfully.'
nit: 3
x: array([ 1., 5.])
status: 0
success: True
fun: -11.0
``` |
Benefit of using os.mkdir vs os.system("mkdir") | 30,854,465 | 2 | 2015-06-15T20:31:57Z | 30,854,516 | 10 | 2015-06-15T20:35:13Z | [
"python",
"python-2.7"
] | Simple question that I can't find an answer to:
Is there a benefit of using `os.mkdir("somedir")` over `os.system("mkdir somedir")` or `subprocess.call()`, beyond code portability?
Answers should apply to Python 2.7.
Edit: the point was raised that a hard-coded directory versus a variable (possibly containing user-defined data) introduces the question of security. My original question was intended to be from a system approach (i.e. what's going on under the hood) but security concerns are a valid issue and should be included when considering a complete answer, as well as directory names containing spaces | ## Correctness
Think about what happens if your directory name contains spaces:
```
mkdir hello world
```
...creates *two* directories, `hello` and `world`. And if you just blindly substitute in quotes, that won't work if your filename contains that quoting type:
```
'mkdir "' + somedir + '"'
```
...does very little good when `somedir` contains `hello "cruel world".d`.
---
## Security
In the case of:
```
os.system('mkdir somedir')
```
...consider what happens if the variable you're substituting for `somedir` is called `./$(rm -rf /)/hello`.
Also, calling `os.system()` (or `subprocess.call()` with `shell=True`) invokes a shell, which means that you can be open to bugs such as ShellShock; if your `/bin/sh` were provided by a ShellShock-vulnerable bash, and your code provided any mechanism for arbitrary environment variables to be present (as is the case with HTTP headers via CGI), this would provide an opportunity for code injection.
---
## Performance
```
os.system('mkdir somedir')
```
...starts a shell:
```
/bin/sh -c 'mkdir somedir'
```
...which then needs to be linked and loaded; needs to parse its arguments; and needs to invoke the *external command* `mkdir` (meaning *another* link and load cycle).
---
A significant improvement is the following:
```
subprocess.call(['mkdir', '--', somedir], shell=False)
```
...which only invokes the external `mkdir` command, with no shell; however, as it involves a fork()/exec() cycle, this is still a significant performance penalty over the C-library `mkdir()` call.
In the case of `os.mkdir(somedir)`, the Python interpreter directly invokes the appropriate syscall -- no external commands at all.
---
## Error Handling
If you call `os.mkdir('somedir')` and it fails, you get an `OSError` with the appropriate `errno` set, and can trivially determine the type of the error.
If the `mkdir` external command fails, you get a failed exit status, but no handle on the actual underlying problem without parsing its stderr (which is written for humans, not machine readability, and which will vary in contents depending on the system's current locale). |
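To make that error-handling contrast concrete, here is a minimal sketch (my own illustration, not part of the original answer) of inspecting the `errno` that `os.mkdir` reports:

```python
import errno
import os
import tempfile

target = os.path.join(tempfile.mkdtemp(), "somedir")
os.mkdir(target)           # first call succeeds
try:
    os.mkdir(target)       # second call fails: the directory already exists
    caught = None
except OSError as e:
    caught = e.errno       # machine-readable cause, no stderr parsing needed

print(caught == errno.EEXIST)  # True
```

With `os.system('mkdir ...')`, by contrast, the only signal you get back is a nonzero exit status.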
ensure_future not available in module asyncio | 30,854,576 | 7 | 2015-06-15T20:39:29Z | 30,854,677 | 9 | 2015-06-15T20:45:53Z | [
"python",
"python-asyncio"
] | I'm trying to run this example from the [python asyncio tasks & coroutines documentation](https://docs.python.org/3.4/library/asyncio-task.html#example-future-with-run-forever)
```
import asyncio

@asyncio.coroutine
def slow_operation(future):
    yield from asyncio.sleep(1)
    future.set_result('Future is done!')

def got_result(future):
    print(future.result())
    loop.stop()

loop = asyncio.get_event_loop()
future = asyncio.Future()
asyncio.ensure_future(slow_operation(future))
future.add_done_callback(got_result)
try:
    loop.run_forever()
finally:
    loop.close()
```
However, I get this error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'ensure_future'
```
This is the line that seems to be causing me grief:
```
asyncio.ensure_future(slow_operation(future))
```
My Python interpreter is 3.4.3 on OSX Yosemite, as is the version of documentation I linked to above, from which I copied the example, so I **shouldn't** be getting this error. Here's a terminal-grab of my Python interpreter:
```
Python 3.4.3 (default, Feb 25 2015, 21:28:45)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
Other examples from the page not referencing `asyncio.ensure_future` seem to work.
I tried opening a fresh interpreter session and importing `ensure_future` from `asyncio`
```
from asyncio import ensure_future
```
I get an import error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'ensure_future'
```
I have access to another machine running Ubuntu 14.04 with python 3.4.0 installed. I tried the same import there, and unfortunately faced the same import error.
Has the API for asyncio been changed and it's just not reflected in the documentation examples, or maybe there's a typo and `ensure_future` should really be something else in the documentation?
Does the example work (or break) for other members of the SO community?
Thanks. | <https://docs.python.org/3.4/library/asyncio-task.html#asyncio.ensure_future>
> `asyncio.ensure_future(coro_or_future, *, loop=None)`
>
> Schedule the execution of a coroutine object: wrap it in a future. Return a Task object.
>
> If the argument is a `Future`, it is returned directly.
>
> **New in version 3.4.4.**
That's about it for "[Who is to blame?](https://en.wikipedia.org/wiki/Who_is_to_Blame%3F)". And regarding "[What is to be done?](https://en.wikipedia.org/wiki/What_Is_to_Be_Done%3F_%28novel%29)":
> `asyncio.async(coro_or_future, *, loop=None)`
>
> A deprecated alias to `ensure_future().`
>
> Deprecated since version 3.4.4. |
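If code has to run on interpreters both older and newer than 3.4.4, one possible hedge (my own suggestion, not from the docs) is to resolve the name once at import time. Using `getattr` with a string also sidesteps the `SyntaxError` that a literal `asyncio.async` causes on Python 3.7+, where `async` became a reserved keyword:

```python
import asyncio

# Prefer the modern name; fall back to the deprecated alias on 3.4.0-3.4.3.
ensure_future = getattr(asyncio, "ensure_future", None) or getattr(asyncio, "async")

print(callable(ensure_future))  # True
```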
Docker-compose and pdb | 30,854,967 | 10 | 2015-06-15T21:05:27Z | 30,901,026 | 9 | 2015-06-17T19:47:31Z | [
"python",
"docker",
"pdb",
"docker-compose"
] | I see that I'm not the first one to ask the question but there was no clear answer to this:
How to use pdb with docker-compose in Python development?
When you ask uncle Google about `django docker` you get awesome docker-compose examples and tutorials, and I have an environment working - I can run `docker-compose up` and I have a neat developer environment **but the PDB is not working** (which is very sad).
I can settle for running `docker-compose run my-awesome-app python app.py 0.0.0.0:8000`, but then I can't access my application over <http://127.0.0.1:8000> from the host (I can with `docker-compose up`), and it seems that each time I use `run` new containers are made, like `dir_app_13` and `dir_db_4`, which I don't desire at all.
People of good will please aid me.
PS
I'm using pdb++ for that example and a basic docker-compose.yml from [this django example](https://docs.docker.com/compose/django/). Also I experimented but nothing seems to help me. And I'm using docker-compose [1.3.0rc3](https://github.com/docker/compose/tree/1.3.0rc3) as it has Dockerfile pointing support. | Try running your web container with the `--service-ports` option: `docker-compose run --service-ports web` |
What is a reliable isnumeric() function for python 3? | 30,855,314 | 2 | 2015-06-15T21:32:34Z | 30,855,376 | 7 | 2015-06-15T21:36:59Z | [
"python",
"regex",
"validation",
"python-3.x",
"isnumeric"
] | I am attempting to do what should be very simple and check to see if a value in an `Entry` field is a valid and real number. The `str.isnumeric()` method does not account for "-" negative numbers, or "." decimal numbers.
I tried writing a function for this:
```
def IsNumeric(self, event):
    w = event.widget
    if (not w.get().isnumeric()):
        if ("-" not in w.get()):
            if ("." not in w.get()):
                w.delete(0, END)
                w.insert(0, '')
```
This works just fine until you go back and type letters in there. Then it fails.
I researched the possibility of using the `.split()` method, but I could not figure out a reliable regex to deal with it.
This is a perfectly normal thing that needs to be done. Any ideas? | ```
try:
    float(w.get())
except ValueError:
    pass  # wasn't numeric
``` |
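Wrapped up as a reusable helper (a sketch of mine; note that `float()` also accepts strings like `'inf'` and `'nan'`, which may or may not be what you want):

```python
def is_numeric(text):
    """Return True if `text` parses as a float (handles '-', '.', exponents)."""
    try:
        float(text)
    except ValueError:
        return False
    return True

print(is_numeric("-12.5"))  # True
print(is_numeric("1e-3"))   # True
print(is_numeric("12a"))    # False
```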
Analytical solution for Linear Regression using Python vs. Julia | 30,855,655 | 10 | 2015-06-15T21:56:55Z | 30,856,430 | 9 | 2015-06-15T23:09:21Z | [
"python",
"matrix",
"julia-lang"
] | Using example from Andrew Ng's class (finding parameters for Linear Regression using normal equation):
With Python:
```
X = np.array([[1, 2104, 5, 1, 45], [1, 1416, 3, 2, 40], [1, 1534, 3, 2, 30], [1, 852, 2, 1, 36]])
y = np.array([[460], [232], [315], [178]])
θ = ((np.linalg.inv(X.T.dot(X))).dot(X.T)).dot(y)
print(θ)
```
Result:
```
[[ 7.49398438e+02]
[ 1.65405273e-01]
[ -4.68750000e+00]
[ -4.79453125e+01]
[ -5.34570312e+00]]
```
With Julia:
```
X = [1 2104 5 1 45; 1 1416 3 2 40; 1 1534 3 2 30; 1 852 2 1 36]
y = [460; 232; 315; 178]
θ = ((X' * X)^-1) * X' * y
```
Result:
```
5-element Array{Float64,1}:
207.867
0.0693359
134.906
-77.0156
-7.81836
```
Furthermore, when I multiply X by Julia's θ (but not Python's) I get numbers close to y.
I can't figure out what I am doing wrong. Thanks! | ## Using X^-1 vs the pseudo inverse
**pinv**(X), which corresponds to [the pseudo inverse](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse#Applications), is more broadly applicable than **inv**(X), which `X^-1` equates to. Neither Julia nor Python does well using **inv**, but in this case apparently Julia does better.
but if you change the expression to
```
julia> z=pinv(X'*X)*X'*y
5-element Array{Float64,1}:
188.4
0.386625
-56.1382
-92.9673
-3.73782
```
you can verify that X\*z = y
```
julia> X*z
4-element Array{Float64,1}:
460.0
232.0
315.0
178.0
``` |
Analytical solution for Linear Regression using Python vs. Julia | 30,855,655 | 10 | 2015-06-15T21:56:55Z | 30,856,590 | 9 | 2015-06-15T23:27:01Z | [
"python",
"matrix",
"julia-lang"
] | Using example from Andrew Ng's class (finding parameters for Linear Regression using normal equation):
With Python:
```
X = np.array([[1, 2104, 5, 1, 45], [1, 1416, 3, 2, 40], [1, 1534, 3, 2, 30], [1, 852, 2, 1, 36]])
y = np.array([[460], [232], [315], [178]])
θ = ((np.linalg.inv(X.T.dot(X))).dot(X.T)).dot(y)
print(θ)
```
Result:
```
[[ 7.49398438e+02]
[ 1.65405273e-01]
[ -4.68750000e+00]
[ -4.79453125e+01]
[ -5.34570312e+00]]
```
With Julia:
```
X = [1 2104 5 1 45; 1 1416 3 2 40; 1 1534 3 2 30; 1 852 2 1 36]
y = [460; 232; 315; 178]
θ = ((X' * X)^-1) * X' * y
```
Result:
```
5-element Array{Float64,1}:
207.867
0.0693359
134.906
-77.0156
-7.81836
```
Furthermore, when I multiply X by Julia's θ (but not Python's) I get numbers close to y.
I can't figure out what I am doing wrong. Thanks! | A more numerically robust approach in Python, without having to do the matrix algebra yourself is to use [`numpy.linalg.lstsq`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html) to do the regression:
```
In [29]: np.linalg.lstsq(X, y)
Out[29]:
(array([[ 188.40031942],
[ 0.3866255 ],
[ -56.13824955],
[ -92.9672536 ],
[ -3.73781915]]),
array([], dtype=float64),
4,
array([ 3.08487554e+03, 1.88409728e+01, 1.37100414e+00,
1.97618336e-01]))
```
(Compare the solution vector with @waTeim's answer in Julia).
You can see the source of the ill-conditioning by printing the matrix inverse you're calculating:
```
In [30]: np.linalg.inv(X.T.dot(X))
Out[30]:
array([[ -4.12181049e+13, 1.93633440e+11, -8.76643127e+13,
-3.06844458e+13, 2.28487459e+12],
[ 1.93633440e+11, -9.09646601e+08, 4.11827338e+11,
1.44148665e+11, -1.07338299e+10],
[ -8.76643127e+13, 4.11827338e+11, -1.86447963e+14,
-6.52609055e+13, 4.85956259e+12],
[ -3.06844458e+13, 1.44148665e+11, -6.52609055e+13,
-2.28427584e+13, 1.70095424e+12],
[ 2.28487459e+12, -1.07338299e+10, 4.85956259e+12,
1.70095424e+12, -1.26659193e+11]])
```
Eeep!
Taking the dot product of this with `X.T` leads to a catastrophic loss of precision. |
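One way to quantify that loss of precision (my own addition, assuming NumPy is available) is the condition number: `X` has 4 rows but 5 columns, so the 5x5 matrix `X.T.dot(X)` is at most rank 4 and therefore singular, which `np.linalg.cond` makes obvious:

```python
import numpy as np

X = np.array([[1, 2104, 5, 1, 45],
              [1, 1416, 3, 2, 40],
              [1, 1534, 3, 2, 30],
              [1,  852, 2, 1, 36]], dtype=float)

# A well-conditioned matrix has a condition number near 1; this one is
# numerically singular, so the value is astronomically large.
cond = np.linalg.cond(X.T.dot(X))
print(cond)
```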
How to delete documents from Elasticsearch | 30,859,142 | 5 | 2015-06-16T05:00:03Z | 30,859,283 | 9 | 2015-06-16T05:11:42Z | [
"python",
"elasticsearch"
] | I can't find any example of deleting documents from `Elasticsearch` in Python. What I've seen so far are the definitions of the `delete` and `delete_by_query` functions. But for some reason the [documentation](http://elasticsearch-py.readthedocs.org/en/master/api.html#elasticsearch.Elasticsearch.delete_by_query) does not provide even a microscopic example of using these functions. The bare list of parameters does not tell me much if I do not know how to correctly feed them into the function call. So, let's say, I've just inserted one new doc like so:
```
doc = {'name':'Jacobian'}
db.index(index="reestr",doc_type="some_type",body=doc)
```
Who in the world knows how I can now delete this document using `delete` and `delete_by_query`? | Since you are not giving a document id while indexing your document, you have to get the auto-generated document id from the return value and delete according to the id. Or you can define the id yourself; try the following:
```
db.index(index="reestr",doc_type="some_type",id=1919, body=doc)
db.delete(index="reestr",doc_type="some_type",id=1919)
```
Otherwise, you need to look at the return value:
```
r = db.index(index="reestr",doc_type="some_type", body=doc)
# r = {u'_type': u'some_type', u'_id': u'AU36zuFq-fzpr_HkJSkT', u'created': True, u'_version': 1, u'_index': u'reestr'}
db.delete(index="reestr",doc_type="some_type",id=r['_id'])
```
Another example, for `delete_by_query`: let's say that after adding several documents with name='Jacobian', you run the following to delete all documents with name='Jacobian':
```
db.delete_by_query(index='reestr',doc_type='some_type', q={'name': 'Jacobian'})
``` |
How to fix Python ValueError:bad marshal data? | 30,861,493 | 8 | 2015-06-15T18:54:48Z | 30,861,494 | 13 | 2015-06-15T18:54:48Z | [
"python"
] | Running a flexget Python script on Ubuntu, I get an error:
```
$ flexget series forget "Orange is the new black" s03e01
Traceback (most recent call last):
  File "/usr/local/bin/flexget", line 7, in <module>
    from flexget import main
  File "/usr/local/lib/python2.7/dist-packages/flexget/__init__.py", line 11, in <module>
    from flexget.manager import Manager
  File "/usr/local/lib/python2.7/dist-packages/flexget/manager.py", line 21, in <module>
    from sqlalchemy.ext.declarative import declarative_base
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/declarative/__init__.py", line 8, in <module>
    from .api import declarative_base, synonym_for, comparable_using, \
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/declarative/api.py", line 11, in <module>
    from ...orm import synonym as _orm_synonym, \
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/__init__.py", line 17, in <module>
    from .mapper import (
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/mapper.py", line 27, in <module>
    from . import properties
ValueError: bad marshal data (unknown type code)
``` | If you get that error, the compiled version of the Python module (the .pyc file) is probably corrupt. Gentoo Linux provides `python-updater`, but on Debian the easier fix is to just delete the offending .pyc file. If you don't know which .pyc it is, just delete all of them (as root):
```
find /usr -name '*.pyc' -delete
``` |
Installing new versions of Python on Cygwin does not install Pip? | 30,863,501 | 20 | 2015-06-16T09:19:10Z | 31,958,249 | 40 | 2015-08-12T07:07:19Z | [
"python",
"cygwin",
"pip"
] | While I am aware of the option of [installing Pip from source](https://pip.pypa.io/en/latest/installing.html), I'm trying to avoid going down that path so that updates to Pip will be managed by Cygwin's package management.
I've [recently learned](http://stackoverflow.com/a/12476379/2489598) that the latest versions of Python include Pip. However, even though I have recently installed the latest versions of Python from the Cygwin repos, Bash doesn't recognize a valid Pip install on the system.
```
896/4086 MB RAM 0.00 0.00 0.00 1/12 Tue, Jun 16, 2015 ( 3:53:22am CDT) [0 jobs]
[ethan@firetail: +2] ~ $ python -V
Python 2.7.10
892/4086 MB RAM 0.00 0.00 0.00 1/12 Tue, Jun 16, 2015 ( 3:53:27am CDT) [0 jobs]
[ethan@firetail: +2] ~ $ python3 -V
Python 3.4.3
883/4086 MB RAM 0.00 0.00 0.00 1/12 Tue, Jun 16, 2015 ( 3:53:34am CDT) [0 jobs]
[ethan@firetail: +2] ~ $ pip
bash: pip: command not found
878/4086 MB RAM 0.00 0.00 0.00 1/12 Tue, Jun 16, 2015 ( 3:53:41am CDT) [0 jobs]
[ethan@firetail: +2] ~ $ pip2
bash: pip2: command not found
876/4086 MB RAM 0.00 0.00 0.00 1/12 Tue, Jun 16, 2015 ( 3:53:42am CDT) [0 jobs]
[ethan@firetail: +2] ~ $ pip3
bash: pip3: command not found
```
Note that the installed Python 2.7.10 and Python 3.4.3 are both recent enough that they should include Pip.
Is there something that I might have overlooked? Could there be a new install of Pip that isn't in the standard binary directories referenced in the $PATH? If the Cygwin packages of Python do in fact lack an inclusion of Pip, is that something that's notable enough to warrant a bug report to the Cygwin project? | [cel](https://stackoverflow.com/users/2272172/cel) self-answered this question in a [comment above](https://stackoverflow.com/questions/30863501/installing-new-versions-of-python-on-cygwin-does-not-install-pip/31958249#comment49786910_30863501). For posterity, let's convert this helpfully working solution into a genuine answer.
Unfortunately, Cygwin currently fails to:
* Provide `pip`, `pip2`, or `pip3` packages.
* Install the `pip` and `pip2` commands when the `python` package is installed.
* Install the `pip3` command when the `python3` package is installed.
It's time to roll up our grubby command-line sleeves and get it done ourselves.
## What's the Catch?
Since *no* `pip` packages are currently available, the answer to the specific question of "Is `pip` installable as a Cygwin package?" is technically "Sorry, son."
That said, `pip` *is* trivially installable via a one-liner. This requires manually re-running said one-liner to update `pip` but has the distinct advantage of actually working. (Which is more than we usually get in Cygwin Land.)
## `pip3` Installation, Please
**To install `pip3`,** the Python 3-specific version of `pip`, under Cygwin:
```
$ python3 -m ensurepip
```
This assumes the `python3` Cygwin package to have been installed, of course.
## `pip2` Installation, Please
**To install both `pip` and `pip2`,** the Python 2-specific versions of `pip`, under Cygwin:
```
$ python -m ensurepip
```
This assumes the `python` Cygwin package to have been installed, of course. |
What's wrong with order for not() in python? | 30,863,866 | 3 | 2015-06-16T09:36:22Z | 30,863,932 | 8 | 2015-06-16T09:39:16Z | [
"python",
"operators",
"boolean-expression"
] | What's wrong with using `not()` in Python? I tried this:
```
In [1]: not(1) + 1
Out[1]: False
```
And it worked fine. But after readjusting it,
```
In [2]: 1 + not(1)
Out[2]: SyntaxError: invalid syntax
```
It gives an error. How does the order matter? | `not` is a [*unary operator*](https://docs.python.org/2/reference/expressions.html#boolean-operations), not a function, so please don't use the `(..)` call notation on it. The parentheses are ignored when parsing the expression, and `not(1) + 1` is the same thing as `not 1 + 1`.
Due to precedence rules Python tries to parse the second expression as:
```
1 (+ not) 1
```
which is invalid syntax. If you really must use `not` after `+`, use parentheses:
```
1 + (not 1)
```
For the same reasons, `not 1 + 1` first calculates `1 + 1`, then applies `not` to the result. |
Peewee KeyError: 'i' | 30,866,058 | 3 | 2015-06-16T11:17:27Z | 30,866,375 | 8 | 2015-06-16T11:33:06Z | [
"python",
"flask",
"peewee"
] | I am getting an odd error from Python's peewee module that I am not able to resolve. Any ideas? I basically want to have 'batches' that contain multiple companies within them. I am making a batch instance for each batch and assigning all of the companies within it to that batch's row ID.
**Traceback**
```
Traceback (most recent call last):
  File "app.py", line 16, in <module>
    import models
  File "/Users/wyssuser/Desktop/dscraper/models.py", line 10, in <module>
    class Batch(Model):
  File "/Library/Python/2.7/site-packages/peewee.py", line 3647, in __new__
    cls._meta.prepared()
  File "/Library/Python/2.7/site-packages/peewee.py", line 3497, in prepared
    field = self.fields[item.lstrip('-')]
KeyError: 'i'
```
**models.py**
```
from datetime import datetime

from flask.ext.bcrypt import generate_password_hash
from flask.ext.login import UserMixin
from peewee import *

DATABASE = SqliteDatabase('engineering.db')


class Batch(Model):
    initial_contact_date = DateTimeField(formats="%m-%d-%Y")

    class Meta:
        database = DATABASE
        order_by = ('initial_contact_date')

    @classmethod
    def create_batch(cls, initial_contact_date):
        try:
            with DATABASE.transaction():
                cls.create(
                    initial_contact_date=datetime.now
                )
            print 'Created batch!'
        except IntegrityError:
            print 'Whoops, there was an error!'


class Company(Model):
    batch_id = ForeignKeyField(rel_model=Batch, related_name='companies')
    company_name = CharField()
    website = CharField(unique=True)
    email_address = CharField()
    scraped_on = DateTimeField(formats="%m-%d-%Y")
    have_contacted = BooleanField(default=False)
    current_pipeline_phase = IntegerField(default=0)
    day_0_message_id = IntegerField()
    day_0_response = IntegerField()
    day_0_sent = DateTimeField()
    day_5_message_id = IntegerField()
    day_5_response = IntegerField()
    day_5_sent = DateTimeField()
    day_35_message_id = IntegerField()
    day_35_response = IntegerField()
    day_35_sent = DateTimeField()
    day_125_message_id = IntegerField()
    day_125_response = IntegerField()
    day_125_sent = DateTimeField()
    sector = CharField()

    class Meta:
        database = DATABASE
        order_by = ('have_contacted', 'current_pipeline_phase')

    @classmethod
    def create_company(cls, company_name, website, email_address):
        try:
            with DATABASE.transaction():
                cls.create(company_name=company_name, website=website, email_address=email_address, scraped_on=datetime.now)
            print 'Saved {}'.format(company_name)
        except IntegrityError:
            print '{} already exists in the database'.format(company_name)


def initialize():
    DATABASE.connect()
    DATABASE.create_tables([Batch, Company, User], safe=True)
    DATABASE.close()
``` | The issue lies in the `Meta` options of your `Batch` class. See peewee's [example](https://peewee.readthedocs.org/en/latest/peewee/example.html) where `order_by` is used:
```
class User(BaseModel):
    username = CharField(unique=True)
    password = CharField()
    email = CharField()
    join_date = DateTimeField()

    class Meta:
        order_by = ('username',)
```
where `order_by` is a tuple containing only `username`. In your example you have omitted the comma, which makes it a regular string instead of a tuple. This would be the correct version of that part of your code:
```
class Batch(Model):
    initial_contact_date = DateTimeField(formats="%m-%d-%Y")

    class Meta:
        database = DATABASE
        order_by = ('initial_contact_date',)
``` |
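A quick demo (my own, for completeness) of why the parentheses alone don't help, and why the error message complains about `'i'` of all things:

```python
a = ('initial_contact_date')    # just a parenthesized string
b = ('initial_contact_date',)   # the trailing comma makes it a tuple

print(type(a).__name__)  # str
print(type(b).__name__)  # tuple

# Iterating over the string yields single characters, so peewee ends up
# looking for a field named 'i', hence KeyError: 'i'.
print(next(iter(a)))     # i
```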
How to sort list of strings by count of a certain character? | 30,870,933 | 2 | 2015-06-16T14:45:45Z | 30,870,984 | 9 | 2015-06-16T14:47:41Z | [
"python",
"list",
"python-2.7",
"sorting"
] | I have a list of strings and I need to order it by the appearance of a certain character, let's say `"+"`.
So, for instance, if I have a list like this:
```
["blah+blah", "blah+++blah", "blah+bl+blah", "blah"]
```
I need to get:
```
["blah", "blah+blah", "blah+bl+blah", "blah+++blah"]
```
I've been studying the `sort()` method, but I don't fully understand how to use the key parameter for complex order criteria. Obviously `sort(key=count("+"))` doesn't work. Is it possible to order the list like I want with `sort()` or do I need to make a function for it? | Yes, [`list.sort`](https://docs.python.org/3/library/stdtypes.html#list.sort) can do it, though you need to specify the `key` argument:
```
In [4]: l.sort(key=lambda x: x.count('+'))
In [5]: l
Out[5]: ['blah', 'blah+blah', 'blah+bl+blah', 'blah+++blah']
```
In this code the `key` function accepts a single argument and uses `str.count` to count the occurrences of `'+'` in it.
As for `list.sort(key=count('+'))`, you *can* get it to work if you define the `count` function like this (with [`operator.methodcaller`](https://docs.python.org/3/library/operator.html#operator.methodcaller)):
```
count = lambda x: methodcaller('count', x) # from operator import methodcaller
``` |
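Putting that together, a small sanity check (mine, not part of the original answer) showing that the `methodcaller`-based `count('+')` really does work as a `key`:

```python
from operator import methodcaller

count = lambda x: methodcaller('count', x)

l = ["blah+blah", "blah+++blah", "blah+bl+blah", "blah"]
l.sort(key=count('+'))  # count('+') calls s.count('+') on each element
print(l)  # ['blah', 'blah+blah', 'blah+bl+blah', 'blah+++blah']
```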
How is a raw string useful in Python? | 30,871,384 | 3 | 2015-06-16T15:03:55Z | 30,871,458 | 7 | 2015-06-16T15:07:00Z | [
"python"
] | I know the raw string prefix `r` or `R` suppresses the meaning of escape sequences, but in what situation would this really be helpful? | Raw strings are commonly used for regular expressions, which need to include backslashes.
```
re.match(r'\b(\w)+', string)  # instead of re.match('\\b(\\w)+', string)
```
They are also useful for DOS file paths, which would otherwise have to double up every path separator.
```
path = r'C:\some\dir' # instead of 'C:\\some\\dir'
``` |
How to get selected option using Selenium WebDriver with Python? | 30,872,786 | 6 | 2015-06-16T16:07:27Z | 30,872,875 | 7 | 2015-06-16T16:11:52Z | [
"python",
"selenium",
"selenium-webdriver",
"selecteditem",
"selected"
] | How to get selected option using Selenium WebDriver with Python:
Does someone have a solution for a `getFirstSelectedOption`?
I'm using this to get the select element:
```
try:
    FCSelect = driver.find_element_by_id('FCenter')
    self.TestEventLog = self.TestEventLog + "<br>Verify Form Elements: F Center Select found"
except NoSuchElementException:
    self.TestEventLog = self.TestEventLog + "<br>Error: Select FCenter element not found"
```
Is there an equivalent or something close to 'getFirstSelectedOption' like this:
```
try:
    FCenterSelectedOption = FCenterSelect.getFirstSelectedOption()
    self.TestEventLog = self.TestEventLog + "<br>Verify Form Elements: F Center Selected (First) found"
except NoSuchElementException:
    self.TestEventLog = self.TestEventLog + "<br>Error: Selected Option element not found"
```
Then I would like to Verify the Contents with a `getText` like:
```
try:
    FCenterSelectedOptionText = FCenterSelectedOption.getText()
    self.TestEventLog = self.TestEventLog + "<br>Verify Form Elements: FCenter Selected Option Text found"
except NoSuchElementException:
    self.TestEventLog = self.TestEventLog + "<br>Error: Selected Option Text element not found"

if FCenterSelectedOptionText == 'F Center Option Text Here':
    self.TestEventLog = self.TestEventLog + "<br>Verify Form Elements: F Center Selected Option Text found"
else:
    self.TestEventLog = self.TestEventLog + "<br>Error: F Center 'Selected' Option Text not found"
``` | This is something that `selenium` makes it easy to deal with - the [`Select`](https://selenium-python.readthedocs.org/api.html#selenium.webdriver.support.select.Select) class:
```
from selenium.webdriver.support.select import Select
select = Select(driver.find_element_by_id('FCenter'))
selected_option = select.first_selected_option
print selected_option.text
``` |
How to merge two lists of string in Python | 30,876,691 | 3 | 2015-06-16T19:40:49Z | 30,876,717 | 7 | 2015-06-16T19:42:25Z | [
"python",
"string",
"list",
"merge"
] | I have two lists of strings:
```
a = ['a', 'b', 'c']
b = ['d', 'e', 'f']
```
It should result in:
```
['ad', 'be', 'cf']
```
What is the most pythonic way to do this? | Probably with [`zip`](https://docs.python.org/3/library/functions.html#zip):
```
c = [''.join(item) for item in zip(a,b)]
```
You can also put multiple sublists into one large iterable and use the `*` operator to unpack it, passing each sublist as a separate argument to `zip`:
```
big_list = (a, b)
c = [''.join(item) for item in zip(*big_list)]
```
You can even use the `*` operator with `zip` to go in the other direction:
```
>>> list(zip(*c))
[('a', 'b', 'c'), ('d', 'e', 'f')]
``` |
Why does slice [:-0] return empty list in Python | 30,879,473 | 3 | 2015-06-16T22:39:55Z | 30,879,490 | 11 | 2015-06-16T22:41:36Z | [
"python",
"list",
"slice",
"negative-number"
] | Stumbled upon something slightly perplexing today while writing some unittests:
```
blah = ['a', 'b', 'c']
blah[:-3] # []
blah[:-2] # ['a']
blah[:-1] # ['a', 'b']
blah[:-0] # []
```
Can't for the life of me figure out why `blah[:-0] # []` should be the case, the pattern definitely seems to suggest that it should be `['a', 'b', 'c']`. Can anybody help to shed some light on why that is the case? Haven't been able to find mention in the docs as to why that is the case. | `-0` is `0`, and a slice that goes from the beginning of a `list` inclusive to index `0` non-inclusive is an empty `list`. |
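A short demonstration (mine, not the answerer's), plus the usual workaround when the offset is a variable that can be zero:

```python
blah = ['a', 'b', 'c']

print(-0 == 0)                 # True: integers have no negative zero
print(blah[:-0] == blah[:0])   # True: both slices are empty

# To drop the last n items when n may be 0, slice with len() instead:
n = 0
print(blah[:len(blah) - n])    # ['a', 'b', 'c']
```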
PySpark add a column to a DataFrame from a TimeStampType column | 30,882,268 | 7 | 2015-06-17T04:20:17Z | 30,992,905 | 17 | 2015-06-23T02:29:00Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I have a DataFrame that look something like that. I want to operate on the day of the `date_time` field.
```
root
|-- host: string (nullable = true)
|-- user_id: string (nullable = true)
|-- date_time: timestamp (nullable = true)
```
I tried to add a column to extract the day. So far my attempts have failed.
```
df = df.withColumn("day", df.date_time.getField("day"))
org.apache.spark.sql.AnalysisException: GetField is not valid on fields of type TimestampType;
```
This has also failed
```
df = df.withColumn("day", df.select("date_time").map(lambda row: row.date_time.day))
AttributeError: 'PipelinedRDD' object has no attribute 'alias'
```
Any idea how this can be done? | You can use simple `map`:
```
df.rdd.map(lambda row:
Row(row.__fields__ + ["day"])(row + (row.date_time.day, ))
)
```
Another option is to register a function and run SQL query:
```
sqlContext.registerFunction("day", lambda x: x.day)
sqlContext.registerDataFrameAsTable(df, "df")
sqlContext.sql("SELECT *, day(date_time) as day FROM df")
```
Finally you can define udf like this:
```
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
day = udf(lambda date_time: date_time.day, IntegerType())
df.withColumn("day", day(df.date_time))
```
**EDIT**:
Actually, if you use raw SQL, a `day` function is already defined (at least in Spark 1.4), so you can omit the udf registration. Spark also provides a number of other date processing functions, including:
* getters like [`year`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.year), [`month`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.month), [`dayofmonth`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.dayofmonth)
* date arithmetic tools like [`date_add`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.date_add), [`datediff`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.datediff)
* parsers like [`from_unixtime`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.unix_timestamp) and formatters like [`date_format`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.date_format)
It is also possible to use simple date expressions like:
```
current_timestamp() - expr("INTERVAL 1 HOUR")
```
It means you can build relatively complex queries without passing data to Python. For example:
```
df = sc.parallelize([
(1, "2016-01-06 00:04:21"),
(2, "2016-05-01 12:20:00"),
(3, "2016-08-06 00:04:21")
]).toDF(["id", "ts_"])
now = lit("2016-06-01 00:00:00").cast("timestamp")
five_months_ago = now - expr("INTERVAL 5 MONTHS")
(df
# Cast string to timestamp
# For Spark 1.5 use cast("double").cast("timestamp")
.withColumn("ts", unix_timestamp("ts_").cast("timestamp"))
# Find all events in the last five months
.where(col("ts").between(five_months_ago, now))
# Find first Sunday after the event
.withColumn("next_sunday", next_day(col("ts"), "Sun"))
# Compute difference in days
.withColumn("diff", datediff(col("ts"), col("next_sunday"))))
``` |
Creating deb or rpm with setuptools - data_files | 30,885,731 | 11 | 2015-06-17T08:05:51Z | 30,938,202 | 9 | 2015-06-19T12:30:53Z | [
"python",
"python-3.x",
"rpm",
"setuptools",
"deb"
] | I have a Python 3 project.
```
MKC
├── latex
│   ├── macros.tex
│   └── main.tex
├── mkc
│   ├── cache.py
│   ├── __init__.py
│   └── __main__.py
├── README.md
├── setup.py
└── stdeb.cfg
```
On install, I would like to move my latex files to known directory, say `/usr/share/mkc/latex`, so I've told `setuptools` to include data files
```
data_files=[("/usr/share/mkc/latex",
["latex/macros.tex", "latex/main.tex"])],
```
Now when I run
```
./setup.py bdist --formats=rpm
```
or
```
./setup.py --command-packages=stdeb.command bdist_deb
```
I get the following error:
```
error: can't copy 'latex/macros.tex': doesn't exist or not a regular file
```
Running just `./setup.py bdist` works fine, so the problem must be in package creation. | When creating a deb file (I guess the same counts for a rpm file), `./setup.py --command-packages=stdeb.command bdist_deb` first creates a source distribution and uses that archive for further processing. But your LaTeX files are not included there, so they're not found.
You need to add them to the source package. Such can be achieved by adding a [MANIFEST.in](https://packaging.python.org/en/latest/distributing.html#manifest-in) with contents:
```
recursive-include latex *.tex
```
[distutils](https://docs.python.org/3/distutils/setupscript.html#installing-additional-files) from version 3.1 on would automatically include the `data_files` in a source distribution, while [setuptools](https://pythonhosted.org/setuptools/setuptools.html#generating-source-distributions) apparently works very differently. |
python nested list comprehension string concatenation | 30,887,004 | 3 | 2015-06-17T09:04:44Z | 30,887,067 | 7 | 2015-06-17T09:07:52Z | [
"python",
"list-comprehension",
"nested-lists"
] | I have a list of lists in python looking like this:
```
[['a', 'b'], ['c', 'd']]
```
I want to come up with a string like this:
```
a,b;c,d
```
So the lists should be separated with a `;` and the values of the same list should be separated with a `,`
So far I tried `','.join([y for x in test for y in x])` which returns `a,b,c,d`. Not quite there, yet, as you can see. | ```
";".join([','.join(x) for x in a])
``` |
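A quick sanity check of the join above, using the values from the question (a generator expression works just as well as the list comprehension):

```python
a = [['a', 'b'], ['c', 'd']]
result = ";".join(",".join(pair) for pair in a)
assert result == "a,b;c,d"
```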
How to show minor tick labels on log-scale with Matplotlib | 30,887,920 | 2 | 2015-06-17T09:42:27Z | 30,890,025 | 7 | 2015-06-17T11:18:42Z | [
"python",
"matplotlib"
] | Does anyone know how to show the labels of the minor ticks on a logarithmic scale with Python/Matplotlib?
Thanks! | You can use `plt.tick_params(axis='y', which='minor')` to set the minor ticks on and format them with the `matplotlib.ticker` `FormatStrFormatter`. For example,
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
x = np.linspace(0,4,1000)
y = np.exp(x)
plt.plot(x, y)
ax = plt.gca()
ax.set_yscale('log')
plt.tick_params(axis='y', which='minor')
ax.yaxis.set_minor_formatter(FormatStrFormatter("%.1f"))
plt.show()
```
 |
I want to create a script for unzip (.tar.gz) file via (Python) | 30,887,979 | 5 | 2015-06-17T09:45:41Z | 30,888,321 | 8 | 2015-06-17T10:01:29Z | [
"python",
"tar",
"unzip",
"gz"
] | I am trying to make a script for unzipping all the .tar.gz files from folders in one directory. For example, I will have a file called testing.tar.gz. If I extract it manually, I can click "extract here" and the .tar.gz file will create a new file called testing.tar. Finally, if I repeat the "extract here" step, the .tar file produces all the .pdf files.
I wonder how I can do this; I have my code here, but it doesn't seem to work.
```
import os
import tarfile
import zipfile
def extract_file(path, to_directory='.'):
if path.endswith('.zip'):
opener, mode = zipfile.ZipFile, 'r'
elif path.endswith('.tar.gz') or path.endswith('.tgz'):
opener, mode = tarfile.open, 'r:gz'
elif path.endswith('.tar.bz2') or path.endswith('.tbz'):
opener, mode = tarfile.open, 'r:bz2'
else:
raise ValueError, "Could not extract `%s` as no appropriate extractor is found" % path
cwd = os.getcwd()
os.chdir(to_directory)
try:
file = opener(path, mode)
try: file.extractall()
finally: file.close()
finally:
os.chdir(cwd)
``` | Why do you want to "press" twice to extract a .tar.gz, when you can easily do it once? Here is a simple code to extract both .tar and .tar.gz in one go:
```
import tarfile
if (fname.endswith("tar.gz")):
tar = tarfile.open(fname, "r:gz")
tar.extractall()
tar.close()
elif (fname.endswith("tar")):
tar = tarfile.open(fname, "r:")
tar.extractall()
tar.close()
``` |
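To run this over every archive in a directory in one pass, here is a small self-contained sketch. The helper name and the throwaway sample archive are illustrative, not from the question:

```python
import os
import tarfile
import tempfile

def extract_all_tarballs(directory):
    """Extract every .tar.gz/.tgz and .tar file found in `directory` into it."""
    extracted = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if name.endswith((".tar.gz", ".tgz")):
            mode = "r:gz"
        elif name.endswith(".tar"):
            mode = "r:"
        else:
            continue
        with tarfile.open(path, mode) as tar:
            tar.extractall(directory)
        extracted.append(name)
    return extracted

# Demo: build a throwaway .tar.gz in a temp dir, then extract it.
workdir = tempfile.mkdtemp()
doc = os.path.join(workdir, "report.pdf")
with open(doc, "w") as fh:
    fh.write("dummy")
with tarfile.open(os.path.join(workdir, "testing.tar.gz"), "w:gz") as tar:
    tar.add(doc, arcname="extracted/report.pdf")

done = extract_all_tarballs(workdir)
assert done == ["testing.tar.gz"]
assert os.path.exists(os.path.join(workdir, "extracted", "report.pdf"))
```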
How to print a tree in Python? | 30,893,895 | 7 | 2015-06-17T14:03:08Z | 30,893,896 | 7 | 2015-06-17T14:03:08Z | [
"python",
"tree",
"pretty-print"
] | I have the following class which represents a node of a tree:
```
class Node:
def __init__(self, name, parent=None):
self.name = name
self.parent = parent
self.children = []
# ...
if parent:
self.parent.children.append(self)
```
How to print such a tree? | This is my solution:
```
def print_tree(current_node, indent="", last='updown'):
nb_children = lambda node: sum(nb_children(child) for child in node.children) + 1
size_branch = {child: nb_children(child) for child in current_node.children}
""" Creation of balanced lists for "up" branch and "down" branch. """
up = sorted(current_node.children, key=lambda node: nb_children(node))
down = []
while up and sum(size_branch[node] for node in down) < sum(size_branch[node] for node in up):
down.append(up.pop())
""" Printing of "up" branch. """
for child in up:
        next_last = 'up' if up.index(child) == 0 else ''
        next_indent = '{0}{1}{2}'.format(indent, ' ' if 'up' in last else '│', " " * len(current_node.name))
print_tree(child, indent=next_indent, last=next_last)
""" Printing of current node. """
    if last == 'up': start_shape = '┌'
    elif last == 'down': start_shape = '└'
    elif last == 'updown': start_shape = ' '
    else: start_shape = '├'
    if up: end_shape = '┤'
    elif down: end_shape = '┐'
else: end_shape = ''
print '{0}{1}{2}{3}'.format(indent, start_shape, current_node.name, end_shape)
""" Printing of "down" branch. """
for child in down:
        next_last = 'down' if down.index(child) == len(down) - 1 else ''
        next_indent = '{0}{1}{2}'.format(indent, ' ' if 'down' in last else '│', " " * len(current_node.name))
print_tree(child, indent=next_indent, last=next_last)
```
Example of use:
```
shame = Node("shame")
conscience = Node("conscience", shame)
selfdisgust = Node("selfdisgust", shame)
embarrassment = Node("embarrassment", shame)
selfconsciousness = Node("selfconsciousness", embarrassment)
shamefacedness = Node("shamefacedness", embarrassment)
chagrin = Node("chagrin", embarrassment)
discomfiture = Node("discomfiture", embarrassment)
abashment = Node("abashment", embarrassment)
confusion = Node("confusion", embarrassment)
print_tree(shame)
```
And this is the output:
```
      ┌conscience
      ├selfdisgust
 shame┤
      │             ┌selfconsciousness
      │             ├shamefacedness
      │             ├chagrin
      └embarrassment┤
                    ├discomfiture
                    ├abashment
                    └confusion
```
You can download the code [here](https://github.com/clemtoy/pptree). |
Type hint for 'other' in magic methods? | 30,898,998 | 5 | 2015-06-17T18:01:33Z | 30,899,060 | 8 | 2015-06-17T18:04:28Z | [
"python",
"type-hinting"
] | ```
class Interval(object):
def __sub__(self, other: Interval):
pass
```
The way it is I get a 'NameError: name 'Interval' is not defined'. Can someone tell me which type would be correct here? | The class doesn't exist until after Python finishes executing all the code inside the class block, including your method definitions.
Just use a string literal instead, as suggested in [PEP 484](https://www.python.org/dev/peps/pep-0484/#forward-references):
```
class Interval(object):
def __sub__(self, other: 'Interval'):
pass
```
Note that it's perfectly fine to refer to the class *inside* a method:
```
class Interval(object):
def __sub__(self, other: 'Interval'):
Interval.do_something()
```
This is only a problem if you need to use it in a method signature or directly inside the class block. |
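A minimal runnable sketch of the forward reference in action (the interval arithmetic here is made up for illustration; the point is only that the string annotation raises no `NameError` and resolves once the class exists):

```python
class Interval(object):
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other: 'Interval') -> 'Interval':
        # 'Interval' is just a string here; it is not evaluated at class
        # definition time, so no NameError is raised.
        return Interval(self.lo - other.hi, self.hi - other.lo)

result = Interval(5, 10) - Interval(1, 2)
assert (result.lo, result.hi) == (3, 9)
# The annotation is stored as the raw string until something resolves it:
assert Interval.__sub__.__annotations__['other'] == 'Interval'
```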
What happens to exceptions raised in a with statement expression? | 30,909,463 | 8 | 2015-06-18T07:49:00Z | 30,909,636 | 7 | 2015-06-18T07:58:47Z | [
"python",
"with-statement",
"contextmanager"
] | My understanding of Python's `with` statement is as follows:
`with` statement = `with` + *expression* + `as` + *target* + `:` + *suite*
1. *expression* is executed and returns a context manager
2. context manager's `__enter__` returns a value to *target*
3. The *suite* is executed.
4. context manager's `__exit__` method is invoked
I know exceptions can be handled in step 2 and step 3; my question is: if an exception is thrown during step 1, when *expression* is executed, can I get a context manager?
If not, does that mean that the `with` statement just ensures that the *suite* is executed and closed properly?
Like `with open("file") as f`, if the file does not exist what will happen? | The `with` statement only manages exceptions in *step 3*. If an exception is raised in step 1 (executing *expression*) or in step 2 (executing the context manager `__enter__` method), you *do not have a (valid and working) context manager* to hand the exception to.
So if the file does not exist, an exception is raised and cannot be handled by a context manager, because that context manager was never created.
If that is a problem, you can always execute the *expression* part separately:
```
try:
context_manager = expression
except SomeSpecificException:
# do something about the exception
else:
with context_manager as target:
# execute the suite
``` |
Subsetting a 2D numpy array | 30,917,753 | 10 | 2015-06-18T14:10:44Z | 30,917,921 | 7 | 2015-06-18T14:17:11Z | [
"python",
"numpy",
"multidimensional-array",
"subset"
] | I have looked into documentations and also other questions here, but it seems I
have not got the hang of subsetting in numpy arrays yet.
I have a numpy array,
and for the sake of argument, let it be defined as follows:
```
import numpy as np
a = np.arange(100)
a.shape = (10,10)
# array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
# [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
# [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
# [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
# [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
# [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
# [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
# [70, 71, 72, 73, 74, 75, 76, 77, 78, 79],
# [80, 81, 82, 83, 84, 85, 86, 87, 88, 89],
# [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]])
```
now I want to choose rows and columns of `a` specified by vectors `n1` and `n2`. As an example:
```
n1 = range(5)
n2 = range(5)
```
But when I use:
```
b = a[n1,n2]
# array([ 0, 11, 22, 33, 44])
```
Then only the first fifth diagonal elements are chosen, not the whole 5x5 block. The solution I have found is to do it like this:
```
b = a[n1,:]
b = b[:,n2]
# array([[ 0, 1, 2, 3, 4],
# [10, 11, 12, 13, 14],
# [20, 21, 22, 23, 24],
# [30, 31, 32, 33, 34],
# [40, 41, 42, 43, 44]])
```
But I am sure there should be a way to do this simple task in just one command. | Another quick way to build the desired index is to use the [`np.ix_`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ix_.html) function:
```
>>> a[np.ix_(n1, n2)]
array([[ 0, 1, 2, 3, 4],
[10, 11, 12, 13, 14],
[20, 21, 22, 23, 24],
[30, 31, 32, 33, 34],
[40, 41, 42, 43, 44]])
```
This provides a convenient way to construct an open mesh from sequences of indices. |
Subsetting a 2D numpy array | 30,917,753 | 10 | 2015-06-18T14:10:44Z | 30,918,530 | 7 | 2015-06-18T14:42:09Z | [
"python",
"numpy",
"multidimensional-array",
"subset"
] | I have looked into documentations and also other questions here, but it seems I
have not got the hang of subsetting in numpy arrays yet.
I have a numpy array,
and for the sake of argument, let it be defined as follows:
```
import numpy as np
a = np.arange(100)
a.shape = (10,10)
# array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
# [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
# [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
# [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
# [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
# [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
# [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
# [70, 71, 72, 73, 74, 75, 76, 77, 78, 79],
# [80, 81, 82, 83, 84, 85, 86, 87, 88, 89],
# [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]])
```
now I want to choose rows and columns of `a` specified by vectors `n1` and `n2`. As an example:
```
n1 = range(5)
n2 = range(5)
```
But when I use:
```
b = a[n1,n2]
# array([ 0, 11, 22, 33, 44])
```
Then only the first fifth diagonal elements are chosen, not the whole 5x5 block. The solution I have found is to do it like this:
```
b = a[n1,:]
b = b[:,n2]
# array([[ 0, 1, 2, 3, 4],
# [10, 11, 12, 13, 14],
# [20, 21, 22, 23, 24],
# [30, 31, 32, 33, 34],
# [40, 41, 42, 43, 44]])
```
But I am sure there should be a way to do this simple task in just one command. | You've gotten a handful of nice examples of how to do what you want. However, it's also useful to understand the what's happening and why things work the way they do. There are a few simple rules that will help you in the future.
There's a big difference between "fancy" indexing (i.e. using a list/sequence) and "normal" indexing (using a slice). The underlying reason has to do with whether or not the array can be "regularly strided", and therefore whether or not a copy needs to be made. Arbitrary sequences therefore have to be treated differently, if we want to be able to create "views" without making copies.
In your case:
```
import numpy as np
a = np.arange(100).reshape(10,10)
n1, n2 = np.arange(5), np.arange(5)
# Not what you want
b = a[n1, n2] # array([ 0, 11, 22, 33, 44])
# What you want, but only for simple sequences
# Note that no copy of *a* is made!! This is a view.
b = a[:5, :5]
# What you want, but probably confusing at first. (Also, makes a copy.)
# np.meshgrid and np.ix_ are basically equivalent to this.
b = a[n1[:,None], n2[None,:]]
```
---
Fancy indexing with 1D sequences is basically equivalent to zipping them together and indexing with the result.
```
print "Fancy Indexing:"
print a[n1, n2]
print "Manual indexing:"
for i, j in zip(n1, n2):
print a[i, j]
```
---
However, if the sequences you're indexing with match the dimensionality of the array you're indexing (2D, in this case), the indexing is treated differently. Instead of "zipping the two together", numpy uses the indices like a mask.
In other words, `a[[[1, 2, 3]], [[1],[2],[3]]]` is treated completely differently than `a[[1, 2, 3], [1, 2, 3]]`, because the sequences/arrays that you're passing in are two-dimensional.
```
In [4]: a[[[1, 2, 3]], [[1],[2],[3]]]
Out[4]:
array([[11, 21, 31],
[12, 22, 32],
[13, 23, 33]])
In [5]: a[[1, 2, 3], [1, 2, 3]]
Out[5]: array([11, 22, 33])
```
---
To be a bit more precise,
```
a[[[1, 2, 3]], [[1],[2],[3]]]
```
is treated exactly like:
```
i = [[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
j = [[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]
a[i, j]
```
In other words, whether the input is a row/column vector is a shorthand for how the indices should repeat in the indexing.
---
`np.meshgrid` and `np.ix_` are just convenient ways to turn your 1D sequences into their 2D versions for indexing:
```
In [6]: np.ix_([1, 2, 3], [1, 2, 3])
Out[6]:
(array([[1],
[2],
[3]]), array([[1, 2, 3]]))
```
Similarly (the `sparse` argument would make it identical to `ix_` above):
```
In [7]: np.meshgrid([1, 2, 3], [1, 2, 3], indexing='ij')
Out[7]:
[array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]]),
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])]
``` |
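Tying the pieces together, a quick check (using the 10x10 array from the question) that the slice view, `np.ix_`, and hand-built broadcast fancy indexing all select the same 5x5 block, and that fancy indexing returns a copy:

```python
import numpy as np

a = np.arange(100).reshape(10, 10)
n1 = n2 = np.arange(5)

view = a[:5, :5]                          # slice: no copy, just a view
via_ix = a[np.ix_(n1, n2)]                # open-mesh fancy indexing (a copy)
via_bcast = a[n1[:, None], n2[None, :]]   # the same mesh, built by hand

assert (view == via_ix).all()
assert (view == via_bcast).all()

via_ix[0, 0] = 999        # mutating the fancy-indexed result...
assert a[0, 0] == 0       # ...leaves `a` untouched, because it is a copy
```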
Python is returning false to "1"=="1". Any ideas why? | 30,923,049 | 3 | 2015-06-18T18:30:05Z | 30,923,219 | 7 | 2015-06-18T18:39:37Z | [
"python",
"string",
"file",
"integer",
"logic"
] | I've written a scout system for my A Level computing task. The program is designed to store information on scouts at a scout hut, including badges, have a leaderboard system, and a management system for adding/finding/deleting scouts from the list. The scout information MUST be stored in a file.
File handling process for remove function (where my issue lies):
The remove scout button triggers a popup window (using tkinter). The window collects the ID of the scout, then searches through the scout file, scanning the ID of the scouts stored and comparing it to the ID that was entered. If the ID is found, it skips this line of the file, otherwise the line is copied to a temp file. Once all lines are done, the lines in temp are copied to a new blank version of the original file, and deletes/recreates the temp file as blank.
My issue:
The issue is when the program compares the ID to remove (remID) with the ID of the scouts currently being looked at in the file (sctID), it returns false when in fact they are equal. It may be an issue with my handling of the variables, my splitting of the line to get the ID, or even my data types. I just don't know. I tried converting both to string, but still false. The code for this section is below. Thank you in advance!
```
elif self._name == "rem":
remID = str(scoutID.get())
if remID != "":
#store all the lines that are in the file in a temp file
with open(fileName,"r") as f:
with open(tempFileName,"a") as ft:
lines = f.readlines()
for line in lines:
sctID = str(line.split(",")[3])
print("%s,%s,%s"%(remID, sctID, remID==sctID))
#print(remID)
if sctID != remID: #if the ID we are looking to remove isn't
#the ID of the scout we are currently looking at, move it to the temp file
ft.write(line)
#remove the main file, then rectrate a new one
os.remove(fileName)
file = open(fileName,"a")
file.close()
#copy all the lines back to the main file
with open(tempFileName,"r") as tf:
lines = tf.readlines()
with open(fileName,"a") as f:
for line in lines:
f.write(line)
#finally, delete and recreate the temp file
os.remove(tempFileName)
file = open(tempFileName,"a")
file.close()
#remove the window
master.destroy()
```
My output:
```
1,1
,False
1,2
,False
1,3
,False
``` | By converting to string, you hide the error.
Always try `repr(value)` instead of `str(value)` for debugging purposes. You should also know that it is better to compare integers instead of strings -- e.g. `" 1" != "1"`.
> Edit: From your output, it is clear that you have an extra '\n'
> (newline) in the sctID. Because you compare strings, the comparison
> will always be False.
I guess you have either strings with additional blanks or other hidden characters, or just different types of values, which would also provoke a different result. |
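The fix follows directly from the diagnosis: strip the field before comparing. A minimal reproduction of the bug and the fix (the sample line is made up to match the question's comma-separated file format):

```python
line = "Tom,Scout,Troop1,1\n"       # a line as returned by readlines()
sct_id = line.split(",")[3]

assert sct_id == "1\n"              # the trailing newline survives the split
assert sct_id != "1"                # ...so the equality check fails
assert repr(sct_id) == "'1\\n'"     # repr() exposes the hidden character

assert sct_id.strip() == "1"        # stripping whitespace fixes the comparison
```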
Why loop variable is not updated in python | 30,923,241 | 2 | 2015-06-18T18:40:33Z | 30,923,278 | 9 | 2015-06-18T18:42:24Z | [
"python"
] | This code only prints 1 2 4 5. My question is: why is p not updated with the new array at the 3rd iteration?
```
p = [1, 2, [1, 2, 3], 4, 5]
for each in p:
if type(each) == int:
print each
else:
p = each
```
Actually, to be precise: when I debugged the code, I saw that it is actually updating the value of p, but the `each` variable is not reinitialised. | This is because of the `if type(each) == int:` line. Your third element is a list (`[1, 2, 3]`) and not an int, so it doesn't print anything.
Now, as for changing the `p` variable: `p` is just a name for an object, not the object itself. If you do `p = each` inside the for loop, it doesn't affect the original object you're looping through; it just changes the name `p` to a *local* name, which points to a different object. As soon as that round of the loop ends, your for loop continues to do its business with the original object you were looping through.
So, notice that `p = each` doesn't change the existing object (the `p` you're looping through), it simply creates a new local name `p` which points to the value of `each`.
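A short demonstration of that rebinding, using the same list as the question (keeping a second name for the original object makes the effect visible):

```python
p = [1, 2, [1, 2, 3], 4, 5]
original = p                 # a second name for the same object

for each in p:               # the loop captures the *object* once, up front
    if not isinstance(each, int):
        p = each             # rebinds only the name `p`

assert p == [1, 2, 3]        # `p` now names the inner list...
assert original == [1, 2, [1, 2, 3], 4, 5]   # ...but the iterated object never changed
```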
---
What you most likely want is something like this:
```
p = [1, 2, [1, 2, 3], 4, 5]
for each in p:
if isinstance(each, list):
for x in each:
print x
else:
print each
```
Then again, this isn't recursive; you'd need a function for that:
```
def print_elements(iterable):
for element in iterable:
if isinstance(element, list):
print_elements(element)
else:
print element
```
---
If you want to unpack the values into one list to use them for something other than printing, you should use something like this:
```
def recursive_unpack(iterable):
final = []
for element in iterable:
if isinstance(element, list):
final.extend(recursive_unpack(element))
else:
final.append(element)
return final
```
---
Why I'm using `isinstance()` instead of `type()`: [Differences between isinstance() and type() in python](https://stackoverflow.com/questions/1549801/differences-between-isinstance-and-type-in-python)
Also, if you want this to apply to all iterables (my last example) and not just lists: [In Python, how do I determine if an object is iterable?](http://stackoverflow.com/questions/1952464/in-python-how-do-i-determine-if-an-object-is-iterable) |
pandas dataframe drop columns by number of nan | 30,923,324 | 4 | 2015-06-18T18:45:05Z | 30,925,185 | 7 | 2015-06-18T20:26:02Z | [
"python",
"pandas"
] | I have a dataframe with some columns containing nan. I'd like to drop those columns with certain number of nan. For example, in the following code, I'd like to drop any column with 2 or more nan. In this case, column 'C' will be dropped and only 'A' and 'B' will be kept. How can I implement it?
```
import pandas as pd
import numpy as np
dff = pd.DataFrame(np.random.randn(10,3), columns=list('ABC'))
dff.iloc[3,0] = np.nan
dff.iloc[6,1] = np.nan
dff.iloc[5:8,2] = np.nan
print dff
``` | There is a `thresh` param for [`dropna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html#pandas.DataFrame.dropna), you just need to pass the length of your df - the number of `NaN` values you want as your threshold:
```
In [13]:
dff.dropna(thresh=len(dff) - 2, axis=1)
Out[13]:
A B
0 0.517199 -0.806304
1 -0.643074 0.229602
2 0.656728 0.535155
3 NaN -0.162345
4 -0.309663 -0.783539
5 1.244725 -0.274514
6 -0.254232 NaN
7 -1.242430 0.228660
8 -0.311874 -0.448886
9 -0.984453 -0.755416
```
So the above will drop any column that does not have at least `len(df) - 2` non-NaN values (i.e. any column with more NaN values than your threshold allows).
What is the difference between Python's __add__ and __concat__? | 30,924,533 | 8 | 2015-06-18T19:51:31Z | 30,924,612 | 7 | 2015-06-18T19:55:34Z | [
"python",
"operators"
] | The list of Python's standard operators includes both `__add__(a, b)` and `__concat__(a, b)`. Both of them are usually invoked by `a + b`. My question is, what is the difference between them? Is there a scenario where one would be used rather than the other? Is there any reason you would define both on a single object?
Here's the [documentation](https://docs.python.org/2/library/operator.html) I found the methods mentioned in.
Edit: Adding to the weirdness is this [documentation](https://docs.python.org/2/reference/datamodel.html#emulating-container-types):
> Finally, sequence types should implement addition (meaning concatenation) and multiplication (meaning repetition) by defining the methods `__add__()`, `__radd__()`, `__iadd__()`, `__mul__()`, `__rmul__()` and `__imul__()` described below; they should not define `__coerce__()` or other numerical operators. | If you check the source for the `operator` module ([add](https://hg.python.org/cpython/file/af793c7580f1/Lib/operator.py#l75), [concat](https://hg.python.org/cpython/file/af793c7580f1/Lib/operator.py#l146)), you will find these definitions for those functions:
```
def add(a, b):
"Same as a + b."
return a + b
def concat(a, b):
"Same as a + b, for a and b sequences."
if not hasattr(a, '__getitem__'):
msg = "'%s' object can't be concatenated" % type(a).__name__
raise TypeError(msg)
return a + b
```
So there is actually no difference, except that `concat` requires a sequence type. Both functions use the `+` operator, whose effect depends on the types you add.
In general, using the [`operator` module](https://docs.python.org/3/library/operator.html) is not that useful most of the time. The module is mostly used when you need to pass a function that performs an operation, for example to functional functions like [`map`](https://docs.python.org/3/library/functions.html#map), [`filter`](https://docs.python.org/3/library/functions.html#filter), or [`reduce`](https://docs.python.org/3/library/functools.html#functools.reduce). But usually, you can just use the `+` operator directly.
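For example, passing `operator.add` where a two-argument function is expected (a small sketch; `operator.concat` behaves the same on sequences but rejects non-sequence arguments):

```python
import operator
from functools import reduce   # on Python 2, reduce is also a builtin

values = [1, 2, 3, 4]
assert reduce(operator.add, values) == 10              # 1 + 2 + 3 + 4
assert list(map(operator.add, values, values)) == [2, 4, 6, 8]

# concat insists on sequences; add accepts anything that supports +
assert operator.concat([1], [2]) == [1, 2]
```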
As for the underscore functions (`__add__` and `__concat__`), these are [just aliases](https://hg.python.org/cpython/file/af793c7580f1/Lib/operator.py#l418):
```
__add__ = add
__concat__ = concat
```
But those are of course not related to the [special methods](https://docs.python.org/3/reference/datamodel.html#object.__add__) used to overload operators for custom types. They are functions that match the same name as those special methods, probably to make them appear similar. Note that there is no special `__concat__` method on objects though.
Implementing `__add__` on a custom type will however affect how the operator module functions work, for example:
```
>>> class Example:
def __init__ (self, x):
self.x = x
def __repr__ (self):
return 'Example({})'.format(self.x)
def __add__ (self, other):
return Example(self.x + other.x)
>>> a = Example(2)
>>> b = Example(4)
>>> operator.add(a, b)
Example(6)
>>> a + b
Example(6)
```
As you can see, `operator.add` will use the implementation of the special method `Example.__add__`; but the reason for that is that the implementation of `operator.add` just uses the `+` operator (whose behavior is explicitly defined by the special `__add__` method).
Get a list of N items with K selections for each element? | 30,924,997 | 3 | 2015-06-18T20:15:51Z | 30,925,060 | 7 | 2015-06-18T20:18:59Z | [
"python",
"list",
"list-comprehension"
] | For example if I have a selection set K
```
K = ['a','b','c']
```
and a length N
```
N = 4
```
I want to return all possible:
```
['a','a','a','a']
['a','a','a','b']
['a','a','a','c']
['a','a','b','a']
...
['c','c','c','c']
```
I can do it with recursion but it is not interesting. Is there a more Pythonic way? | That can be done with [`itertools`](https://docs.python.org/2/library/itertools.html).
```
>>> K = ['a','b','c']
>>> import itertools
>>> N = 4
>>> i = itertools.product(K,repeat = N)
>>> l = [a for a in i]
>>> l[:3]
[('a', 'a', 'a', 'a'), ('a', 'a', 'a', 'b'), ('a', 'a', 'a', 'c')]
```
EDIT: I realized you actually want [`product`](https://docs.python.org/2/library/itertools.html#itertools.product), not [`combinations_with_replacement`](https://docs.python.org/2/library/itertools.html#itertools.combinations_with_replacement). Updated code. |
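To see why `product` (and not `combinations_with_replacement`) is the right tool here, compare the two on a small alphabet: `product` keeps ordered selections, so `('b', 'a')` appears as well as `('a', 'b')`:

```python
import itertools

K = ['a', 'b']
prod = list(itertools.product(K, repeat=2))
cwr = list(itertools.combinations_with_replacement(K, 2))

assert prod == [('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]
assert cwr == [('a', 'a'), ('a', 'b'), ('b', 'b')]   # order is ignored
assert len(prod) == len(K) ** 2                      # N**len selections in general
```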
Pandas: Add multiple empty columns to DataFrame | 30,926,670 | 12 | 2015-06-18T22:09:43Z | 30,926,717 | 13 | 2015-06-18T22:13:41Z | [
"python",
"pandas"
] | This may be a stupid question, but how do I add multiple empty columns to a DataFrame from a list?
I can do:
```
df["B"] = None
df["C"] = None
df["D"] = None
```
But I can't do:
```
df[["B", "C", "D"]] = None
KeyError: "['B' 'C' 'D'] not in index"
``` | I'd [`concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html#pandas.concat) using a DataFrame ctor:
```
In [23]:
df = pd.DataFrame(columns=['A'])
df
Out[23]:
Empty DataFrame
Columns: [A]
Index: []
In [24]:
pd.concat([df,pd.DataFrame(columns=list('BCD'))])
Out[24]:
Empty DataFrame
Columns: [A, B, C, D]
Index: []
```
So by passing a list containing your original df, and a new one with the columns you wish to add, this will return a new df with the additional columns. |
Pandas: Add multiple empty columns to DataFrame | 30,926,670 | 12 | 2015-06-18T22:09:43Z | 30,943,503 | 15 | 2015-06-19T17:00:52Z | [
"python",
"pandas"
] | This may be a stupid question, but how do I add multiple empty columns to a DataFrame from a list?
I can do:
```
df["B"] = None
df["C"] = None
df["D"] = None
```
But I can't do:
```
df[["B", "C", "D"]] = None
KeyError: "['B' 'C' 'D'] not in index"
``` | You could use [`df.reindex`](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.reindex.html) to add new columns:
```
In [18]: df = pd.DataFrame(np.random.randint(10, size=(5,1)), columns=['A'])
In [19]: df
Out[19]:
A
0 4
1 7
2 0
3 7
4 6
In [20]: df.reindex(columns=list('ABCD'))
Out[20]:
A B C D
0 4 NaN NaN NaN
1 7 NaN NaN NaN
2 0 NaN NaN NaN
3 7 NaN NaN NaN
4 6 NaN NaN NaN
```
`reindex` will return a new DataFrame, with columns appearing in the order they are listed:
```
In [31]: df.reindex(columns=list('DCBA'))
Out[31]:
D C B A
0 NaN NaN NaN 4
1 NaN NaN NaN 7
2 NaN NaN NaN 0
3 NaN NaN NaN 7
4 NaN NaN NaN 6
```
The `reindex` method has a `fill_value` parameter as well:
```
In [22]: df.reindex(columns=list('ABCD'), fill_value=0)
Out[22]:
A B C D
0 4 0 0 0
1 7 0 0 0
2 0 0 0 0
3 7 0 0 0
4 6 0 0 0
``` |
A python script that activates the virtualenv and then runs another python script? | 30,927,567 | 8 | 2015-06-18T23:32:57Z | 30,927,921 | 7 | 2015-06-19T00:10:15Z | [
"python",
"windows",
"shell",
"command",
"virtualenv"
] | On windows vista, I need a script that starts the `activate` (to activate the virtualenv) script in
```
C:\Users\Admin\Desktop\venv\Scripts\
```
And later, in the virtual environment, starts `manage.py runserver`
in the folder :
```
C:\Users\Admin\Desktop\helloworld\
```
how should I do it? What modules should I use? | You can activate your virtualenv and then start the server using a .bat file.
Copy this script into a file and save it with a .bat extension (e.g. runserver.bat):
```
@echo off
cmd /k "cd /d C:\Users\Admin\Desktop\venv\Scripts & activate & cd /d C:\Users\Admin\Desktop\helloworld & python manage.py runserver"
```
Then you can just run this .bat file (just double-click it) to start the server. |
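An alternative sketch (not part of the original answer): you often don't need to run `activate` at all, because invoking the virtualenv's own `python.exe` directly picks up the venv's packages. The venv and project paths below are the question's; substitute your own:

```python
import subprocess
import sys

# This demo uses the current interpreter so it runs anywhere; in practice
# substitute the venv's interpreter, e.g.
#   r"C:\Users\Admin\Desktop\venv\Scripts\python.exe"
# and run ["manage.py", "runserver"] with
#   cwd=r"C:\Users\Admin\Desktop\helloworld"
out = subprocess.check_output([sys.executable, "-c", "print('served')"])
print(out.decode().strip())  # served
```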
Cache busting in Django 1.8? | 30,936,151 | 3 | 2015-06-19T10:42:18Z | 30,940,958 | 7 | 2015-06-19T14:41:10Z | [
"python",
"django"
] | I'm using Django 1.8 and I want to add a parameter to my static files to cache bust.
This is what I'm doing right now, setting a manual parameter:
```
<link href="{% static 'css/openprescribing.css' %}?q=0.1.1" rel="stylesheet">
```
But I feel there must be a better way to update the parameter.
I guess it would be a bit neater to have a setting passed through the template (and that would save having to update it in multiple places).
But what would be really nice is if Django could update it automatically for me.
The notes on [django-cachebuster](https://github.com/jaddison/django-cachebuster) suggest that it's now possible to do this automatically in [staticfiles](https://docs.djangoproject.com/en/1.8/ref/contrib/staticfiles/), but I can't find anything in the staticfiles docs about it.
Anyone know a way to do this? | Yes this can be done automatically with `contrib.staticfiles`. There are two additional provided storage classes which will rename files using a hash. These
are documented here: [ManifestStaticFilesStorage](https://docs.djangoproject.com/en/1.8/ref/contrib/staticfiles/#manifeststaticfilesstorage) and [CachedStaticFilesStorage](https://docs.djangoproject.com/en/1.8/ref/contrib/staticfiles/#cachedstaticfilesstorage)
From the docs:
> A subclass of the StaticFilesStorage storage backend which stores the file names it handles by appending the MD5 hash of the file's content to the filename. For example, the file css/styles.css would also be saved as css/styles.55e7cbb9ba48.css.
>
> The purpose of this storage is to keep serving the old files in case some pages still refer to those files, e.g. because they are cached by you or a 3rd party proxy server. Additionally, it's very helpful if you want to apply far future Expires headers to the deployed files to speed up the load time for subsequent page visits.
The main difference is
> CachedStaticFilesStorage is a similar class like the ManifestStaticFilesStorage class but uses Django's caching framework for storing the hashed names of processed files instead of a static manifest file called staticfiles.json. This is mostly useful for situations in which you don't have access to the file system.
To enable them, set your `STATICFILES_STORAGE` setting to `'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'` or `'django.contrib.staticfiles.storage.CachedStaticFilesStorage'`. The file names are only changed when `DEBUG=False`, as it would be in production. |
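For illustration, the whole change can be a one-line setting (a sketch; the storage class path is from the docs quoted above):

```python
# settings.py -- hashed filenames are only produced when DEBUG is False
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
```

After running `collectstatic`, `{% static 'css/openprescribing.css' %}` resolves to the hashed name (e.g. `css/openprescribing.55e7cbb9ba48.css`), so the manual `?q=0.1.1` parameter can be dropped.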
How to properly write cross-references to external documentation with intersphinx? | 30,939,867 | 8 | 2015-06-19T13:50:26Z | 30,981,554 | 12 | 2015-06-22T13:43:34Z | [
"python",
"opencv",
"documentation",
"python-sphinx",
"autodoc"
] | I'm trying to add cross-references to external API into my documentation but I'm facing three different behaviors.
I am using sphinx(1.3.1) with Python(2.3.7) and my intersphinx mapping is configured as:
```
{
'python': ('https://docs.python.org/2.7', None),
'numpy': ('http://docs.scipy.org/doc/numpy/', None),
'cv2' : ('http://docs.opencv.org/2.4/', None),
'h5py' : ('http://docs.h5py.org/en/latest/', None)
}
```
I have no trouble writing a cross-reference to numpy API with `` :class:`numpy.ndarray` `` or `` :func:`numpy.array` `` which gives me, as expected, something like [numpy.ndarray](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html#numpy.ndarray).
However, with h5py, the only way I can have a link generated is if I omit the module name. For example, `` :class:`Group` `` (or `` :class:`h5py:Group` ``) gives me [Group](http://docs.h5py.org/en/latest/high/group.html#Group) but `` :class:`h5py.Group` `` fails to generate a link.
Finally, I cannot find a way to write a working cross-reference to OpenCV API, none of these seems to work:
```
:func:`cv2.convertScaleAbs`
:func:`cv2:cv2.convertScaleAbs`
:func:`cv2:convertScaleAbs`
:func:`convertScaleAbs`
```
How to properly write cross-references to external API, or configure intersphinx, to have a generated link as in the numpy case? | I gave understanding the content of an `objects.inv` file another try, and this time I inspected numpy's and h5py's inventories instead of only OpenCV's.
## How to read an intersphinx inventory file
Despite the fact that I couldn't find anything useful about reading the content of an `object.inv` file, it is actually very simple with the intersphinx module.
```
from sphinx.ext.intersphinx import fetch_inventory
import warnings
uri = 'http://docs.python.org/2.7/'
# Read the inventory into a dictionary
inv = fetch_inventory(warnings, uri, uri + 'objects.inv')
```
Here I use the `warnings` module as the first parameter instead of an `app` instance because if you fetch an online uri, only the `app.warn` function is used in case of failure. A real `app` (or a duck-typed one) will be needed if you want to read a local `objects.inv` file.
## File structure (numpy)
After inspecting numpy's one, you can see that keys are domains:
```
[u'np-c:function',
u'std:label',
u'c:member',
u'np:classmethod',
u'np:data',
u'py:class',
u'np-c:member',
u'c:var',
u'np:class',
u'np:function',
u'py:module',
u'np-c:macro',
u'np:exception',
u'py:method',
u'np:method',
u'np-c:var',
u'py:exception',
u'np:staticmethod',
u'py:staticmethod',
u'c:type',
u'np-c:type',
u'c:macro',
u'c:function',
u'np:module',
u'py:data',
u'np:attribute',
u'std:term',
u'py:function',
u'py:classmethod',
u'py:attribute']
```
You can see how you can write your cross-reference when you look at the content of a specific domain. For example, `py:class`:
```
{u'numpy.DataSource': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.DataSource.html#numpy.DataSource',
u'-'),
u'numpy.MachAr': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.MachAr.html#numpy.MachAr',
u'-'),
u'numpy.broadcast': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast.html#numpy.broadcast',
u'-'),
...}
```
So here, `` :class:`numpy.DataSource` `` will work as expected.
## h5py
In the case of h5py, the domains are:
```
[u'py:attribute', u'std:label', u'py:method', u'py:function', u'py:class']
```
and if you look at the `py:class` domain:
```
{u'AttributeManager': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/attr.html#AttributeManager',
u'-'),
u'Dataset': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/dataset.html#Dataset',
u'-'),
u'ExternalLink': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/group.html#ExternalLink',
u'-'),
...}
```
That's why I couldn't make it work like the numpy references. So a good way to format them would be `` :class:`h5py:Dataset` ``.
## OpenCV
OpenCV's inventory object seems malformed. Where I would expect to find domains there are actually 902 function signatures:
```
[u':',
u'AdjusterAdapter::create(const',
u'AdjusterAdapter::good()',
u'AdjusterAdapter::tooFew(int',
u'AdjusterAdapter::tooMany(int',
u'Algorithm::create(const',
u'Algorithm::getList(vector<string>&',
u'Algorithm::name()',
u'Algorithm::read(const',
u'Algorithm::set(const'
...]
```
and if we take the first one's value:
```
{u'Ptr<AdjusterAdapter>': (u'OpenCV',
u'2.4',
u'http://docs.opencv.org/2.4/detectorType)',
u'ocv:function 1 modules/features2d/doc/common_interfaces_of_feature_detectors.html#$ -')}
```
I'm pretty sure it is then impossible to write OpenCV cross-references with this file...
## Conclusion
I thought intersphinx generated the `objects.inv` based on the content of the documentation project in a *standard* way, which seems not to be the case.
As a result, it seems that the proper way to write cross-references is API dependent and one should inspect a specific inventory object to actually see what's available. |
psycopg2: AttributeError: 'module' object has no attribute 'extras' | 30,940,167 | 17 | 2015-06-19T14:04:22Z | 30,940,250 | 26 | 2015-06-19T14:08:17Z | [
"python",
"psycopg2",
"importerror"
] | In my code I use the [`DictCursor`](http://initd.org/psycopg/docs/extras.html#dictionary-like-cursor) from `psycopg2.extras` like this
```
dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
```
However, all of the sudden I get the following error when I load the cursor:
```
AttributeError: 'module' object has no attribute 'extras'
```
Maybe something is dorked in my installation but I have no clue where to start looking. I made some updates with pip, but as far as I know no dependencies of `psycopg2`. | You need to explicitly import `psycopg2.extras`:
```
import psycopg2.extras
``` |
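This isn't specific to psycopg2: importing a package does not automatically import its submodules unless the package's `__init__` does so. A quick illustration with the stdlib `xml` package, which behaves the same way:

```python
import xml

# "import xml" alone does not load the xml.dom submodule
print(hasattr(xml, 'dom'))  # False in a fresh interpreter

import xml.dom  # the explicit import binds the submodule attribute
print(hasattr(xml, 'dom'))  # True
```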
Python capture all printed output | 30,942,622 | 10 | 2015-06-19T16:08:15Z | 30,942,680 | 9 | 2015-06-19T16:11:20Z | [
"python",
"piping"
] | I am looking to write console based programs in python that can execute functions to perform generic tasks, pretty generic. Is it possible to capture everything written to the console by print calls in a function without needing to return a string, similar to how bash and the windows shell allow piping the output of a program to a text file, ie
ipconfig>ipconfig.txt
but doing this inside of a Python program, where a function is called, everything that was printed to the console inside of that function is gathered as a list of strings, and then can be saved to a txt file of the user's choice? | You can do this by setting `sys.stdout` to be a file of your choice
```
import sys
sys.stdout = open('out.dat', 'w')
print "Hello"
sys.stdout.close()
```
Will not display any output but will create a file called `out.dat` with the printed text.
Note that this doesn't need to be an actual file but could be a [StringIO](https://docs.python.org/2/library/stringio.html) instance, which you can just use the `getvalue` method of to access everything that has been printed previously. |
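A sketch of the `StringIO` variant in Python 3 syntax (the question is Python 2, where the class lives in the `StringIO`/`cStringIO` modules instead of `io`); restoring `sys.stdout` in a `finally` block keeps the console usable even if the captured code raises:

```python
import io
import sys

buf = io.StringIO()
old_stdout = sys.stdout
sys.stdout = buf
try:
    print("Hello")            # goes into buf instead of the console
finally:
    sys.stdout = old_stdout   # always restore stdout

captured = buf.getvalue()
print(repr(captured))  # 'Hello\n'
```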
multiprocessing.Pool with maxtasksperchild produces equal PIDs | 30,943,161 | 6 | 2015-06-19T16:40:40Z | 30,943,203 | 7 | 2015-06-19T16:43:41Z | [
"python",
"python-3.x",
"multiprocessing",
"pid"
] | I need to run a function in a process, which is completely isolated from all other memory, several times. I would like to use `multiprocessing` for that (since I need to serialize a complex output coming from the functions). I set the `start_method` to `'spawn'` and use a pool with `maxtasksperchild=1`. I would expect to get a different process for each task, and therefore see a different PID:
```
import multiprocessing
import time
import os
def f(x):
print("PID: %d" % os.getpid())
time.sleep(x)
complex_obj = 5 #more complex axtually
return complex_obj
if __name__ == '__main__':
multiprocessing.set_start_method('spawn')
pool = multiprocessing.Pool(4, maxtasksperchild=1)
pool.map(f, [5]*30)
pool.close()
```
However the output I get is:
```
$ python untitled1.py
PID: 30010
PID: 30009
PID: 30012
PID: 30011
PID: 30010
PID: 30009
PID: 30012
PID: 30011
PID: 30018
PID: 30017
PID: 30019
PID: 30020
PID: 30018
PID: 30019
PID: 30017
PID: 30020
...
```
So the processes are not being respawned after every task. Is there an automatic way of getting a new PID each time (i.e. without starting a new pool for each set of processes)? | You need to also specify `chunksize=1` in the call to `pool.map`. Otherwise, multiple items in your iterable get bundled together into one "task" from the perspective of the worker processes:
```
import multiprocessing
import time
import os
def f(x):
print("PID: %d" % os.getpid())
time.sleep(x)
complex_obj = 5 #more complex axtually
return complex_obj
if __name__ == '__main__':
multiprocessing.set_start_method('spawn')
pool = multiprocessing.Pool(4, maxtasksperchild=1)
pool.map(f, [5]*30, chunksize=1)
pool.close()
```
Output doesn't have repeated PIDs now:
```
PID: 4912
PID: 4913
PID: 4914
PID: 4915
PID: 4938
PID: 4937
PID: 4940
PID: 4939
PID: 4966
PID: 4965
PID: 4970
PID: 4971
PID: 4991
PID: 4990
PID: 4992
PID: 4993
PID: 5013
PID: 5014
PID: 5012
``` |
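This also explains why each PID appears exactly twice in the question's output: with 30 items and 4 workers, the default chunk size works out to 2, so each single "task" (bounded by `maxtasksperchild=1`) covers two items. A sketch of the heuristic used in CPython's `multiprocessing.Pool.map` (the exact formula may vary between versions):

```python
def default_chunksize(n_items, n_workers):
    # mirrors CPython's Pool default:
    #   chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
    #   if extra: chunksize += 1
    chunksize, extra = divmod(n_items, n_workers * 4)
    if extra:
        chunksize += 1
    return chunksize

print(default_chunksize(30, 4))  # 2 -- each worker handles 2 items per task
```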
Why is single int 24 bytes, but in list it tends to 8 bytes | 30,946,955 | 5 | 2015-06-19T20:42:11Z | 30,947,014 | 8 | 2015-06-19T20:46:30Z | [
"python"
] | Here is what I am looking at:
```
In [1]: import sys
In [2]: sys.getsizeof(45)
Out[2]: 24
In [3]: sys.getsizeof([])
Out[3]: 72
In [4]: sys.getsizeof(range(1000))
Out[4]: 8072
```
I know that `int` in Python is growable (can get bigger that 24 bytes) objects that live on the heap, and I see why that object can be quite large, but isn't a list just a collections of such objects? Apparently it is not, what is going on here? | This is the size of the object - [excluding the objects it contains](https://docs.python.org/3/library/sys.html#sys.getsizeof):
```
>>> d = range(10)
>>> sys.getsizeof(d)
152
>>> d[0] = 'text'
>>> sys.getsizeof(d)
152
>>> d
['text', 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
The size of the list with 1000 elements, in your case, is `8072` bytes but each integer object is still `24` bytes. The list object keeps track of these integer objects but they're not included in the size of the list object. |
Why is single int 24 bytes, but in list it tends to 8 bytes | 30,946,955 | 5 | 2015-06-19T20:42:11Z | 30,947,030 | 7 | 2015-06-19T20:47:29Z | [
"python"
] | Here is what I am looking at:
```
In [1]: import sys
In [2]: sys.getsizeof(45)
Out[2]: 24
In [3]: sys.getsizeof([])
Out[3]: 72
In [4]: sys.getsizeof(range(1000))
Out[4]: 8072
```
I know that `int` in Python is growable (can get bigger that 24 bytes) objects that live on the heap, and I see why that object can be quite large, but isn't a list just a collections of such objects? Apparently it is not, what is going on here? | The list doesn't contain any integers; it contains pointers to various objects, which happen to be integers and which are stored elsewhere. `getsizeof()` tells you the size only of the object you pass to it, not of any additional objects it points to. |
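A quick sketch to confirm this (exact numbers depend on the build; on a 64-bit CPython each list slot is an 8-byte pointer):

```python
import sys

empty = sys.getsizeof([])
filled = sys.getsizeof([None] * 1000)  # exactly 1000 slots allocated
per_slot = (filled - empty) / 1000.0
print(per_slot)  # 8.0 on a 64-bit build: pointers, not 24-byte int objects
```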
Why does '12345'.count('') return 6 and not 5? | 30,948,282 | 34 | 2015-06-19T22:38:48Z | 30,948,301 | 28 | 2015-06-19T22:41:04Z | [
"python",
"python-3.x",
"count"
] | ```
>>> '12345'.count('')
6
```
Why does this happen? If there are only 5 characters in that string, why is the count function returning one more?
Also, is there a more effective way of counting characters in a string? | That is because there are six different substrings that are the empty string: Before the 1, between the numbers, and after the 5.
If you want to count characters use `len` instead:
```
>>> len("12345")
5
``` |
Why does '12345'.count('') return 6 and not 5? | 30,948,282 | 34 | 2015-06-19T22:38:48Z | 30,948,311 | 25 | 2015-06-19T22:41:35Z | [
"python",
"python-3.x",
"count"
] | ```
>>> '12345'.count('')
6
```
Why does this happen? If there are only 5 characters in that string, why is the count function returning one more?
Also, is there a more effective way of counting characters in a string? | How many pieces do you get if you cut a string five times?
```
---|---|---|---|---|--- -> 6 pieces
```
The same thing is happening here. It counts the empty string after the `5` also.
`len('12345')` is what you should use. |
Why does '12345'.count('') return 6 and not 5? | 30,948,282 | 34 | 2015-06-19T22:38:48Z | 30,948,336 | 107 | 2015-06-19T22:43:39Z | [
"python",
"python-3.x",
"count"
] | ```
>>> '12345'.count('')
6
```
Why does this happen? If there are only 5 characters in that string, why is the count function returning one more?
Also, is there a more effective way of counting characters in a string? | `count` returns how many times a substring occurs in a string, so if you count occurrences of `''` you get 6 because the empty string is at the beginning, at the end, and in between each pair of letters.
Use the `len` function to find the length of a string. |
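To make the boundary counting concrete (each position before, between, and after the characters matches the empty string once, giving `len(s) + 1` matches):

```python
s = '12345'
print(s.count(''))  # 6 == len(s) + 1 boundary positions
print(len(s))       # 5 actual characters
```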
Django populate() isn't reentrant | 30,954,398 | 6 | 2015-06-20T13:11:37Z | 30,968,197 | 10 | 2015-06-21T18:58:31Z | [
"python",
"django",
"apache"
] | I keep getting this when I try to load my Django application on production . I tried all the stackoverflow answers but nothing has fixed it. Any other ideas. (I'm using Django 1.5.2 and Apache)
```
Traceback (most recent call last):
File "/var/www/thehomeboard/wwwhome/wsgi.py", line 37, in <module>
application = get_wsgi_application()
File "/usr/local/lib/python2.7/dist-packages/django/core/wsgi.py", line 14, in get_wsgi_application
django.setup()
File "/usr/local/lib/python2.7/dist-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 78, in populate
raise RuntimeError("populate() isn't reentrant")
RuntimeError: populate() isn't reentrant
``` | This RuntimeError first occurred for me after upgrading to Django 1.7 (and it is still present with Django 1.8). It is usually caused by a Django application which raises an error, but that error is swallowed somehow.
Here's a workaround which works for me. Add it to your wsgi.py and the *real* error should be logged:
```
import os
import time
import traceback
import signal
import sys
from django.core.wsgi import get_wsgi_application
try:
application = get_wsgi_application()
print 'WSGI without exception'
except Exception:
print 'handling WSGI exception'
# Error loading applications
if 'mod_wsgi' in sys.modules:
traceback.print_exc()
os.kill(os.getpid(), signal.SIGINT)
time.sleep(2.5)
```
See [this thread](https://groups.google.com/forum/#!topic/modwsgi/X3Upsk9Wfsw) on modwsgi for more details. |
Why is globals() a function in Python? | 30,958,904 | 13 | 2015-06-20T21:08:25Z | 30,958,981 | 17 | 2015-06-20T21:16:38Z | [
"python",
"global"
] | Python offers the function `globals()` to access a dictionary of all global variables. Why is that a function and not a variable? The following works:
```
g = globals()
g["foo"] = "bar"
print foo # Works and outputs "bar"
```
What is the rationale behind hiding globals in a function? And is it better to call it only once and store a reference somewhere or should I call it each time I need it?
IMHO, this is not a duplicate of [Reason for globals() in Python?](https://stackoverflow.com/questions/12693606), because I'm not asking why `globals()` exist but rather why it must be a function (instead of a variable `__globals__`). | Because it may depend on the *Python implementation* how much work it is to build that dictionary.
In CPython, globals are kept in just another mapping, and calling the `globals()` function returns a reference to that mapping. But other Python implementations are free to create a separate dictionary for the object, as needed, on demand.
This mirrors the `locals()` function, which in CPython has to create a dictionary on demand because locals are normally stored in an array (local names are translated to array access in CPython bytecode).
So you'd call `globals()` when you need access to the mapping of global names. Storing a reference to that mapping works in CPython, but don't count on this in other implementations. |
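A sketch contrasting the two in CPython (whether writes through these mappings take effect is exactly the implementation-specific part):

```python
x = 1
globals()['x'] = 2      # the returned mapping is the live module namespace
print(x)                # 2

def f():
    y = 1
    locals()['y'] = 99  # locals() here is a snapshot; the write is lost
    return y

print(f())              # 1
```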
Scrapy throws ImportError: cannot import name xmlrpc_client | 30,964,836 | 28 | 2015-06-21T13:06:56Z | 31,095,959 | 68 | 2015-06-28T03:47:37Z | [
"python",
"python-2.7",
"scrapy"
] | After install Scrapy via pip, and having `Python 2.7.10`:
```
scrapy
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 7, in <module>
from scrapy.cmdline import execute
File "/Library/Python/2.7/site-packages/scrapy/__init__.py", line 48,
in <module>
from scrapy.spiders import Spider
File "/Library/Python/2.7/site-packages/scrapy/spiders/__init__.py",
line 10, in <module>
from scrapy.http import Request
File "/Library/Python/2.7/site-packages/scrapy/http/__init__.py", line
12, in <module>
from scrapy.http.request.rpc import XmlRpcRequest
File "/Library/Python/2.7/site-packages/scrapy/http/request/rpc.py",
line 7, in <module>
from six.moves import xmlrpc_client as xmlrpclib
ImportError: cannot import name xmlrpc_client
```
But I can import module:
```
Python 2.7.10 (default, Jun 10 2015, 19:42:47)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scrapy
>>>
```
What's going on? | I've just fixed this issue on my OS X.
**Please backup your files first.**
```
sudo rm -rf /Library/Python/2.7/site-packages/six*
sudo rm -rf /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six*
sudo pip install six
```
Scrapy 1.0.0 is ready to go. |
Scrapy throws ImportError: cannot import name xmlrpc_client | 30,964,836 | 28 | 2015-06-21T13:06:56Z | 31,578,089 | 27 | 2015-07-23T04:19:43Z | [
"python",
"python-2.7",
"scrapy"
] | After install Scrapy via pip, and having `Python 2.7.10`:
```
scrapy
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 7, in <module>
from scrapy.cmdline import execute
File "/Library/Python/2.7/site-packages/scrapy/__init__.py", line 48,
in <module>
from scrapy.spiders import Spider
File "/Library/Python/2.7/site-packages/scrapy/spiders/__init__.py",
line 10, in <module>
from scrapy.http import Request
File "/Library/Python/2.7/site-packages/scrapy/http/__init__.py", line
12, in <module>
from scrapy.http.request.rpc import XmlRpcRequest
File "/Library/Python/2.7/site-packages/scrapy/http/request/rpc.py",
line 7, in <module>
from six.moves import xmlrpc_client as xmlrpclib
ImportError: cannot import name xmlrpc_client
```
But I can import module:
```
Python 2.7.10 (default, Jun 10 2015, 19:42:47)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scrapy
>>>
```
What's going on? | This is a known issue on Mac OSX for Scrapy. You can refer to [this link](https://github.com/scrapy/scrapy/commit/e645d6a01e53af35e27b47ed80cfd6f9282f50b2).
Basically the issue is with the PYTHONPATH in your system. To solve the issue, change the current PYTHONPATH to point to the newer, non-Mac-OSX version of Python. Before running Scrapy, try:
`export PYTHONPATH=/Library/Python/2.7/site-packages:$PYTHONPATH`
If that worked you can change the .bashrc file permanently:
`echo "export PYTHONPATH=/Library/Python/2.7/site-packages:$PYTHONPATH" >> ~/.bashrc`
If none of this works, take a look at the link above. |
Scrapy throws ImportError: cannot import name xmlrpc_client | 30,964,836 | 28 | 2015-06-21T13:06:56Z | 31,635,775 | 18 | 2015-07-26T10:12:26Z | [
"python",
"python-2.7",
"scrapy"
] | After install Scrapy via pip, and having `Python 2.7.10`:
```
scrapy
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 7, in <module>
from scrapy.cmdline import execute
File "/Library/Python/2.7/site-packages/scrapy/__init__.py", line 48,
in <module>
from scrapy.spiders import Spider
File "/Library/Python/2.7/site-packages/scrapy/spiders/__init__.py",
line 10, in <module>
from scrapy.http import Request
File "/Library/Python/2.7/site-packages/scrapy/http/__init__.py", line
12, in <module>
from scrapy.http.request.rpc import XmlRpcRequest
File "/Library/Python/2.7/site-packages/scrapy/http/request/rpc.py",
line 7, in <module>
from six.moves import xmlrpc_client as xmlrpclib
ImportError: cannot import name xmlrpc_client
```
But I can import module:
```
Python 2.7.10 (default, Jun 10 2015, 19:42:47)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scrapy
>>>
```
What's going on? | I had the same exact problem when upgrading to Scrapy 1.0. After numerous work arounds the solution that worked for me was uninstalling six with pip:
> sudo pip uninstall six
then re-installing six via easy\_install
> easy\_install six
Hope that works! |
Trivial functors | 30,964,994 | 7 | 2015-06-21T13:23:14Z | 30,965,000 | 11 | 2015-06-21T13:23:57Z | [
"python",
"python-3.x",
"functor"
] | I very often write code like:
```
sorted(some_dict.items(), key=lambda x: x[1])
sorted(list_of_dicts, key=lambda x: x['age'])
map(lambda x: x.name, rows)
```
where I would like to write:
```
sorted(some_dict.items(), key=idx_f(1))
sorted(list_of_dicts, key=idx_f('name'))
map(attr_f('name'), rows)
```
using:
```
def attr_f(field):
return lambda x: getattr(x, field)
def idx_f(field):
return lambda x: x[field]
```
Are there functor-creators like idx\_f and attr\_f in Python, and are they clearer when used than lambdas? | The `operator` module has [`operator.attrgetter()`](https://docs.python.org/3/library/operator.html#operator.attrgetter) and [`operator.itemgetter()`](https://docs.python.org/3/library/operator.html#operator.itemgetter) that do just that:
```
from operator import attrgetter, itemgetter
sorted(some_dict.items(), key=itemgetter(1))
sorted(list_of_dicts, key=itemgetter('name'))
map(attrgetter('name'), rows)
```
These functions also take *more than one* argument, at which point they'll return a tuple containing the value for each argument:
```
# sorting on value first, then on key
sorted(some_dict.items(), key=itemgetter(1, 0))
# sort dictionaries by last name, then first name
sorted(list_of_dicts, key=itemgetter('last_name', 'first_name'))
```
The `attrgetter()` function also accepts *dotted names*, where you can reach attributes of attributes:
```
# extract contact names
map(attrgetter('contact.name'), companies)
``` |
Fraction object doesn't have __int__ but int(Fraction(...)) still works | 30,966,227 | 4 | 2015-06-21T15:32:08Z | 30,966,278 | 8 | 2015-06-21T15:38:23Z | [
"python",
"int",
"fractions",
"python-internals"
] | In Python, when you have an object you can convert it to an integer using the `int` function.
For example `int(1.3)` will return `1`. This works internally by using the `__int__` magic method of the object, in this particular case `float.__int__`.
In Python `Fraction` objects can be used to construct exact fractions.
```
from fractions import Fraction
x = Fraction(4, 3)
```
`Fraction` objects lack an `__int__` method, but you can still call `int()` on them and get a sensible integer back. I was wondering how this was possible with no `__int__` method being defined.
```
In [38]: x = Fraction(4, 3)
In [39]: int(x)
Out[39]: 1
``` | The `__trunc__` method is used.
```
>>> class X(object):
def __trunc__(self):
return 2.
>>> int(X())
2
```
`__float__` does not work
```
>>> class X(object):
def __float__(self):
return 2.
>>> int(X())
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
int(X())
TypeError: int() argument must be a string, a bytes-like object or a number, not 'X'
```
[The CPython source](https://hg.python.org/cpython/file/79397a76d2b1/Objects/abstract.c#l1283) shows when `__trunc__` is used. |
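And indeed the question's `Fraction` goes through this hook; a quick check:

```python
from fractions import Fraction

x = Fraction(4, 3)
print(x.__trunc__())         # 1 -- the method int() falls back to
print(int(x))                # 1
print(int(Fraction(-4, 3)))  # -1: truncation rounds toward zero
```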
Eclipse, PyDev "Project interpreter not specified" | 30,970,697 | 6 | 2015-06-22T00:30:59Z | 34,992,568 | 11 | 2016-01-25T12:24:13Z | [
"python",
"eclipse",
"pydev",
"interpreter"
] | I have installed PyDev in eclipse Luna. After successful installation of PyDev, when I want to create a new project I get the error:
Project interpreter not specified
How can I fix it? There is no option for interpreter to choose from.
eclipse version Luna,
Mac OSX Yosemite,
PyDev latest version (installed according to <http://pydev.org/manual_101_install.html>)
 | In my case it has worked with following steps
Prerequisite: Python should be installed
1. Go to Window -> Preferences -> PyDev -> Interpreters and click on "Python Interpreter".
2. Then click on new button and add python executable location.
> Example for windows:
>
> ```
> c:\python2.7\python.exe
> ```
>
> example for ubuntu:
>
> ```
> /usr/bin/python
> ```
3. Then, as shown in the attached image, click on finish.
[It should work.](http://i.stack.imgur.com/1VLGK.png) |
How does the class_weight parameter in scikit-learn work? | 30,972,029 | 13 | 2015-06-22T04:11:59Z | 30,982,811 | 10 | 2015-06-22T14:39:31Z | [
"python",
"scikit-learn"
] | I am having a lot of trouble understanding how the `class_weight` parameter in scikit-learn's Logistic Regression operates.
**The Situation**
I want to use logistic regression to do binary classification on a very unbalanced data set. The classes are labelled 0 (negative) and 1 (positive) and the observed data is in a ratio of about 19:1 with the majority of samples having negative outcome.
**First Attempt: Manually Preparing Training Data**
I split the data I had into disjoint sets for training and testing (about 80/20). Then I randomly sampled the training data by hand to get training data in different proportions than 19:1; from 2:1 -> 16:1.
I then trained logistic regression on these different training data subsets and plotted recall (= TP/(TP+FN)) as a function of the different training proportions. Of course, the recall was computed on the disjoint TEST samples which had the observed proportions of 19:1. Note, although I trained the different models on different training data, I computed recall for all of them on the same (disjoint) test data.
The results were as expected: the recall was about 60% at 2:1 training proportions and fell off rather fast by the time it got to 16:1. There were several proportions 2:1 -> 6:1 where the recall was decently above 5%.
**Second Attempt: Grid Search**
Next, I wanted to test different regularization parameters and so I used GridSearchCV and made a grid of several values of the `C` parameter as well as the `class_weight` parameter. To translate my n:m proportions of negative:positive training samples into the dictionary language of `class_weight` I thought that I just specify several dictionaries as follows:
```
{ 0:0.67, 1:0.33 } #expected 2:1
{ 0:0.75, 1:0.25 } #expected 3:1
{ 0:0.8, 1:0.2 } #expected 4:1
```
and I also included `None` and `auto`.
This time the results were totally wacked. All my recalls came out tiny (< 0.05) for every value of `class_weight` except `auto`. So I can only assume that my understanding of how to set the `class_weight` dictionary is wrong. Interestingly, the `class_weight` value of 'auto' in the grid search was around 59% for all values of `C`, and I guessed it balances to 1:1?
**My Questions**
1) How do you properly use `class_weight` to achieve different balances in training data from what you actually give it? Specifically, what dictionary do I pass to `class_weight` to use n:m proportions of negative:positive training samples?
2) If you pass various `class_weight` dictionaries to GridSearchCV, during cross-validation will it rebalance the training fold data according to the dictionary but use the true given sample proportions for computing my scoring function on the test fold? This is critical since any metric is only useful to me if it comes from data in the observed proportions.
3) What does the `auto` value of `class_weight` do as far as proportions? I read the documentation and I assume "balances the data inversely proportional to their frequency" just means it makes it 1:1. Is this correct? If not, can someone clarify?
Thank you very much, any clarification would be greatly appreciated! | First off, it might not be good to just go by recall alone. You can simply achieve a recall of 100% by classifying everything as the positive class.
I usually suggest using AUC for selecting parameters, and then finding a threshold for the operating point (say a given precision level) that you are interested in.
For how class-weight works: It penalizes mistakes in samples of class i with class\_weight[i] instead of 1. So higher class-weight means you want to put more emphasis on a class. From what you say it seems class 0 is 19 times more frequent than class 1. So you should increase the class-weight of class 1 relative to class 0, say {0:.1, 1:.9}.
If the class\_weight doesn't sum to 1, it will basically change the regularization parameter.
For how class\_weight="auto" works, you can have a look at [this discussion](https://github.com/scikit-learn/scikit-learn/issues/4324).
In the dev version you can use class\_weight="balanced", which is easier to understand: it basically means replicating the smaller class until you have as many samples as in the larger one, but in an implicit way. |
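As a quick illustration of the weighting scheme described above, here is a hedged sketch (the helper name `ratio_to_class_weight` is my own, not part of scikit-learn) that turns an n:m emphasis into a `class_weight` dictionary in the style of the question:

```python
def ratio_to_class_weight(w_neg, w_pos):
    # Normalize an n:m negative:positive emphasis into a dict of
    # per-class penalty multipliers summing to 1, matching the
    # dictionary style used in the question ({0: 0.75, 1: 0.25}, ...).
    total = float(w_neg + w_pos)
    return {0: w_neg / total, 1: w_pos / total}

# Emphasize the rare positive class (class 1), as suggested above:
weights = ratio_to_class_weight(1, 9)   # {0: 0.1, 1: 0.9}
```

The resulting dict can then be passed as `LogisticRegression(class_weight=weights)`; keep in mind that if the values do not sum to 1, the overall scale also acts like a change of the regularization parameter.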
Why is using a generator function twice as fast in this case? | 30,973,746 | 10 | 2015-06-22T06:58:59Z | 30,974,243 | 8 | 2015-06-22T07:30:19Z | [
"python",
"performance",
"generator"
] | # The code which is common for both the implementations:
```
from math import sqrt
def factors(x):
num = 2
sq = int(sqrt(x))
for i in range(2, sq):
if (x % i) == 0:
num += 2
return num + ((1 if sq == sqrt(x) else 2) if x % sq == 0 else 0)
```
# 1. Implementation which doesn't make use of a generator function:
```
i = 1
while True:
if factors(i * (i+1) * 0.5) > 500:
print(int(i * (i+1) * 0.5))
break
i += 1
```
# 2. Implementation which makes use of a generator function:
```
def triangle():
i = 1
while True:
yield int(0.5 * i * (i + 1))
i += 1
t = triangle()
while True:
num = t.__next__()
if factors(num) > 500:
print(num)
break
```
# The Question:
The first implementation takes about 4 seconds while the second one takes approximately 8.2 seconds. Why is there such a big difference between the run times of the two implementations? | In the explicit case you're not taking the `int` of the expression before calling `factors` and therefore the value passed will be a floating-point number.
In the generator case you're instead yielding `int(...)`, calling `factors` passing an integer number. |
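A quick way to confirm this (a sketch; the exact timings will vary by machine) is to time the question's `factors` on the same value passed as an `int` versus a `float`:

```python
import timeit
from math import sqrt

def factors(x):
    # identical to the factors() from the question
    num = 2
    sq = int(sqrt(x))
    for i in range(2, sq):
        if (x % i) == 0:
            num += 2
    return num + ((1 if sq == sqrt(x) else 2) if x % sq == 0 else 0)

n = 76576500  # a large triangular number with many divisors

t_int = timeit.timeit(lambda: factors(n), number=5)
t_float = timeit.timeit(lambda: factors(float(n)), number=5)
# The float call is typically the slower one, because every
# `x % i` in the loop is then a floating-point modulo.
```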
Why is using a generator function twice as fast in this case? | 30,973,746 | 10 | 2015-06-22T06:58:59Z | 30,974,391 | 10 | 2015-06-22T07:38:58Z | [
"python",
"performance",
"generator"
] | # The code which is common for both the implementations:
```
from math import sqrt
def factors(x):
num = 2
sq = int(sqrt(x))
for i in range(2, sq):
if (x % i) == 0:
num += 2
return num + ((1 if sq == sqrt(x) else 2) if x % sq == 0 else 0)
```
# 1. Implementation which doesn't make use of a generator function:
```
i = 1
while True:
if factors(i * (i+1) * 0.5) > 500:
print(int(i * (i+1) * 0.5))
break
i += 1
```
# 2. Implementation which makes use of a generator function:
```
def triangle():
i = 1
while True:
yield int(0.5 * i * (i + 1))
i += 1
t = triangle()
while True:
num = t.__next__()
if factors(num) > 500:
print(num)
break
```
# The Question:
The first implementation takes about 4 seconds while the second one takes approximately 8.2 seconds. Why is there such a big difference between the run times of the two implementations? | temp1():
```
def temp1():
i = 1
while True:
if factors(i * (i+1) * 0.5) > 500:
print(int(i * (i+1) * 0.5))
break
i += 1
```
temp2():
```
def temp2():
def triangle():
i = 1
while True:
yield int(0.5 * i * (i + 1))
i += 1
t = triangle()
while True:
num = t.next()
if factors(num) > 500:
print(num)
break
```
cProfile for both:

After changing the `factors` call in `temp1()` to `factors(int(...))`, it turns out that `temp1()` takes a similar time to `temp2()`.
Modified `temp1` to pass an `int` rather than a `float`:
```
def temp1():
i = 1
while True:
if factors(int(i * (i+1) * 0.5)) > 500:
print(int(i * (i+1) * 0.5))
break
i += 1
```

So it turns out that in your first implementation you are passing a `float` to `factors()`, and floating-point arithmetic is more complex than integer arithmetic.
**Why are floating-point operations more complex?**
Because the way floats are represented internally is different from ints: they are represented in three parts as sign, mantissa and exponent (IEEE 754), whereas the representation of an integer is much simpler, and so are operations like addition and subtraction on integers; even integer multiplication and division are performed internally using a combination of addition, subtraction and shift operations. Since floating-point operations have to manipulate all three parts, they are somewhat more expensive.
**Why is floating-point modulo more expensive than integer modulo?**
The answer is the same as above: a modulo operation is nothing but a combination of the primitive operations mentioned above, as follows:
```
a mod n = a - (n*int(a/n))
```
Since the primitive operations are more expensive for floats, so is modulo for floats.
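A rough micro-benchmark (machine-dependent, so treat the numbers as illustrative only) lets you compare the two kinds of modulo directly; the operands are bound to variables in `setup` so the interpreter cannot constant-fold the expression:

```python
import timeit

# Time one million int modulos vs one million float modulos.
t_int = timeit.timeit('a % b', setup='a, b = 123456789, 97', number=1000000)
t_float = timeit.timeit('a % b', setup='a, b = 123456789.0, 97.0', number=1000000)

print('int   modulo: %.3fs' % t_int)
print('float modulo: %.3fs' % t_float)
```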
How to use JDBC source to write and read data in (Py)Spark? | 30,983,982 | 8 | 2015-06-22T15:30:15Z | 30,983,983 | 21 | 2015-06-22T15:30:15Z | [
"python",
"scala",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | The goal of this question is to document:
* steps required to read and write data using JDBC connections in PySpark
* possible issues with JDBC sources and know solutions
With small changes these methods should work with other supported languages including Scala and R. | ## Writing data
1. Include applicable JDBC driver when you submit the application or start shell. You can use for example `--packages`:
```
bin/pyspark --packages group:name:version
```
or combining `driver-class-path` and `jars`
```
bin/pyspark --driver-class-path $PATH_TO_DRIVER_JAR --jars $PATH_TO_DRIVER_JAR
```
These properties can be also set using `PYSPARK_SUBMIT_ARGS` environment variable before JVM instance has been started or using `conf/spark-defaults.conf` to set `spark.jars.packages` or `spark.jars` / `spark.driver.extraClassPath`.
2. Choose desired mode. Spark JDBC writer supports following modes:
> * `append`: Append contents of this :class:`DataFrame` to existing data.
> * `overwrite`: Overwrite existing data.
> * `ignore`: Silently ignore this operation if data already exists.
> * `error` (default case): Throw an exception if data already exists.
Upserts or other fine-grained modifications are not supported
```
mode = ...
```
3. Prepare JDBC URI, for example:
```
# You can encode credentials in URI or pass
# separately using properties argument
# of jdbc method or options
url = "jdbc:postgresql://localhost/foobar"
```
4. (Optional) Create a dictionary of JDBC arguments.
```
properties = {
"user": "foo",
"password": "bar"
}
```
5. Use `DataFrame.write.jdbc`
```
df.write.jdbc(url=url, table="baz", mode=mode, properties=properties)
```
to save the data (see [`pyspark.sql.DataFrameWriter`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=jdbc#pyspark.sql.DataFrameWriter) for details).
**Known issues**:
* Suitable driver cannot be found when driver has been included using `--packages` (`java.sql.SQLException: No suitable driver found for jdbc: ...`)
Assuming there is no driver version mismatch to solve this you can add `driver` class to the `properties`. For example:
```
properties = {
...
"driver": "org.postgresql.Driver"
}
```
* using `df.write.format("jdbc").options(...).save()` may result in:
> java.lang.RuntimeException: org.apache.spark.sql.execution.datasources.jdbc.DefaultSource does not allow create table as select.
Solution unknown.
* in Pyspark 1.3 you can try calling Java method directly:
```
df._jdf.insertIntoJDBC(url, "baz", True)
```
## Reading data
1. Follow steps 1-4 from *Writing data*
2. Use `sqlContext.read.jdbc`:
```
sqlContext.read.jdbc(url=url, table="baz", properties=properties)
```
or `sqlContext.read.format("jdbc")`:
```
(sqlContext.read.format("jdbc")
.options(url=url, dbtable="baz", **properties)
.load())
```
**Known issues and gotchas**:
* Suitable driver cannot be found - see: Writing data
* Spark SQL supports predicate pushdown with JDBC sources, although not all predicates can be pushed down. It also doesn't delegate limits or aggregations. A possible workaround is to replace the `dbtable` / `table` argument with a valid subquery. See for example:
+ [Does spark predicate pushdown work with JDBC?](http://stackoverflow.com/q/32573991/1560062)
+ [More than one hour to execute pyspark.sql.DataFrame.take(4)](http://stackoverflow.com/q/35869884/1560062)
## Where to find suitable drivers:
* [Maven Repository](http://mvnrepository.com/) (to obtain required coordinates for `--packages` select desired version and copy data from a Gradle tab in a form `compile-group:name:version` substituting respective fields) or [Maven Central Repository](https://search.maven.org/):
+ [PostgreSQL](http://mvnrepository.com/artifact/org.postgresql/postgresql)
+ [MySQL](http://mvnrepository.com/artifact/mysql/mysql-connector-java) |
Convert Rust vector of tuples to a C compatible structure | 30,984,688 | 16 | 2015-06-22T16:03:57Z | 30,992,210 | 16 | 2015-06-23T00:57:23Z | [
"python",
"rust",
"ctypes"
] | Following [these](http://stackoverflow.com/questions/30312885/pass-python-list-to-embedded-rust-function?lq=1) [answers](http://stackoverflow.com/questions/29182843/pass-a-c-array-to-a-rust-function), I've currently defined a Rust 1.0 function as follows, in order to be callable from Python using `ctypes`:
```
use std::vec;
extern crate libc;
use libc::{c_int, c_float, size_t};
use std::slice;
#[no_mangle]
pub extern fn convert_vec(input_lon: *const c_float,
lon_size: size_t,
input_lat: *const c_float,
lat_size: size_t) -> Vec<(i32, i32)> {
let input_lon = unsafe {
slice::from_raw_parts(input_lon, lon_size as usize)
};
let input_lat = unsafe {
slice::from_raw_parts(input_lat, lat_size as usize)
};
let combined: Vec<(i32, i32)> = input_lon
.iter()
.zip(input_lat.iter())
.map(|each| convert(*each.0, *each.1))
.collect();
return combined
}
```
And I'm setting up the Python part like so:
```
from ctypes import *
class Int32_2(Structure):
_fields_ = [("array", c_int32 * 2)]
rust_bng_vec = lib.convert_vec_py
rust_bng_vec.argtypes = [POINTER(c_float), c_size_t,
POINTER(c_float), c_size_t]
rust_bng_vec.restype = POINTER(Int32_2)
```
This seems to be OK, but I'm:
* Not sure how to transform `combined` (a `Vec<(i32, i32)>`) to a C-compatible structure, so it can be returned to my Python script.
* Not sure whether I should be returning a reference (`return &combined`?) and how I would have to annotate the function with the appropriate lifetime specifier if I did | The most important thing to note is that there is **no such thing** as a tuple in C. C is the *lingua franca* of library interoperability, and you will be required to restrict yourself to abilities of this language. It doesn't matter if you are talking between Rust and another high-level language; you have to speak C.
There may not be tuples in C, but there are `struct`s. A two-element tuple is just a struct with two members!
Let's start with the C code that we would write:
```
#include <stdio.h>
#include <stdint.h>
typedef struct {
uint32_t a;
uint32_t b;
} tuple_t;
typedef struct {
void *data;
size_t len;
} array_t;
extern array_t convert_vec(array_t lat, array_t lon);
int main() {
uint32_t lats[3] = {0, 1, 2};
uint32_t lons[3] = {9, 8, 7};
array_t lat = { .data = lats, .len = 3 };
array_t lon = { .data = lons, .len = 3 };
array_t fixed = convert_vec(lat, lon);
tuple_t *real = fixed.data;
for (int i = 0; i < fixed.len; i++) {
printf("%d, %d\n", real[i].a, real[i].b);
}
return 0;
}
```
We've defined two `struct`s â one to represent our tuple, and another to represent an array, as we will be passing those back and forth a bit.
We will follow this up by defining the *exact same* structs in Rust and define them to have the *exact same* members (types, ordering, names). Importantly, we use `#[repr(C)]` to let the Rust compiler know to not do anything funky with reordering the data.
```
extern crate libc;
use std::slice;
use std::mem;
#[repr(C)]
pub struct Tuple {
a: libc::uint32_t,
b: libc::uint32_t,
}
#[repr(C)]
pub struct Array {
data: *const libc::c_void,
len: libc::size_t,
}
impl Array {
unsafe fn as_u32_slice(&self) -> &[u32] {
assert!(!self.data.is_null());
slice::from_raw_parts(self.data as *const u32, self.len as usize)
}
fn from_vec<T>(mut vec: Vec<T>) -> Array {
// Important to make length and capacity match
// A better solution is to track both length and capacity
vec.shrink_to_fit();
let array = Array { data: vec.as_ptr() as *const libc::c_void, len: vec.len() as libc::size_t };
// Whee! Leak the memory, and now the raw pointer (and
// eventually C) is the owner.
mem::forget(vec);
array
}
}
#[no_mangle]
pub extern fn convert_vec(lon: Array, lat: Array) -> Array {
let lon = unsafe { lon.as_u32_slice() };
let lat = unsafe { lat.as_u32_slice() };
let vec =
lat.iter().zip(lon.iter())
.map(|(&lat, &lon)| Tuple { a: lat, b: lon })
.collect();
Array::from_vec(vec)
}
```
We must *never* accept or return non-`repr(C)` types across the FFI boundary, so we pass across our `Array`. Note that there's a good amount of `unsafe` code, as we have to convert an unknown pointer to data (`c_void`) to a specific type. That's the price of being generic in C world.
Let's turn our eye to Python now. Basically, we just have to mimic what the C code did:
```
import ctypes
class FFITuple(ctypes.Structure):
_fields_ = [("a", ctypes.c_uint32),
("b", ctypes.c_uint32)]
class FFIArray(ctypes.Structure):
_fields_ = [("data", ctypes.c_void_p),
("len", ctypes.c_size_t)]
# Allow implicit conversions from a sequence of 32-bit unsigned
# integers.
@classmethod
def from_param(cls, seq):
return cls(seq)
# Wrap sequence of values. You can specify another type besides a
# 32-bit unsigned integer.
def __init__(self, seq, data_type = ctypes.c_uint32):
array_type = data_type * len(seq)
raw_seq = array_type(*seq)
self.data = ctypes.cast(raw_seq, ctypes.c_void_p)
self.len = len(seq)
# A conversion function that cleans up the result value to make it
# nicer to consume.
def void_array_to_tuple_list(array, _func, _args):
tuple_array = ctypes.cast(array.data, ctypes.POINTER(FFITuple))
return [tuple_array[i] for i in range(0, array.len)]
lib = ctypes.cdll.LoadLibrary("./target/debug/libtupleffi.dylib")
lib.convert_vec.argtypes = (FFIArray, FFIArray)
lib.convert_vec.restype = FFIArray
lib.convert_vec.errcheck = void_array_to_tuple_list
for tupl in lib.convert_vec([1,2,3], [9,8,7]):
print tupl.a, tupl.b
```
Forgive my rudimentary Python. ~~I'm sure an experienced Pythonista could make this look a lot prettier!~~ Thanks to @eryksun for [some nice advice](http://stackoverflow.com/questions/30984688/convert-rust-vector-of-tuples-to-a-c-compatible-structure/30992210#comment50015596_30992210) on how to make the consumer side of calling the method **much** nicer.
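The wrapping logic on the Python side can be exercised without the Rust library at all. Here is a self-contained sketch of the same `FFIArray` idea; note that it keeps a reference to the underlying buffer so it is not freed while the struct is alive (a lifetime caveat worth being aware of with `ctypes.cast`):

```python
import ctypes

class FFIArray(ctypes.Structure):
    _fields_ = [("data", ctypes.c_void_p),
                ("len", ctypes.c_size_t)]

    def __init__(self, seq, data_type=ctypes.c_uint32):
        array_type = data_type * len(seq)
        self._buffer = array_type(*seq)  # keep the buffer alive with the struct
        self.data = ctypes.cast(self._buffer, ctypes.c_void_p)
        self.len = len(seq)

arr = FFIArray([1, 2, 3])
# Read the values back through the void pointer, the way C (or Rust) would:
u32s = ctypes.cast(arr.data, ctypes.POINTER(ctypes.c_uint32))
print([u32s[i] for i in range(arr.len)])  # [1, 2, 3]
```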
# A word about ownership and memory leaks
In this example code, we've leaked the memory allocated by the `Vec`. Theoretically, the FFI code now owns the memory, but realistically, it can't do anything useful with it. To have a fully correct example, you'd need to add another method that would accept the pointer back from the callee, transform it back into a `Vec`, then allow Rust to drop the value. This is the only safe way, as Rust is almost guaranteed to use a different memory allocator than the one your FFI language is using.
> Not sure whether I should be returning a reference and how I would have to annotate the function with the appropriate lifetime specifier if I did
No, you don't want to (read: *can't*) return a reference. If you could, then the ownership of the item would end with the function call, and the reference would point to nothing. This is why we need to do the two-step dance with `mem::forget` and returning a raw pointer. |
Python regular expression, matching the last word | 30,986,345 | 4 | 2015-06-22T17:36:17Z | 30,986,390 | 7 | 2015-06-22T17:39:00Z | [
"python",
"regex",
"list"
] | I've the following problem. I'm looking to find all words in a string that typically looks like so
`HelloWorldToYou`
Notice, each word is capitalized as a start followed by the next word and so on.
I'm looking to create a list of words from it. So the final expected output is a list that looks like
```
['Hello','World','To','You']
```
In Python, I used the following
```
mystr = 'HelloWorldToYou'
pat = re.compile(r'([A-Z](.*?))(?=[A-Z]+)')
[x[0] for x in pat.findall(mystr)]
['Hello', 'World', 'To']
```
However, I'm unable to capture the last word 'You'. Is there a way to get at this?
Thanks in advance | Use the alternation with `$`:
```
import re
mystr = 'HelloWorldToYou'
pat = re.compile(r'([A-Z][a-z]*)')
# or your version with `.*?`: pat = re.compile(r'([A-Z].*?)(?=[A-Z]+|$)')
print pat.findall(mystr)
```
See [IDEONE demo](https://ideone.com/Qf5mpx)
Output:
```
['Hello', 'World', 'To', 'You']
```
**Regex explanation**:
* `([A-Z][a-z]*)` - A capturing group that matches
+ `[A-Z]` a capital English letter followed by
+ `[a-z]*` - optional number of lowercase English letters
-OR-
+ `.*?` - Match any characters other than a newline lazily
The lookahead can be omitted if we use `[a-z]*`, but if you use `.*?`, then use it:
* `(?=[A-Z]+|$)` - Up to an uppercase English letter (we can actually remove `+` here), **OR** the end of string (`$`).
If you do not use a look-ahead version, you can even remove the capturing group for better performance and [use `finditer`](https://ideone.com/FIzLOO):
```
import re
mystr = 'HelloWorldToYou'
pat = re.compile(r'[A-Z][a-z]*')
print [x.group() for x in pat.finditer(mystr)]
``` |
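One edge case worth knowing (a side note beyond the question's input): a run of capitals such as an acronym is split into single letters by `[A-Z][a-z]*`; an alternation with a lookahead keeps acronyms together:

```python
import re

s = 'ParseHTTPResponse'
print(re.findall(r'[A-Z][a-z]*', s))
# ['Parse', 'H', 'T', 'T', 'P', 'Response']

# [A-Z][a-z]+ matches a normal capitalized word; [A-Z]+(?![a-z])
# matches a run of capitals not followed by a lowercase letter.
print(re.findall(r'[A-Z][a-z]+|[A-Z]+(?![a-z])', s))
# ['Parse', 'HTTP', 'Response']
```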
Double Asterisk | 30,987,462 | 2 | 2015-06-22T18:42:58Z | 30,988,340 | 9 | 2015-06-22T19:34:56Z | [
"python"
] | I'm new to Python and really stumped on this. I'm reading from a book and the code works fine; I just don't get it!
```
T[i+1] = m*v[i+1]ˆ**/L
```
What's with the double asterisk part of this code? It's even followed by a forward slash. The variable L is initialized with the value 1.0. However, it looks like someone slumped over the keyboard, but the code works fine. Is this a math expression or something more? I would appreciate the help understanding this. Thanks!
full code:
```
from pylab import *
g = 9.8 # m/sˆ2
dt = 0.01 # s
time = 10.0 # s
v0 = 2.0 # s
D = 0.05 #
L = 1.0 # m
m = 0.5 # kg
# Numerical initialization
n = int(round(time/dt))
t = zeros(n,float)
s = zeros(n,float)
v = zeros(n,float)
T = zeros(n,float)
# Initial conditions
v[0] = v0
s[0] = 0.0
# Simulation loop
i = 0
while (i<n AND T[i]>=0.0):
t[i+1] = t[i] + dt
a = -D/m*v[i]*abs(v[i])-g*sin(s[i]/L)
v[i+1] = v[i] + a*dt
s[i+1] = s[i] + v[i+1]*dt
T[i+1] = m*v[i+1]ˆ**/L + m*g*cos(s[i+1]/L)
i = i + 1
``` | This code is from the book "Elementary Mechanics Using Python: A Modern Course Combining Analytical and Numerical Techniques".
According to the formula on the page 255:

So the Python line should be:
```
T[i+1] = m*v[i+1]**2/L + m*g*cos(s[i+1]/L)
``` |
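For reference, a tiny demonstration of the `**` operator doing the squaring in the corrected line (the values here are arbitrary):

```python
v, m, L = 3.0, 0.5, 1.0
assert v**2 == 9.0    # ** is Python's exponentiation operator
T = m * v**2 / L      # 0.5 * 9.0 / 1.0
assert T == 4.5
```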
Support vector machine in Python using libsvm example of features | 30,991,592 | 5 | 2015-06-22T23:34:10Z | 31,093,946 | 10 | 2015-06-27T22:00:00Z | [
"python",
"machine-learning",
"svm",
"libsvm"
] | I have scraped a lot of ebay titles like this one:
```
Apple iPhone 5 White 16GB Dual-Core
```
and I have manually tagged all of them in this way
```
B M C S NA
```
where B=Brand (Apple) M=Model (iPhone 5) C=Color (White) S=Size (Size) NA=Not Assigned (Dual Core)
Now I need to train a SVM classifier using the libsvm library in python to learn the sequence patterns that occur in the ebay titles.
I need to extract new value for that attributes (Brand, Model, Color, Size) by considering the problem as a classification one. In this way I can predict new models.
I want to considering this features:
```
* Position
- from the beginning of the title
- to the end of the listing
* Orthographic features
- current word contains a digit
- current word is capitalized
....
```
I can't understand how can I give all this info to the library. The official doc lacks a lot of information
My class are Brand, Model, Size, Color, NA
what does the input file of the SVM algo must contain?
how can I create it? could I have an example of that file considering the 4 features that I put as example in my question? Can I also have an example of the code that I must use to elaborate the input file ?
**\* UPDATE \***
I want to represent these features... How can I must do?
1. Identity of the current word
I think that I can interpret it in this way
```
0 --> Brand
1 --> Model
2 --> Color
3 --> Size
4 --> NA
```
If I know that the word is a Brand I will set that variable to 1 (true).
It is ok to do it in the training test (because I have tagged all the words) but how can I do that for the test set? I don't know what is the category of a word (this is why I'm learning it :D).
2. N-gram substring features of current word (N=4,5,6)
No Idea, what does it means?
3. Identity of 2 words before the current word.
How can I model this feature?
Considering the legend that I create for the 1st feature I have 5^(5) combination)
```
00 10 20 30 40
01 11 21 31 41
02 12 22 32 42
03 13 23 33 43
04 14 24 34 44
```
How can I convert it to a format that the libsvm (or scikit-learn) can understand?
4. Membership to the 4 dictionaries of attributes
Again how can I do it?
Having 4 dictionaries (for color, size, model and brand) I thing that I must create a bool variable that I will set to true if and only if I have a match of the current word in one of the 4 dictionaries.
5. Exclusive membership to dictionary of brand names
I think that like in the 4. feature I must use a bool variable. Do you agree? | Here's a step-by-step guide for how to train an SVM using your data and then evaluate using the same dataset. It's also available at <http://nbviewer.ipython.org/gist/anonymous/2cf3b993aab10bf26d5f>. At the url you can also see the output of the intermediate data and the resulting accuracy (it's an [iPython notebook](http://ipython.org/notebook.html))
### Step 0: Install dependencies
You need to install the following libraries:
* pandas
* scikit-learn
From command line:
```
pip install pandas
pip install scikit-learn
```
### Step 1: Load the data
We will use pandas to load our data.
pandas is a library for easily loading data. For illustration, we first save
sample data to a csv and then load it.
We will train the SVM with `train.csv` and get test labels with `test.csv`
```
import pandas as pd
train_data_contents = """
class_label,distance_from_beginning,distance_from_end,contains_digit,capitalized
B,1,10,1,0
M,10,1,0,1
C,2,3,0,1
S,23,2,0,0
N,12,0,0,1"""
with open('train.csv', 'w') as output:
output.write(train_data_contents)
train_dataframe = pd.read_csv('train.csv')
```
### Step 2: Process the data
We will convert our dataframe into numpy arrays which is a format that scikit-
learn understands.
We need to convert the labels "B", "M", "C",... to numbers also because svm does
not understand strings.
Then we will train a linear svm with the data
```
import numpy as np
train_labels = train_dataframe.class_label
labels = sorted(set(train_labels))  # sorted so the train/test encodings agree
train_labels = np.array([labels.index(x) for x in train_labels])
train_features = train_dataframe.iloc[:,1:]
train_features = np.array(train_features)
print "train labels: "
print train_labels
print
print "train features:"
print train_features
```
We see here that the length of `train_labels` (5) exactly matches how many rows
we have in `train_features`. Each item in `train_labels` corresponds to a row.
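The label-to-index conversion in Step 2 can be sketched in plain Python (using `sorted` here so the mapping is reproducible between runs):

```python
class_labels = ['B', 'M', 'C', 'S', 'N']
labels = sorted(set(class_labels))            # ['B', 'C', 'M', 'N', 'S']
encoded = [labels.index(x) for x in class_labels]
print(encoded)  # [0, 2, 1, 4, 3]
```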
### Step 3: Train the SVM
```
from sklearn import svm
classifier = svm.SVC()
classifier.fit(train_features, train_labels)
```
### Step 4: Evaluate the SVM on some testing data
```
test_data_contents = """
class_label,distance_from_beginning,distance_from_end,contains_digit,capitalized
B,1,10,1,0
M,10,1,0,1
C,2,3,0,1
S,23,2,0,0
N,12,0,0,1
"""
with open('test.csv', 'w') as output:
output.write(test_data_contents)
test_dataframe = pd.read_csv('test.csv')
test_labels = test_dataframe.class_label
labels = sorted(set(test_labels))  # same deterministic ordering as in training
test_labels = np.array([labels.index(x) for x in test_labels])
test_features = test_dataframe.iloc[:,1:]
test_features = np.array(test_features)
results = classifier.predict(test_features)
num_correct = (results == test_labels).sum()
recall = num_correct / float(len(test_labels))  # float() guards against Python 2 integer division
print "model accuracy (%): ", recall * 100, "%"
```
### Links & Tips
* Example code for how to load LinearSVC: <http://scikit-learn.org/stable/modules/svm.html#svm>
* Long list of scikit-learn examples: <http://scikit-learn.org/stable/auto_examples/index.html>. I've found these mildly helpful but
often confusing myself.
* If you find that the SVM is taking a long time to train, try LinearSVC
instead: <http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html>
* Here's another tutorial on getting familiar with machine learning models: <http://scikit-learn.org/stable/tutorial/basic/tutorial.html>
You should be able to take this code and replace `train.csv` with your training data, `test.csv` with your testing data, and get predictions for your test data, along with accuracy results.
Note that since you're evaluating using the data you trained on the accuracy will be unusually high. |
Weird numpy.sum behavior when adding zeros | 30,998,305 | 21 | 2015-06-23T09:00:24Z | 31,072,639 | 7 | 2015-06-26T11:57:47Z | [
"python",
"numpy",
"sum",
"numerical-stability"
] | I understand how mathematically-equivalent arithmentic operations can result in different results due to numerical errors (e.g. summing floats in different orders).
However, it surprises me that adding zeros to `sum` can change the result. I thought that this always holds for floats, no matter what: `x + 0. == x`.
Here's an example. I expected all the lines to be exactly zero. Can anybody please explain why this happens?
```
M = 4 # number of random values
Z = 4 # number of additional zeros
for i in range(20):
a = np.random.rand(M)
b = np.zeros(M+Z)
b[:M] = a
print a.sum() - b.sum()
-4.4408920985e-16
0.0
0.0
0.0
4.4408920985e-16
0.0
-4.4408920985e-16
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
2.22044604925e-16
0.0
4.4408920985e-16
4.4408920985e-16
0.0
```
It seems not to happen for smaller values of `M` and `Z`.
I also made sure `a.dtype==b.dtype`.
Here is one more example, which also demonstrates python's builtin `sum` behaves as expected:
```
a = np.array([0.1, 1.0/3, 1.0/7, 1.0/13, 1.0/23])
b = np.array([0.1, 0.0, 1.0/3, 0.0, 1.0/7, 0.0, 1.0/13, 1.0/23])
print a.sum() - b.sum()
=> -1.11022302463e-16
print sum(a) - sum(b)
=> 0.0
```
I'm using numpy V1.9.2. | **Short answer:** You are seeing the difference between
```
a + b + c + d
```
and
```
(a + b) + (c + d)
```
which because of floating point inaccuracies is not the same.
**Long answer:** Numpy implements pair-wise summation as an optimization, improving both speed (it allows for easier vectorization) and accuracy (smaller accumulated rounding error).
The numpy sum-implementation can be found [here](https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/loops.c.src) (function `pairwise_sum_@TYPE@`). It essentially does the following:
1. If the length of the array is less than 8, a regular for-loop summation is performed. This is why the strange result is not observed if `Z < 4` in your case - both arrays are then shorter than 8 elements, so the same for-loop summation is used for both.
2. If the length is between 8 and 128, it accumulates the sums in 8 bins `r[0]-r[7]` then sums them by `((r[0] + r[1]) + (r[2] + r[3])) + ((r[4] + r[5]) + (r[6] + r[7]))`.
3. Otherwise, it recursively sums two halves of the array.
Therefore, in the first case you get `a.sum() = a[0] + a[1] + a[2] + a[3]` and in the second case `b.sum() = (a[0] + a[1]) + (a[2] + a[3])` which leads to `a.sum() - b.sum() != 0`. |
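The effect is easy to reproduce in plain Python (no numpy needed); the classic 0.1/0.2/0.3 example shows that regrouping a float sum can change the last bit:

```python
seq = (0.1 + 0.2) + 0.3   # left-to-right, like the simple for-loop path
pair = 0.1 + (0.2 + 0.3)  # regrouped, like the pairwise path
print(seq)   # 0.6000000000000001
print(pair)  # 0.6
assert seq != pair
```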
Check if a list is a rotation of another list that works with duplicates | 31,000,591 | 17 | 2015-06-23T10:41:22Z | 31,000,688 | 7 | 2015-06-23T10:46:13Z | [
"python",
"arrays",
"algorithm",
"time-complexity"
] | I have this function for determining if a list is a rotation of another list:
```
def isRotation(a,b):
if len(a) != len(b):
return False
c=b*2
i=0
while a[0] != c[i]:
i+=1
for x in a:
if x!= c[i]:
return False
i+=1
return True
```
e.g.
```
>>> a = [1,2,3]
>>> b = [2,3,1]
>>> isRotation(a, b)
True
```
How do I make this work with duplicates? e.g.
```
a = [3,1,2,3,4]
b = [3,4,3,1,2]
```
And can it be done in `O(n)`time? | I think you could use something like this:
```
a1 = [3,4,5,1,2,4,2]
a2 = [4,5,1,2,4,2,3]
# Array a2 is rotation of array a1 if it's sublist of a1+a1
def is_rotation(a1, a2):
if len(a1) != len(a2):
return False
double_array = a1 + a1
return check_sublist(double_array, a2)
def check_sublist(a1, a2):
    if len(a1) < len(a2):
        return False
    i = j = 0
    while i < len(a1):
        if a1[i] == a2[j]:
            i += 1
            j += 1
            if j == len(a2):
                return True
        else:
            # On a mismatch, restart one position after where this
            # attempt began (a plain j = 0 here would skip valid
            # matches, e.g. [1,1,2] vs its rotation [1,2,1]).
            i = i - j + 1
            j = 0
    return False
```
Just common sense if we are talking about interview questions:
* we should remember that solution should be easy to code and to describe.
* do not try to remember solution on interview. It's better to remember core principle and re-implement it. |
Check if a list is a rotation of another list that works with duplicates | 31,000,591 | 17 | 2015-06-23T10:41:22Z | 31,000,695 | 28 | 2015-06-23T10:46:36Z | [
"python",
"arrays",
"algorithm",
"time-complexity"
] | I have this function for determining if a list is a rotation of another list:
```
def isRotation(a,b):
if len(a) != len(b):
return False
c=b*2
i=0
while a[0] != c[i]:
i+=1
for x in a:
if x!= c[i]:
return False
i+=1
return True
```
e.g.
```
>>> a = [1,2,3]
>>> b = [2,3,1]
>>> isRotation(a, b)
True
```
How do I make this work with duplicates? e.g.
```
a = [3,1,2,3,4]
b = [3,4,3,1,2]
```
And can it be done in `O(n)`time? | The following meta-algorithm will solve it.
* Build a concatenation of `a`, e.g., `a = [3,1,2,3,4]` => `aa = [3,1,2,3,4,3,1,2,3,4]`.
* Run any string adaptation of a string-matching algorithm, e.g., [Boyer Moore](https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm) to find `b` in `aa`.
---
One particularly easy implementation, which I would first try, is to use [Rabin Karp](https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm) as the underlying algorithm. In this, you would
* calculate the [Rabin Fingerprint](https://en.wikipedia.org/wiki/Rabin_fingerprint) for `b`
* calculate the Rabin fingerprint for `aa[: len(b)]`, `aa[1: len(b) + 1]`, ..., and compare the lists only when the fingerprints match
Note that
* The Rabin fingerprint for a sliding window can be calculated iteratively very efficiently (read about it in the Rabin-Karp link)
* If your list is of integers, you actually have a slightly easier time than for strings, as you don't need to think what is the numerical hash value of a letter
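A hedged sketch of the Rabin-Karp variant described above (the function name, base and modulus are my own choices; candidate windows are confirmed by direct comparison, so hash collisions cannot cause false positives):

```python
def is_rotation_rk(a, b):
    # b is a rotation of a iff b occurs in a + a (and lengths match).
    n = len(a)
    if n != len(b):
        return False
    if n == 0:
        return True
    base, mod = 1000003, (1 << 61) - 1
    top = pow(base, n - 1, mod)          # weight of the window's first element

    def full_hash(xs):
        acc = 0
        for x in xs:
            acc = (acc * base + hash(x)) % mod
        return acc

    target = full_hash(b)
    aa = a + a
    window = full_hash(aa[:n])
    for i in range(n + 1):               # candidate windows aa[i:i+n]
        if window == target and aa[i:i + n] == b:
            return True
        if i < n:                        # slide the window one step right
            window = ((window - hash(aa[i]) * top) * base + hash(aa[i + n])) % mod
    return False

print(is_rotation_rk([3, 1, 2, 3, 4], [3, 4, 3, 1, 2]))  # True
```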
Check if a list is a rotation of another list that works with duplicates | 31,000,591 | 17 | 2015-06-23T10:41:22Z | 31,001,317 | 13 | 2015-06-23T11:16:35Z | [
"python",
"arrays",
"algorithm",
"time-complexity"
] | I have this function for determining if a list is a rotation of another list:
```
def isRotation(a,b):
if len(a) != len(b):
return False
c=b*2
i=0
while a[0] != c[i]:
i+=1
for x in a:
if x!= c[i]:
return False
i+=1
return True
```
e.g.
```
>>> a = [1,2,3]
>>> b = [2,3,1]
>>> isRotation(a, b)
True
```
How do I make this work with duplicates? e.g.
```
a = [3,1,2,3,4]
b = [3,4,3,1,2]
```
And can it be done in `O(n)`time? | You can do it in `0(n)` time and `0(1)` space using a modified version of a maximal suffixes algorithm:
From [Jewels of Stringology](https://books.google.ie/books?id=9NdohJXtIyYC&lpg=PA139&ots=ln6d-CM3Gb&dq=Equivalence%20of%20cyclic%20two%20words&pg=PA139#v=onepage&q=Equivalence%20of%20cyclic%20two%20words&f=false):
**Cyclic equality of words**
> A rotation of a word u of length n is any word of the form u[k + 1...n]u[1...k]. Let u, w be two words of the same length n. They are said to be cyclic-equivalent if u^(i) == w^(j) for some i, j, where u^(i) denotes the i-th rotation of u.
>
> If words u and w are written as circles, they are cyclic-equivalent if the circles coincide after appropriate rotations.
>
> There are several linear-time algorithms for testing the cyclic-equivalence
> of two words. The simplest one is to apply any string matching algorithm to pattern pat = u and text = ww because words u and w are cyclic-equivalent if pat occurs in text.
>
> Another algorithm is to find maximal suffixes of uu and ww and check if
> they are identical on prefixes of size n. We have chosen this problem because there is a simpler interesting algorithm, working in linear time and constant space simultaneously, which deserves presentation.
```
Algorithm Cyclic-Equivalence(u, w)
{ checks cyclic equality of u and w of common length n }
x := uu; y := ww;
i := 0; j := 0;
while (i < n) and (j < n) do begin
    k := 1;
    while x[i + k] = y[j + k] do k := k + 1;
    if k > n then return true;
    if x[i + k] > y[j + k] then i := i + k else j := j + k;
    { invariant }
end;
return false;
```
Which translated to python becomes:
```
def cyclic_equiv(u, v):
n, i, j = len(u), 0, 0
if n != len(v):
return False
while i < n and j < n:
k = 1
while k <= n and u[(i + k) % n] == v[(j + k) % n]:
k += 1
if k > n:
return True
if u[(i + k) % n] > v[(j + k) % n]:
i += k
else:
j += k
return False
```
Running a few examples:
```
In [4]: a = [3,1,2,3,4]
In [5]: b =[3,4,3,1,2]
In [6]: cyclic_equiv(a,b)
Out[6]: True
In [7]: b =[3,4,3,2,1]
In [8]: cyclic_equiv(a,b)
Out[8]: False
In [9]: b =[3,4,3,2]
In [10]: cyclic_equiv(a,b)
Out[10]: False
In [11]: cyclic_equiv([1,2,3],[1,2,3])
Out[11]: True
In [12]: cyclic_equiv([3,1,2],[1,2,3])
Out[12]: True
```
A more naive approach would be to use a collections.deque to rotate the elements:
```
def rot(l1,l2):
from collections import deque
if l1 == l2:
return True
# if length is different we cannot get a match
if len(l2) != len(l1):
return False
# if any elements are different we cannot get a match
if set(l1).difference(l2):
return False
l2,l1 = deque(l2),deque(l1)
for i in range(len(l1)):
l2.rotate() # l2.appendleft(d.pop())
if l1 == l2:
return True
return False
``` |
Reddit search API not giving all results | 31,000,892 | 4 | 2015-06-23T10:56:22Z | 31,093,573 | 7 | 2015-06-27T21:12:29Z | [
"python",
"python-2.7",
"praw"
] | ```
import praw
def get_data_reddit(search):
username=""
password=""
r = praw.Reddit(user_agent='')
r.login(username,password,disable_warning=True)
posts=r.search(search, subreddit=None,sort=None, syntax=None,period=None,limit=None)
title=[]
for post in posts:
title.append(post.title)
print len(title)
search="stackoverflow"
get_data_reddit(search)
```
> Output = 953
Why the limitation?
1. [Documentation](https://praw.readthedocs.org/en/v2.0.15/pages/faq.html) mentions
> We can at most get 1000 results from every listing, this is an
> upstream limitation by reddit. There is nothing we can do to go past
> this limit. But we may be able to get the results we want with the
> search() method instead.
Any workaround? I am hoping there is some way to overcome this in the API; I wrote a scraper for Twitter data and found it to be not the most efficient solution.
Same question: <https://github.com/praw-dev/praw/issues/430>
Please refer to the aforementioned link for related discussion too. | Limiting results on a search or list is a common tactic for reducing load on servers. The reddit API is clear that this is what it does (as you have already flagged). However, it doesn't stop there...
The API also supports a variation of paged results for listings. Since it is a constantly changing database, they don't provide pages, but instead allow you to pick up where you left off by using the 'after' parameter. This is documented [here](https://www.reddit.com/dev/api#listings).
Now, while I'm not familiar with PRAW, I see that the reddit [search API](https://www.reddit.com/dev/api#GET_search) conforms to the listing syntax. I think you therefore only need to reissue your search, specifying the extra 'after' parameter (referring to your last result from the first search).
Having subsequently tried it out, it appears PRAW is genuinely returning you all the results you asked for.
As requested by OP, here's the code I wrote to look at the paged results.
```
import praw
def get_data_reddit(search, after=None):
r = praw.Reddit(user_agent='StackOverflow example')
params = {"q": search}
if after:
params["after"] = "t3_" + str(after.id)
posts = r.get_content(r.config['search'] % 'all', params=params, limit=100)
return posts
search = "stackoverflow"
post = None
count = 0
while True:
posts = get_data_reddit(search, post)
for post in posts:
print(str(post.id))
count += 1
print(count)
``` |
What is python-dev package used for | 31,002,091 | 9 | 2015-06-23T11:54:12Z | 31,002,176 | 7 | 2015-06-23T11:58:34Z | [
"python",
"cpython"
] | I recently installed `lxml`.
Before that I had to install all the dependencies for that.
So I tried to install `liblxml2-dev`, `liblxslt1-dev` and `python-dev`
(google searched for what packages are required for `lxml`)
but even after that I was not able to install `lxml` using the command
`pip install lxml`.
However, because I am using Python 3.4.0, I thought that maybe there is a different version of `python-dev` (a thought prompted by a similar version-conflict problem I had faced). So I tried to install `python3-dev`.
Then I tried to install `lxml` using the same command said earlier and that worked!!!
As I am a newbie in this, I Google-searched those terms
and was able to understand the meaning of `lxml` and `xslt`. I found a [link](https://packages.debian.org/sid/python-dev) for `python-dev` but I am still not sure what it actually does.
Could anyone help :) | python-dev contains the header files you need to build Python extensions. lxml is a Python C-API extension that is compiled when you do `pip install lxml`. The lxml sources have at least something like `#include <Python.h>` in the code. The compiler looks for the Python.h file during compilation, hence those files need to be on your system such that they can be found.
On Linux, typically, the binary libraries and header files of packages like Python are separate. Hence you can have Python installed and everything works fine, but when you want to build extensions you need to install the corresponding dev package.
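As a side note, you can ask Python itself where the compiler will look for those headers; a small illustrative check using the standard-library `sysconfig` module (whether `Python.h` actually exists there depends on whether the dev package is installed on your system):

```python
import os
import sysconfig

# Directory where a compiler building a C extension expects to find Python.h
include_dir = sysconfig.get_paths()["include"]
print(include_dir)

# True only if the dev headers (python-dev / python3-dev) are installed
print(os.path.exists(os.path.join(include_dir, "Python.h")))
```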
Stopping list selection in Python 2.7 | 31,003,486 | 11 | 2015-06-23T12:59:20Z | 31,003,579 | 23 | 2015-06-23T13:03:41Z | [
"python",
"list",
"python-2.7"
] | Imagine that I have an order list of tuples:
```
s = [(0,-1), (1,0), (2,-1), (3,0), (4,0), (5,-1), (6,0), (7,-1)]
```
Given a parameter `X`, I want to select all the tuples that have a first element equal or greater than `X` up to but not including the first tuple that has -1 as the second element.
For example, if `X = 3`, I want to select the list `[(3,0), (4,0)]`
One idea I had is:
Get the cut-off key with
```
E = min(x[0] for x in s if x[0] >= X and x[1] == -1)
```
Then select elements with keys between the `X` and `E`:
```
R = [x for x in s if X <= x[0] < E]
```
That gives me what I want in R, but it seems really inefficient, involving two table scans. I could do it in a for loop, discarding tuples with keys too small, and *break* when I hit the first blocking tuple. But a for loop runs like a dog compared to list selection.
Is there a super-efficient, python-esque (2.7) way of doing this? | You can simply filter the tuples from the list as a generator expression and then you can stop taking the values from the generator expression when you get the first tuple whose second element is `-1`, like this
```
>>> s = [(0,-1), (1,0), (2,-1), (3,0), (4,0), (5,-1), (6,0), (7,-1)]
>>> from itertools import takewhile
>>> X = 3
>>> list(takewhile(lambda x: x[1] != -1, (item for item in s if item[0] >= X)))
[(3, 0), (4, 0)]
```
Here, the generator expression `(item for item in s if item[0] >= X)` will give values one-by-one, on demand (they are not generated all at once, so we save memory here), yielding only the tuples whose first element is greater than or equal to `X`.
Then, we take values from that generator expression only until we find a tuple whose second element is equal to `-1`, with [`itertools.takewhile`](https://docs.python.org/2/library/itertools.html#itertools.takewhile).
My Django installs in virtual env are missing admin templates folder | 31,009,216 | 10 | 2015-06-23T17:12:21Z | 34,532,454 | 9 | 2015-12-30T15:36:01Z | [
"python",
"django"
] | When I create a venv and install Django in it (for example "DjangoUpgrade"), the templates folder is missing at this path:
```
:~/.venvs/DjangoUpgrade/local/lib/python2.7/site-packages/django/contrib/admin
```
Just to be clear this is the ls from that folder.
`actions.py exceptions.py filters.py forms.py helpers.py __init__.py models.py options.py sites.py templatetags tests.pyc util.pyc validation.pyc widgets.py
actions.pyc exceptions.pyc filters.pyc forms.pyc helpers.pyc __init__.pyc models.pyc options.pyc sites.pyc tests.py util.py validation.py views widgets.pyc`
This happens at Django 1.3 / 1.4 in completely fresh venvs, both when nothing else is installed yet and when everything else is installed fine via pip.
When I copy the admin templates folder from a colleague's correct, working install, it then works perfectly. What is going wrong here?
[We're upgrading through the versions atm so forgive the older Django version, it's still supported though]
python 2.7.3.
Django 1.4.20
pip 7.0.3 | To solve this issue you should use "--no-binary" while installing django.
```
pip install --no-binary django -r requirements.txt
```
or
```
pip install --no-binary django django==1.4.21
```
Remember to upgrade your PIP installation to have the "--no-binary" option.
You can get further information in this link:
<https://github.com/pypa/pip/issues/2823> |
Iterate over list of values dictionary | 31,018,651 | 3 | 2015-06-24T05:55:54Z | 31,018,728 | 13 | 2015-06-24T06:00:54Z | [
"python",
"dictionary"
] | I have a dict like this
```
data = {
'a': [95, 93, 90],
'b': [643, 611, 610]
}
```
I want to iterate over the dict and fetch key and value from list of values for each item, something like this
```
{'a': 95, 'b': 643}
{'a': 93, 'b': 611}
{'a': 90, 'b': 610}
```
I have implemented the logic for this and it works fine, but when I look at the `temp_dict` created in the process, I see lots of unnecessary intermediate looping. The end result works just fine, but I think it can be improved a lot.
```
import timeit
data = {
'a': [95, 93, 90],
'b': [643, 611, 610]
}
def calculate(**kwargs):
temp_dict = {}
index = 0
len_values = list(kwargs.values())[0]
while index < len(len_values):
for k, v in kwargs.items():
temp_dict[k] = v[index]
index += 1
yield temp_dict
start_time = timeit.default_timer()
for k in (calculate(**data)):
print(k)
print(timeit.default_timer() - start_time)
```
How to do it more efficiently? | Try something like this -
```
>>> data = {
... 'a': [95, 93, 90],
... 'b': [643, 611, 610]
... }
>>> lst = list(data.items())
>>> lst1 = list(zip(*[i[1] for i in lst]))
>>> lst1
[(95, 643), (93, 611), (90, 610)]
>>> newlist = []
>>> for aval, bval in lst1:
... newlist.append({lst[0][0]:aval , lst[1][0]:bval})
...
>>> newlist
[{'a': 95, 'b': 643}, {'a': 93, 'b': 611}, {'a': 90, 'b': 610}]
```
When passing a list using \* as a parameter to a function, it will break the list into individual elements and pass them on to the function. Example - if we pass `[[1,2],[3,4]]` it would be passed as two different arguments - `[1,2]` and `[3,4]` - check this [here](http://www.python-course.eu/passing_arguments.php) (Section - \* in Function calls)
Example to explain this -
```
>>> lst = [[1,2,3],[4,5,6],[7,8,9]]
>>> def func(a, b, c):
... print(a)
... print(b)
... print(c)
...
>>> func(*lst)
[1, 2, 3]
[4, 5, 6]
[7, 8, 9]
```
zip - This function returns an iterator of tuples in Python 3 (a list in Python 2), where the i-th tuple contains the i-th element from each of the argument sequences or iterables.
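For a quick illustration of `zip` combined with `*` unpacking (my own toy values):

```python
rows = [[95, 93, 90], [643, 611, 610]]

# zip(*rows) unpacks rows into separate arguments and pairs up
# the i-th elements of each: effectively a transpose.
print(list(zip(*rows)))  # [(95, 643), (93, 611), (90, 610)]
```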
---
A bit more scalable model -
```
>>> lst = list(data.items())
>>> lst
[('a', [95, 93, 90]), ('b', [643, 611, 610])]
>>> lst1 = list(zip(*[i[1] for i in lst]))
>>> lst1
[(95, 643), (93, 611), (90, 610)]
>>> newlist = []
>>> for x in lst1:
... d = {}
... for i,y in enumerate(lst):
... d[y[0]] = x[i]
... newlist.append(d)
...
>>> newlist
[{'a': 95, 'b': 643}, {'a': 93, 'b': 611}, {'a': 90, 'b': 610}]
``` |
Python: issue when using vars() dictionary | 31,020,065 | 12 | 2015-06-24T07:13:46Z | 31,020,173 | 9 | 2015-06-24T07:19:04Z | [
"python",
"dictionary"
] | I have the following snippet:
```
a, b = 1, 2
params = ['a', 'b']
res = {p: vars()[p] for p in params}
```
Which gives me `KeyError: 'a'` whereas the following code works fine:
```
a, b = 1, 2
params = ['a', 'b']
res = {}
for p in params:
res[p] = vars()[p]
```
What's the difference here? | [`vars()`](https://docs.python.org/2/library/functions.html#vars) without any argument acts like `locals()` and since a dictionary comprehension has its own scope it has no variable named `a` or `b`.
You can use [`eval()`](https://docs.python.org/2/library/functions.html#eval) here. Without any argument it will execute in [LEGB](http://stackoverflow.com/questions/291978/short-description-of-python-scoping-rules) manner, or specify `globals()` dict explicitly to `eval`:
```
>>> res = {p: eval(p) for p in params}
>>> res
{'a': 1, 'b': 2}
```
---
But then again, the correct way would be to create a dictionary from the start if you want to access values using their names.
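A minimal sketch of that dictionary-first approach (the names are illustrative):

```python
# Keep the values in a dictionary from the start,
# instead of as loose local variables.
values = {'a': 1, 'b': 2}
params = ['a', 'b']

res = {p: values[p] for p in params}
print(res)  # {'a': 1, 'b': 2}
```

No `vars()` or `eval()` is needed, and it works the same inside or outside a comprehension.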
Python if not == vs if != | 31,026,754 | 125 | 2015-06-24T12:35:39Z | 31,026,976 | 168 | 2015-06-24T12:45:34Z | [
"python",
"if-statement",
"equality"
] | What is the difference between these two lines of code:
```
if not x == 'val':
```
and
```
if x != 'val':
```
Is one more efficient than the other?
Would it be better to use
```
if x == 'val':
pass
else:
``` | Using [`dis`](https://docs.python.org/2/library/dis.html) to look at the bytecode generated for the two versions:
**`not ==`**
```
4 0 LOAD_FAST 0 (foo)
3 LOAD_FAST 1 (bar)
6 COMPARE_OP 2 (==)
9 UNARY_NOT
10 RETURN_VALUE
```
**`!=`**
```
4 0 LOAD_FAST 0 (foo)
3 LOAD_FAST 1 (bar)
6 COMPARE_OP 3 (!=)
9 RETURN_VALUE
```
The latter has fewer operations, and is therefore likely to be slightly more efficient.
---
It was pointed out [in the comments](http://stackoverflow.com/questions/31026754/python-if-not-vs-if/31026976#comment50099324_31026976) (thanks, [@Quincunx](http://stackoverflow.com/users/1896169/quincunx)) that where you have `if foo != bar` vs. `if not foo == bar` the number of operations is exactly the same, it's just that the `COMPARE_OP` changes and `POP_JUMP_IF_TRUE` switches to `POP_JUMP_IF_FALSE`:
**`not ==`**:
```
2 0 LOAD_FAST 0 (foo)
3 LOAD_FAST 1 (bar)
6 COMPARE_OP 2 (==)
9 POP_JUMP_IF_TRUE 16
```
**`!=`**
```
2 0 LOAD_FAST 0 (foo)
3 LOAD_FAST 1 (bar)
6 COMPARE_OP 3 (!=)
9 POP_JUMP_IF_FALSE 16
```
In this case, unless there was a difference in the amount of work required for each comparison, it's unlikely you'd see any performance difference at all.
---
However, note that the two versions **won't always be logically identical**, as it will depend on the implementations of `__eq__` and `__ne__` for the objects in question. Per [the data model documentation](https://docs.python.org/2/reference/datamodel.html#object.__lt__):
> There are no implied relationships among the comparison operators. The
> truth of `x==y` does not imply that `x!=y` is false.
For example:
```
>>> class Dummy(object):
def __eq__(self, other):
return True
def __ne__(self, other):
return True
>>> not Dummy() == Dummy()
False
>>> Dummy() != Dummy()
True
```
---
Finally, and perhaps most importantly: in general, where the two *are* logically identical, **`x != y` is much more readable than `not x == y`**. |
Python if not == vs if != | 31,026,754 | 125 | 2015-06-24T12:35:39Z | 31,027,250 | 22 | 2015-06-24T12:58:09Z | [
"python",
"if-statement",
"equality"
] | What is the difference between these two lines of code:
```
if not x == 'val':
```
and
```
if x != 'val':
```
Is one more efficient than the other?
Would it be better to use
```
if x == 'val':
pass
else:
``` | @jonrsharpe has an excellent explanation of what's going on. I thought I'd just show the difference in time when running each of the 3 options 10,000,000 times (enough for a slight difference to show).
Code used:
```
def a(x):
if x != 'val':
pass
def b(x):
if not x == 'val':
pass
def c(x):
if x == 'val':
pass
else:
pass
x = 1
for i in range(10000000):
a(x)
b(x)
c(x)
```
And the cProfile profiler results:

So we can see that there is a very minute difference of ~0.7% between `if not x == 'val':` and `if x != 'val':`. Of these, `if x != 'val':` is the fastest.
However, most surprisingly, we can see that
```
if x == 'val':
pass
else:
```
is in fact the fastest, and beats `if x != 'val':` by ~0.3%. This isn't very readable, but I guess if you wanted a negligible performance improvement, one could go down this route. |
What's the difference between type hinting in 3.3 and 3.5? | 31,029,343 | 6 | 2015-06-24T14:27:46Z | 31,029,363 | 8 | 2015-06-24T14:28:37Z | [
"python",
"python-3.x",
"type-hinting",
"python-3.5"
] | I keep hearing how type hinting will be a new feature in 3.5, but that makes me wonder what the arrow indicator (->) was in 3.3?
You can see it in the [3.3 grammar spec here,](https://docs.python.org/3.3/reference/grammar.html) which I found from [this question asked 2 years ago.](http://stackoverflow.com/questions/14379753/what-does-mean-in-python-function-definitions)
I'm wondering, did type hinting exist before, but in a limited fashion, and 3.5 is bringing more major support? Or is my understanding of type hinting incorrect, and it actually means something else? | The `->` is used for *annotations*. [One of the use cases for annotations](https://www.python.org/dev/peps/pep-3107/#use-cases) is type hinting.
Python 3.0 added annotations, Python 3.5 builds on that feature by introducing type hinting, standardising the feature.
The relevant PEPs (Python Enhancement Proposals) are:
* [PEP 3107 *Function Annotations*](https://www.python.org/dev/peps/pep-3107/) -- added in Python 3.0
* [PEP 484 *Type Hints*](https://www.python.org/dev/peps/pep-0484/) - will be part of Python 3.5
Annotations are just *syntax*, type hinting is *specific functionality*.
You can use the syntax for anything you like, like inline documentation:
```
def documentation(self: "the instance", arg1: "first argument") -> "nothing is returned":
pass
```
All that the syntax does is attach that extra information you provided to the function object:
```
>>> def documentation(self: "the instance", arg1: "first argument") -> "nothing is returned":
... pass
...
>>> documentation.__annotations__
{'return': 'nothing is returned', 'arg1': 'first argument', 'self': 'the instance'}
```
The Type Hinting specification specifies how you could use those annotations to say something about what type each argument should be and what is returned. It is a specific application of annotations in that it defines how to interpret the annotations.
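As an illustration, the same annotation syntax used as PEP 484-style type hints might look like this (the function itself is made up for the example):

```python
def greeting(name: str, excited: bool = False) -> str:
    # The annotations record the intended types; Python itself
    # does not enforce them at runtime.
    message = "Hello, " + name
    return message + "!" if excited else message

print(greeting.__annotations__)
# {'name': <class 'str'>, 'excited': <class 'bool'>, 'return': <class 'str'>}
print(greeting("world"))  # Hello, world
```

A type checker can use those annotations to flag a call like `greeting(42)`, but the interpreter will happily attempt it.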
The Type Hinting PEP explicitly states it is not meant to be the only use of annotations:
> Note that this PEP still explicitly does NOT prevent other uses of annotations, nor does it require (or forbid) any particular processing of annotations, even when they conform to this specification. It simply enables better coordination, as PEP 333 did for web frameworks.
Type hinting remains entirely optional, it is not nor will it ever be required that you use it. Again quoting the PEP:
> While the proposed typing module will contain some building blocks for runtime type checking -- in particular the `get_type_hints()` function -- third party packages would have to be developed to implement specific runtime type checking functionality, for example using decorators or metaclasses. Using type hints for performance optimizations is left as an exercise for the reader.
>
> It should also be emphasized that **Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.**
Emphasis in the original.
You can install the [`typing` module](https://pypi.python.org/pypi/typing) to add type hinting to earlier Python 3.x versions. |
Plotting categorical data with pandas and matplotlib | 31,029,560 | 12 | 2015-06-24T14:37:16Z | 31,029,857 | 17 | 2015-06-24T14:50:53Z | [
"python",
"pandas"
] | I have a data frame with categorical data:
```
colour direction
1 red up
2 blue up
3 green down
4 red left
5 red right
6 yellow down
7 blue down
```
and now I want to generate some graphs, like pie charts and histograms, based on the categories. Is it possible without creating dummy numeric variables? Something like
```
df.plot(kind='hist')
``` | You can simply use `value_counts` on the series:
```
df.colour.value_counts().plot(kind='bar')
```
[](http://i.stack.imgur.com/ouoSE.png) |
Why is PyMongo 3 giving ServerSelectionTimeoutError? | 31,030,307 | 10 | 2015-06-24T15:11:20Z | 31,194,981 | 14 | 2015-07-02T21:31:45Z | [
"python",
"mongodb",
"uwsgi",
"mongolab",
"pymongo-3.x"
] | I'm using:
* Python 3.4.2
* PyMongo 3.0.2
* mongolab running mongod 2.6.9
* uWSGI 2.0.10
* CherryPy 3.7.0
* nginx 1.6.2
uWSGI start params:
```
--socket 127.0.0.1:8081 --daemonize --enable-threads --threads 2 --processes 2
```
I set up my MongoClient ONE time:
```
self.mongo_client = MongoClient('mongodb://user:[email protected]:port/mydb')
self.db = self.mongo_client['mydb']
```
I try and save a JSON dict to MongoDB:
```
result = self.db.jobs.insert_one(job_dict)
```
It works via a unit test that executes the same code path to mongodb. However when I execute via CherryPy and uWSGI using an HTTP POST, I get this:
```
pymongo.errors.ServerSelectionTimeoutError: No servers found yet
```
Why am I seeing this behavior when run via CherryPy and uWSGI? Is this perhaps the new thread model in PyMongo 3?
**Update:**
If I run without uWSGI and nginx by using the CherryPy built-in server, the `insert_one()` works.
**Update 1/25 4:53pm EST:**
After adding some debug in PyMongo, it appears that `topology._update_servers()` knows that the server\_type = 2 for server 'myserver-a.mongolab.com'. However `server_description.known_servers()` has the server\_type = 0 for server 'myserver.mongolab.com'
This leads to the following stack trace:
```
result = self.db.jobs.insert_one(job_dict)
File "/usr/local/lib/python3.4/site-packages/pymongo/collection.py", line 466, in insert_one
with self._socket_for_writes() as sock_info:
File "/usr/local/lib/python3.4/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.4/site-packages/pymongo/mongo_client.py", line 663, in _get_socket
server = self._get_topology().select_server(selector)
File "/usr/local/lib/python3.4/site-packages/pymongo/topology.py", line 121, in select_server
address))
File "/usr/local/lib/python3.4/site-packages/pymongo/topology.py", line 97, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: No servers found yet
``` | We're investigating this problem, tracked in [PYTHON-961](https://jira.mongodb.org/browse/PYTHON-961). You may be able to work around the issue by passing ***connect=False*** when creating instances of MongoClient. That defers background connection until the first database operation is attempted, avoiding what I suspect is a race condition between spin up of MongoClient's monitor thread and multiprocess forking. |
get the index of the last negative value in a 2d array per column | 31,031,355 | 21 | 2015-06-24T15:57:18Z | 31,085,940 | 8 | 2015-06-27T06:35:40Z | [
"python",
"arrays",
"numpy"
] | I'm trying to get the index of the last negative value of an array per column (in order to slice it after).
a simple working example on a 1d vector is :
```
import numpy as np
A = np.arange(10) - 5
A[2] = 2
print A # [-5 -4 2 -2 -1 0 1 2 3 4]
idx = np.max(np.where(A <= 0)[0])
print idx # 5
A[:idx] = 0
print A # [0 0 0 0 0 0 1 2 3 4]
```
Now I wanna do the same thing on each column of a 2D array :
```
A = np.arange(10) - 5
A[2] = 2
A2 = np.tile(A, 3).reshape((3, 10)) - np.array([0, 2, -1]).reshape((3, 1))
print A2
# [[-5 -4 2 -2 -1 0 1 2 3 4]
# [-7 -6 0 -4 -3 -2 -1 0 1 2]
# [-4 -3 3 -1 0 1 2 3 4 5]]
```
And I would like to obtain :
```
print A2
# [[0 0 0 0 0 0 1 2 3 4]
# [0 0 0 0 0 0 0 0 1 2]
# [0 0 0 0 0 1 2 3 4 5]]
```
but I can't manage to figure out how to translate the max/where statement to this 2D array... | Assuming that you are looking to set, in each row, all elements up to and including the last negative element to zero (as per the expected output listed in the question for a sample case), two approaches could be suggested here.
**Approach #1**
This one is based on [`np.cumsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html) to generate a mask of elements to be set to zeros as listed next -
```
# Get boolean mask with TRUEs for each row starting at the first element and
# ending at the last negative element
mask = (np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]
# Use mask to set all such TRUEs to zeros as per the expected output in OP
A2[mask] = 0
```
Sample run -
```
In [280]: A2 = np.random.randint(-4,10,(6,7)) # Random input 2D array
In [281]: A2
Out[281]:
array([[-2, 9, 8, -3, 2, 0, 5],
[-1, 9, 5, 1, -3, -3, -2],
[ 3, -3, 3, 5, 5, 2, 9],
[ 4, 6, -1, 6, 1, 2, 2],
[ 4, 4, 6, -3, 7, -3, -3],
[ 0, 2, -2, -3, 9, 4, 3]])
In [282]: A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0 # Use mask to set zeros
In [283]: A2
Out[283]:
array([[0, 0, 0, 0, 2, 0, 5],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 5, 5, 2, 9],
[0, 0, 0, 6, 1, 2, 2],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 9, 4, 3]])
```
**Approach #2**
This one starts with the idea of finding the last negative element indices from [`@tom10's answer`](http://stackoverflow.com/a/31084951/3293881) and develops into a mask finding method using [`broadcasting`](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to get us the desired output, similar to `approach #1`.
```
# Find last negative index for each row
last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
# Find the invalid indices (rows with no negative indices)
invalid_idx = A2[np.arange(A2.shape[0]),last_idx]>=0
# Set the indices for invalid ones to "-1"
last_idx[invalid_idx] = -1
# Boolean mask with each row starting with TRUE as the first element
# and ending at the last negative element
mask = np.arange(A2.shape[1]) < (last_idx[:,None] + 1)
# Set masked elements to zeros, for the desired output
A2[mask] = 0
```
---
Runtime tests -
Function defintions:
```
def broadcasting_based(A2):
last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
last_idx[A2[np.arange(A2.shape[0]),last_idx]>=0] = -1
A2[np.arange(A2.shape[1]) < (last_idx[:,None] + 1)] = 0
return A2
def cumsum_based(A2):
A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0
return A2
```
Runtimes:
```
In [379]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [380]: %timeit broadcasting_based(A2)
10 loops, best of 3: 106 ms per loop
In [381]: %timeit cumsum_based(A2c)
1 loops, best of 3: 167 ms per loop
```
Verify results -
```
In [384]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [385]: np.array_equal(broadcasting_based(A2),cumsum_based(A2c))
Out[385]: True
``` |
get the index of the last negative value in a 2d array per column | 31,031,355 | 21 | 2015-06-24T15:57:18Z | 31,088,753 | 12 | 2015-06-27T12:27:29Z | [
"python",
"arrays",
"numpy"
] | I'm trying to get the index of the last negative value of an array per column (in order to slice it after).
a simple working example on a 1d vector is :
```
import numpy as np
A = np.arange(10) - 5
A[2] = 2
print A # [-5 -4 2 -2 -1 0 1 2 3 4]
idx = np.max(np.where(A <= 0)[0])
print idx # 5
A[:idx] = 0
print A # [0 0 0 0 0 0 1 2 3 4]
```
Now I wanna do the same thing on each column of a 2D array :
```
A = np.arange(10) - 5
A[2] = 2
A2 = np.tile(A, 3).reshape((3, 10)) - np.array([0, 2, -1]).reshape((3, 1))
print A2
# [[-5 -4 2 -2 -1 0 1 2 3 4]
# [-7 -6 0 -4 -3 -2 -1 0 1 2]
# [-4 -3 3 -1 0 1 2 3 4 5]]
```
And I would like to obtain :
```
print A2
# [[0 0 0 0 0 0 1 2 3 4]
# [0 0 0 0 0 0 0 0 1 2]
# [0 0 0 0 0 1 2 3 4 5]]
```
but I can't manage to figure out how to translate the max/where statement to this 2D array... | You already have good answers, but I wanted to propose a potentially quicker variation using the function `np.maximum.accumulate`. Since your method for a 1D array uses `max`/`where`, you may also find this approach quite intuitive. (*Edit: quicker Cython implementation added below*).
The overall approach is very similar to the others; the mask is created with:
```
np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]
```
This line of code does the following:
* `(A2 < 0)` creates a Boolean array, indicating whether a value is negative or not. The index `[:, ::-1]` flips this left-to-right.
* `np.maximum.accumulate` is used to return the cumulative maximum along each row (i.e. `axis=1`). For example `[False, True, False]` would become `[False, True, True]`.
* The final indexing operation `[:, ::-1]` flips this new Boolean array left-to-right.
Then all that's left to do is to use the Boolean array as a mask to set the `True` values to zero.
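To see what that cumulative maximum does to a single Boolean row, here is a NumPy-free sketch of the same idea using `itertools.accumulate` (applied to the first row of the question's sample data):

```python
from itertools import accumulate

row = [-5, -4, 2, -2, -1, 0, 1, 2, 3, 4]

# Boolean flags, reversed: True where the value is negative.
neg = [v < 0 for v in row][::-1]
# Cumulative maximum: once a True is seen, everything after stays True.
mask = list(accumulate(neg, max))[::-1]

print(mask)
# [True, True, True, True, True, False, False, False, False, False]
print([0 if m else v for v, m in zip(row, mask)])
# [0, 0, 0, 0, 0, 0, 1, 2, 3, 4]
```

`np.maximum.accumulate` does exactly this, but along every row of the 2D array at once.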
---
Borrowing the timing methodology and two functions from [@Divakar's answer](http://stackoverflow.com/a/31085940/3923281), here are the benchmarks for my proposed method:
```
# method using np.maximum.accumulate
def accumulate_based(A2):
A2[np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]] = 0
return A2
# large sample array
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
A2c2 = A2.copy()
```
The timings are:
```
In [47]: %timeit broadcasting_based(A2)
10 loops, best of 3: 61.7 ms per loop
In [48]: %timeit cumsum_based(A2c)
10 loops, best of 3: 127 ms per loop
In [49]: %timeit accumulate_based(A2c2) # quickest
10 loops, best of 3: 43.2 ms per loop
```
So using `np.maximum.accumulate` can be as much as 30% faster than the next fastest solution for arrays of this size and shape.
---
As [@tom10 points out](http://stackoverflow.com/a/31084951/3923281), each NumPy operation processes arrays in their entirety, which can be inefficient when multiple operations are needed to get a result. An iterative approach which works through the array just once may fare better.
Below is a naive function written in Cython which can be more than twice as fast as a pure NumPy approach.
This function may be able to be sped up further using [memory views](http://docs.cython.org/src/userguide/memoryviews.html).
```
cimport cython
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def cython_based(np.ndarray[long, ndim=2, mode="c"] array):
cdef int rows, cols, i, j, seen_neg
rows = array.shape[0]
cols = array.shape[1]
for i in range(rows):
seen_neg = 0
for j in range(cols-1, -1, -1):
if seen_neg or array[i, j] < 0:
seen_neg = 1
array[i, j] = 0
return array
```
This function works backwards through each row and starts setting values to zero once it has seen a negative value.
*Testing it works*:
```
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
np.array_equal(accumulate_based(A2), cython_based(A2c))
# True
```
*Comparing the performance of the function*:
```
In [52]: %timeit accumulate_based(A2)
10 loops, best of 3: 49.8 ms per loop
In [53]: %timeit cython_based(A2c)
100 loops, best of 3: 18.6 ms per loop
``` |
Can you upload to S3 using a stream rather than a local file? | 31,031,463 | 9 | 2015-06-24T16:02:29Z | 31,054,669 | 7 | 2015-06-25T15:30:21Z | [
"python",
"csv",
"amazon-s3",
"boto",
"buffering"
] | I need to create a CSV and upload it to an S3 bucket. Since I'm creating the file on the fly, it would be better if I could write it directly to S3 bucket as it is being created rather than writing the whole file locally, and then uploading the file at the end.
Is there a way to do this? My project is in Python and I'm fairly new to the language. Here is what I tried so far:
```
import csv
import io
import boto
from boto.s3.key import Key
conn = boto.connect_s3()
bucket = conn.get_bucket('dev-vs')
k = Key(bucket)
k.key = 'foo/foobar'
fieldnames = ['first_name', 'last_name']
writer = csv.DictWriter(io.StringIO(), fieldnames=fieldnames)
k.set_contents_from_stream(writer.writeheader())
```
I received this error: BotoClientError: s3 does not support chunked transfer
**UPDATE: I found a way to write directly to S3, but I can't find a way to clear the buffer without actually deleting the lines I already wrote. So, for example:**
```
conn = boto.connect_s3()
bucket = conn.get_bucket('dev-vs')
k = Key(bucket)
k.key = 'foo/foobar'
testDict = [{
"fieldA": "8",
"fieldB": None,
"fieldC": "888888888888"},
{
"fieldA": "9",
"fieldB": None,
"fieldC": "99999999999"}]
f = io.StringIO()
fieldnames = ['fieldA', 'fieldB', 'fieldC']
writer = csv.DictWriter(f, fieldnames=fieldnames)
writer.writeheader()
k.set_contents_from_string(f.getvalue())
for row in testDict:
    writer.writerow(row)
    k.set_contents_from_string(f.getvalue())
f.close()
```
Writes 3 lines to the file, however I'm unable to release memory to write a big file. If I add:
```
f.seek(0)
f.truncate(0)
```
to the loop, then only the last line of the file is written. Is there any way to release resources without deleting lines from the file? | I did find a solution to my question, which I will post here in case anyone else is interested. Since you can't truly stream to S3, I decided to upload the data as parts of a multipart upload. There is a package that turns a streaming write into a multipart upload, which I used: [Smart Open](https://github.com/piskvorky/smart_open).
```
import smart_open
import io
import csv
testDict = [{
"fieldA": "8",
"fieldB": None,
"fieldC": "888888888888"},
{
"fieldA": "9",
"fieldB": None,
"fieldC": "99999999999"}]
fieldnames = ['fieldA', 'fieldB', 'fieldC']
f = io.StringIO()
with smart_open.smart_open('s3://dev-test/bar/foo.csv', 'wb') as fout:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    fout.write(f.getvalue())
    for row in testDict:
        f.seek(0)
        f.truncate(0)
        writer.writerow(row)
        fout.write(f.getvalue())
f.close()
``` |
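To see why the `seek(0)`/`truncate(0)` pattern is safe here: once a chunk has been handed to the uploader, clearing the buffer cannot touch it, and each subsequent `getvalue()` holds only the newest row. A small self-contained demonstration with no S3 involved (the upload call is replaced by appending to a list):

```python
import csv
import io

rows = [{"fieldA": "8"}, {"fieldA": "9"}]

f = io.StringIO()
writer = csv.DictWriter(f, fieldnames=["fieldA"])

parts = []
writer.writeheader()
parts.append(f.getvalue())          # header chunk
for row in rows:
    f.seek(0)
    f.truncate(0)                   # clear the buffer, keep the writer
    writer.writerow(row)
    parts.append(f.getvalue())      # this chunk holds only the newest row

print("".join(parts))               # the reassembled CSV is complete
```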
Pandas: Find max value in each row of a column and identify corresponding values in another column | 31,032,896 | 3 | 2015-06-24T17:12:44Z | 31,032,989 | 7 | 2015-06-24T17:17:42Z | [
"python",
"pandas"
] | I have two pandas DataFrame columns whose values are lists of tuples, such as:
```
df['A'].values
(1.55, 2.07, 2.20, 2.23)
(0.67, 1.10, 1.73, 1.35)
(2.92, 1.98, 2.30, 2.66)
```
and
```
df['B'].values
(1.55, 0.0086, 0.078, 0.12)
(0.672, 0.142, 0.0166, 0.0173)
(1.97, 0.0094, 0.1648, 0.016)
```
I would like to pick the largest value for each row of `df['A']` and find the value in the corresponding position of `df['B']` to generate a new column such as:
```
> df['C']
0.12
0.0166
1.97
``` | ```
import pandas as pd
import numpy as np
df_dict = dict(A=[(1.55, 2.07, 2.20, 2.23), (0.67, 1.10, 1.73, 1.35), (2.92, 1.98, 2.30, 2.66)],
B=[(1.55, 0.0086, 0.078, 0.12), (0.672, 0.142, 0.0166, 0.0173), (1.97, 0.0094, 0.1648, 0.016)])
df = pd.DataFrame(df_dict)
Out[180]:
A B
0 (1.55, 2.07, 2.2, 2.23) (1.55, 0.0086, 0.078, 0.12)
1 (0.67, 1.1, 1.73, 1.35) (0.672, 0.142, 0.0166, 0.0173)
2 (2.92, 1.98, 2.3, 2.66) (1.97, 0.0094, 0.1648, 0.016)
def apply_func(row):
    return row.B[np.array(row.A).argmax()]
df['C'] = df.apply(apply_func, axis=1)
Out[182]:
A B C
0 (1.55, 2.07, 2.2, 2.23) (1.55, 0.0086, 0.078, 0.12) 0.1200
1 (0.67, 1.1, 1.73, 1.35) (0.672, 0.142, 0.0166, 0.0173) 0.0166
2 (2.92, 1.98, 2.3, 2.66) (1.97, 0.0094, 0.1648, 0.016) 1.9700
``` |
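An equivalent without `DataFrame.apply`, which may be faster for large frames since it avoids constructing a Series per row (a sketch, not benchmarked here):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": [(1.55, 2.07, 2.20, 2.23), (0.67, 1.10, 1.73, 1.35), (2.92, 1.98, 2.30, 2.66)],
    "B": [(1.55, 0.0086, 0.078, 0.12), (0.672, 0.142, 0.0166, 0.0173), (1.97, 0.0094, 0.1648, 0.016)],
})

# pick from B at the position where A is largest, row by row
df["C"] = [b[np.argmax(a)] for a, b in zip(df["A"], df["B"])]
print(df["C"].tolist())  # [0.12, 0.0166, 1.97]
```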
pandas get column average | 31,037,298 | 5 | 2015-06-24T21:22:11Z | 31,037,360 | 15 | 2015-06-24T21:26:16Z | [
"python",
"pandas"
] | I can't get the average or mean of a column in pandas. I have a dataframe. Neither of the things I tried below gives me the average of the `weight` column.
```
>>> allDF
ID birthyear weight
0 619040 1962 0.1231231
1 600161 1963 0.981742
2 25602033 1963 1.3123124
3 624870 1987 0.94212
```
The following returns several values, not one:
```
allDF[['weight']].mean(axis=1)
```
So does this:
```
allDF.groupby('weight').mean()
``` | If you only want the mean of the `weight` column, select the column (which is a Series) and call `.mean()`:
```
In [479]: df
Out[479]:
ID birthyear weight
0 619040 1962 0.123123
1 600161 1963 0.981742
2 25602033 1963 1.312312
3 624870 1987 0.942120
In [480]: df["weight"].mean()
Out[480]: 0.83982437500000007
``` |
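The `axis` argument explains the confusing result in the question: `axis=0` (the default) averages down each column, while `axis=1` averages across each row, which is why `allDF[['weight']].mean(axis=1)` returned one value per row. A quick check:

```python
import pandas as pd

df = pd.DataFrame({"weight": [0.1231231, 0.981742, 1.3123124, 0.94212]})

col_mean = df["weight"].mean()           # one number: the column average
row_means = df[["weight"]].mean(axis=1)  # one value per row, not what we want

print(round(col_mean, 9))  # 0.839824375
print(len(row_means))      # 4
```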
Why is "1.real" a syntax error but "1 .real" valid in Python? | 31,037,609 | 21 | 2015-06-24T21:42:18Z | 31,037,690 | 11 | 2015-06-24T21:48:13Z | [
"python"
] | So I saw [these two](https://twitter.com/bmispelon/status/613816239622909953) [questions on twitter](https://twitter.com/bmispelon/status/613818391967715329). How is `1.real` a syntax error but `1 .real` is not?
```
>>> 1.real
File "<stdin>", line 1
1.real
^
SyntaxError: invalid syntax
>>> 1 .real
1
>>> 1. real
File "<stdin>", line 1
1. real
^
SyntaxError: invalid syntax
>>> 1 . real
1
>>> 1..real
1.0
>>> 1 ..real
File "<stdin>", line 1
1 ..real
^
SyntaxError: invalid syntax
>>> 1.. real
1.0
>>> 1 .. real
File "<stdin>", line 1
1 .. real
^
SyntaxError: invalid syntax
``` | With `1.real` Python is looking for a floating-point numeric literal like `1.0` and you can't have an `r` in a float. With `1 .real` Python has taken `1` as an integer and is doing the attribute lookup on that.
It's important to note that the tokenizer greedily consumes `1.` as a float literal, so the syntax error is decided before the parser ever gets a chance to interpret the `.` as an attribute lookup.
Why is "1.real" a syntax error but "1 .real" valid in Python? | 31,037,609 | 21 | 2015-06-24T21:42:18Z | 31,037,917 | 42 | 2015-06-24T22:02:42Z | [
"python"
] | So I saw [these two](https://twitter.com/bmispelon/status/613816239622909953) [questions on twitter](https://twitter.com/bmispelon/status/613818391967715329). How is `1.real` a syntax error but `1 .real` is not?
```
>>> 1.real
File "<stdin>", line 1
1.real
^
SyntaxError: invalid syntax
>>> 1 .real
1
>>> 1. real
File "<stdin>", line 1
1. real
^
SyntaxError: invalid syntax
>>> 1 . real
1
>>> 1..real
1.0
>>> 1 ..real
File "<stdin>", line 1
1 ..real
^
SyntaxError: invalid syntax
>>> 1.. real
1.0
>>> 1 .. real
File "<stdin>", line 1
1 .. real
^
SyntaxError: invalid syntax
``` | I guess that the `.` is greedily parsed as part of a number, if possible, making it the `float` `1.`, instead of being part of the method call.
Spaces are not allowed around the decimal point, but you can have spaces before and after the `.` in a method call. If the number is followed by a space, the parse of the number is terminated, so it's unambiguous.
Let's look at the different cases and how they are parsed:
```
>>> 1.real # parsed as (1.)real -> missing '.'
>>> 1 .real # parsed as (1).real -> okay
>>> 1. real # parsed as (1.)real -> missing '.'
>>> 1 . real # parsed as (1).real -> okay
>>> 1..real # parsed as (1.).real -> okay
>>> 1 ..real # parsed as (1)..real -> one '.' too much
>>> 1.. real # parsed as (1.).real -> okay
>>> 1 .. real # parsed as (1)..real -> one '.' too much
``` |
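In Python 3, the standard-library `tokenize` module can make this greedy behaviour visible directly (a small illustration added here, not part of the original answer):

```python
import io
import tokenize

def lex(src):
    """Return just the NUMBER/NAME/OP token strings for a line of source."""
    keep = {tokenize.NUMBER, tokenize.NAME, tokenize.OP}
    tokens = tokenize.generate_tokens(io.StringIO(src + "\n").readline)
    return [tok.string for tok in tokens if tok.type in keep]

print(lex("1.real"))   # ['1.', 'real']  the dot was swallowed by the float
print(lex("1 .real"))  # ['1', '.', 'real']
print(lex("1..real"))  # ['1.', '.', 'real']
```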
Error with simple Python code | 31,038,208 | 3 | 2015-06-24T22:27:09Z | 31,038,288 | 7 | 2015-06-24T22:33:13Z | [
"python"
] | I have a simple python (version 2.7.3) code that has an output that I can't figure out. The code prompts the user for a score (and will continue to do so if the input is anything other than a number from 0 to 1), determines the letter grade, and then exits. The code is as follows:
```
def calc_grade():
    try:
        score = float(raw_input("Enter a score: "))
        if score > 1.0:
            print "Error: Score cannot be greater than 1."
            calc_grade()
    except:
        print "Error: Score must be a numeric value from 0 to 1."
        calc_grade()
    print "\nthe score is: %s" % (score)
    if score >= 0.9:
        print "A"
    elif score >= 0.8:
        print "B"
    elif score >= 0.7:
        print "C"
    elif score >= 0.6:
        print "D"
    else:
        print "F"
    return 0

calc_grade()
```
If I run this script and try the inputs: 1.5, h, 0.8, then I get the following output:
```
Enter a score: 1.5
Error: Score cannot be greater than 1.
Enter a score: h
Error: Score must be a numeric value from 0 to 1.
Enter a score: 0.8
the score is: 0.8
B
Error: Score must be a numeric value from 0 to 1.
Enter a score: 0.7
the score is: 0.7
C
the score is: 1.5
A
```
As you can see, after entering a valid value (0.8), the script prints out the correct grade (B), but then script doesn't end as I expect it to. Instead, it prints out the error message for a non-numeric value, and then prompts the user to enter a score again. If I enter another valid score (0.7 in this case), then script prints out the correct grade (C), and then prints out the first incorrect input (1.5) along with its grade (A).
I can't, for the life of me, figure out what's causing this, "functionality". Any suggestions? | On any error, you call `calc_grade` recursively. Each outer call then resumes after the inner call returns and runs the grading code with its own stale `score`, which is exactly the extra output you saw. Instead, you should handle faulty input iteratively:
```
def calc_grade():
    score = None
    while score is None:
        try:
            score = float(raw_input("Enter a score: "))
            if score > 1.0:
                print "Error: Score cannot be greater than 1."
                score = None
        except:
            print "Error: Score must be a numeric value from 0 to 1."
    # If we reached here, score is valid,
    # continue with the rest of the code
```
Python multi-line with statement | 31,039,022 | 11 | 2015-06-24T23:49:35Z | 31,039,332 | 10 | 2015-06-25T00:29:57Z | [
"python",
"python-3.x",
"multiline",
"with-statement"
] | What is a clean way to create a multi-line `with` in python? I want to open up several files inside a single `with`, but it's far enough to the right that I want it on multiple lines. Like this:
```
class Dummy:
    def __enter__(self): pass
    def __exit__(self, type, value, traceback): pass

with Dummy() as a, Dummy() as b,
     Dummy() as c:
    pass
```
Unfortunately, that is a `SyntaxError`. So I tried this:
```
with (Dummy() as a, Dummy() as b,
      Dummy() as c):
    pass
```
Also a syntax error. However, this worked:
```
with Dummy() as a, Dummy() as b,\
     Dummy() as c:
    pass
```
But what if I wanted to place a comment? This does not work:
```
with Dummy() as a, Dummy() as b,\
     # my comment explaining why I wanted Dummy() as c\
     Dummy() as c:
    pass
```
Nor does any obvious variation on the placement of the `\`s.
Is there a clean way to create a multi-line `with` statement that allows comments inside it? | Given that you've tagged this Python 3, if you need to intersperse comments with your context managers, I would use a [`contextlib.ExitStack`](https://docs.python.org/3/library/contextlib.html#contextlib.ExitStack):
```
with ExitStack() as stack:
    a = stack.enter_context(Dummy())  # Relevant comment
    b = stack.enter_context(Dummy())  # Comment about b
    c = stack.enter_context(Dummy())  # Further information
```
This is equivalent to
```
with Dummy() as a, Dummy() as b, Dummy() as c:
```
This has the benefit that you can generate your context managers in a loop instead of needing to separately list each one. The documentation gives the example that if you want to open a bunch of files, and you have the filenames in a list, you can do
```
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
```
If your context managers take so much screen space that you want to put comments between them, you probably have enough to want to use some sort of loop. |
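A runnable sketch (using a toy context manager in place of `Dummy`) showing that `ExitStack` unwinds in reverse order of entry, exactly like the equivalent nested `with`:

```python
from contextlib import ExitStack, contextmanager

events = []

@contextmanager
def dummy(name):
    events.append(("enter", name))
    yield name
    events.append(("exit", name))

with ExitStack() as stack:
    a = stack.enter_context(dummy("a"))  # room for a comment on each line
    b = stack.enter_context(dummy("b"))
    c = stack.enter_context(dummy("c"))

# contexts exit in reverse order of entry, like a nested `with`
print(events)
# [('enter', 'a'), ('enter', 'b'), ('enter', 'c'), ('exit', 'c'), ('exit', 'b'), ('exit', 'a')]
```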
Insert element in Python list after every nth element | 31,040,525 | 11 | 2015-06-25T03:05:20Z | 31,040,944 | 7 | 2015-06-25T03:56:38Z | [
"python",
"list",
"indexing",
"insert",
"slice"
] | Say I have a Python list like this:
```
letters = ['a','b','c','d','e','f','g','h','i','j']
```
I want to insert an 'x' after every nth element, let's say three characters in that list. The result should be:
```
letters = ['a','b','c','x','d','e','f','x','g','h','i','x','j']
```
I understand that I can do that with looping and inserting. What I'm actually looking for is a Pythonish-way, a one-liner maybe? | I've got two one liners.
Given:
```
>>> letters = ['a','b','c','d','e','f','g','h','i','j']
```
1. Use `enumerate` to get index, add `'x'` every 3rd letter, *eg*: `mod(n, 3) == 2`, then concatenate into string and `list()` it.
```
>>> list(''.join(l + 'x' * (n % 3 == 2) for n, l in enumerate(letters)))
['a', 'b', 'c', 'x', 'd', 'e', 'f', 'x', 'g', 'h', 'i', 'x', 'j']
```
2. Use nested comprehensions to flatten a list of lists(a), sliced in groups of 3 with `'x'` added if less than 3 from end of list.
```
>>> [x for y in (letters[i:i+3] + ['x'] * (i < len(letters) - 2) for
                 i in xrange(0, len(letters), 3)) for x in y]
['a', 'b', 'c', 'x', 'd', 'e', 'f', 'x', 'g', 'h', 'i', 'x', 'j']
```
(a) `[item for subgroup in groups for item in subgroup]` flattens a jagged list of lists. |
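For comparison, a plain-loop equivalent (written for Python 3 here) that matches the second one-liner's behaviour of never appending a trailing `'x'`; it is longer but arguably easier to read:

```python
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']

result = []
for i, letter in enumerate(letters):
    result.append(letter)
    # insert 'x' after every 3rd element, but not after the final one
    if i % 3 == 2 and i != len(letters) - 1:
        result.append('x')

print(result)
# ['a', 'b', 'c', 'x', 'd', 'e', 'f', 'x', 'g', 'h', 'i', 'x', 'j']
```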