| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Check whether a list starts with the elements of another list | 32,149,202 | 3 | 2015-08-21T21:02:35Z | 32,149,245 | 9 | 2015-08-21T21:05:47Z | [
"python",
"list"
] | What is the easiest (most Pythonic) way to check whether the beginning of a list consists of exactly the elements of another list? Consider the following examples:
```
li = [1,4,5,3,2,8]
#Should return true
startsWithSublist(li, [1,4,5])
#Should return false
startsWithSublist(li, [1,4,3])
#Should also return false, although it is contained in the list
startsWithSublist(li, [4,5,3])
```
Sure, I could iterate over the lists, but I guess there is an easier way. Neither list will ever contain the same element twice, and the second list will always be shorter than or equal in length to the first. The length of the list to match is variable.
How to do this in Python? | Use list slicing:
```
>>> li = [1,4,5,3,2,8]
>>> sublist = [1,4,5]
>>> li[:len(sublist)] == sublist
True
``` |
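Wrapped up as a function matching the question's examples (a sketch; the name `starts_with_sublist` is just illustrative):

```python
def starts_with_sublist(lst, prefix):
    # a list starts with a prefix iff its first len(prefix) items equal it
    return lst[:len(prefix)] == prefix

li = [1, 4, 5, 3, 2, 8]
print(starts_with_sublist(li, [1, 4, 5]))  # True
print(starts_with_sublist(li, [1, 4, 3]))  # False
print(starts_with_sublist(li, [4, 5, 3]))  # False
```

An empty prefix is handled naturally: `lst[:0] == []` is always true.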
WxPython: PyInstaller fails with No module named _core_ | 32,154,849 | 7 | 2015-08-22T10:05:36Z | 32,334,301 | 7 | 2015-09-01T14:30:34Z | [
"python",
"ubuntu",
"wxpython",
"ubuntu-14.04",
"pyinstaller"
] | I am converting my wxpython (3.0.2.0) application to binaries using PyInstaller. The binaries work fine when built and executed on Ubuntu 12.04. However if I build on Ubuntu 14.04, I get the following error. (The application works when I launch the python script directly i.e. python my\_application.py even in Ubuntu 14.04). Any idea what could be missing when packaging the application using PyInstaller?
```
$ ./my_application
Traceback (most recent call last):
File "<string>", line 22, in <module>
File "/usr/local/lib/python2.7/dist-packages/PyInstaller/loader/pyi_importers.py", line 270, in load_module
exec(bytecode, module.__dict__)
File "/local/workspace/my_application/out00-PYZ.pyz/wx", line 45, in <module>
File "/usr/local/lib/python2.7/dist-packages/PyInstaller/loader/pyi_importers.py", line 270, in load_module
exec(bytecode, module.__dict__)
File "/local/workspace/my_application/out00-PYZ.pyz/wx._core", line 4, in <module>
ImportError: No module named _core_
```
My PyInstaller spec file looks like this:
```
...
pyz = PYZ(a.pure)
exe = EXE(pyz,
a.scripts,
exclude_binaries=True,
name='my_application',
debug=False,
onefile = True,
strip=None,
upx=True,
console=True )
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=None,
upx=True,
name='my_application')
``` | Fundamentally the problem is with the PyInstaller version - you need to be on the `develop` version. This issue has been seen and is documented on a [PyInstaller Github issue](https://github.com/pyinstaller/pyinstaller/issues/1300).
To install the latest version and rectify - at the command prompt type:
```
$ pip install git+https://github.com/pyinstaller/pyinstaller
```
This installs the latest version of PyInstaller directly from GitHub (the [`develop` branch](https://github.com/pyinstaller/pyinstaller/tree/develop)). Until recently, PyInstaller had a separate `python3` branch, but this has been [merged back into the `develop` branch](https://github.com/pyinstaller/pyinstaller/issues/1417). If you need to use Python 3.x, you will need [this branch](https://github.com/pyinstaller/pyinstaller/tree/develop) - get it by appending `@develop` to the `pip install` command.
The above method relies on you having `git` installed on your system to get the pyinstaller code (pretty likely for a developer these days, I guess). If not, you can either
1. install git using `apt-get install git` (you might need to `sudo` that)
2. download the pyinstaller-develop zip file ([here](https://github.com/pyinstaller/pyinstaller/archive/develop.zip)) and install manually. Note as per the [wiki as of Oct 2014,](https://github.com/pyinstaller/pyinstaller/wiki) this should support 2.7 and 3.x.
Personally - I much prefer option 1 as you avoid all the potential problems of building from a zipped source tree yourself.
### Testing
I tested this on Ubuntu 14.04, 64 bit, wxpython 3.0.2.0 with python 2.7.6, using the simple ["Hello world" app](http://wiki.wxpython.org/Getting%20Started#A_First_Application:_.22Hello.2C_World.22) from the wxPython webpage. The OP's issue reproduced exactly before installing pyinstaller develop version. After installing the develop version the app built correctly and ran as an executable.
---
**Documentation** of using pip with git - <https://pip.pypa.io/en/latest/reference/pip_install.html#git>
It is not clear from your question which versions of PyInstaller you are using on your Ubuntu 12.04 install vs the 14.04 version. It seems that the version you have on 12.04 does not exhibit the same issue as the standard version installed on 14.04. |
How to convert string like '001100' to numpy.array([0,0,1,1,0,0]) quickly? | 32,155,011 | 4 | 2015-08-22T10:23:55Z | 32,155,109 | 7 | 2015-08-22T10:35:14Z | [
"python",
"numpy",
"types",
"format",
"type-conversion"
] | I have a string consisting of 0s and 1s, like `'00101'`, and I want to convert it to the numpy array `numpy.array([0,0,1,0,1])`.
I am using `for` loop like:
```
import numpy as np
X = np.zeros((1,5),int)
S = '00101'
for i in xrange(5):
X[0][i] = int(S[i])
```
But since I have many strings and the length of each string is 1024, this way is very slow. Is there any better way to do this? | map should be a bit faster than a list comp:
```
import numpy as np
arr = np.array(map(int,'00101'))
```
Some timings show it is on a string of 1024 chars:
```
In [12]: timeit np.array([int(c) for c in s])
1000 loops, best of 3: 422 µs per loop
In [13]: timeit np.array(map(int,s))
1000 loops, best of 3: 389 µs per loop
```
Just calling `list` on `s` and passing `dtype=int` is faster:
```
In [20]: timeit np.array(list(s), dtype=int)
1000 loops, best of 3: 329 µs per loop
```
Using [fromiter](http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromiter.html) and passing `dtype=int` is faster again:
```
In [21]: timeit np.fromiter(s,dtype=int)
1000 loops, best of 3: 289 µs per loop
```
Borrowing from this [answer](http://stackoverflow.com/a/12262022/2141635), using fromstring and uint8 as the dtype is the fastest:
```
In [54]: timeit np.fromstring(s, 'int8') - 48
100000 loops, best of 3: 4.54 µs per loop
```
Even rebinding the name and changing the dtype is still by far the fastest:
```
In [71]: %%timeit
....: arr = np.fromstring(s, 'int8') - 48
....: arr = arr.astype(int)
....:
100000 loops, best of 3: 6.23 µs per loop
```
Even considerably faster than Ashwini's join:
```
In [76]: timeit np.fromstring(' '.join(s), sep=' ', dtype=int)
10000 loops, best of 3: 62.6 µs per loop
```
As @unutbu commented, `np.fromstring(s, 'int8') - 48` is not limited to ones and zeros but will work for all strings composed of ASCII digits. |
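Note these timings are from Python 2, where `map` returns a list; on Python 3, `np.array(map(int, s))` would produce a useless 0-d object array, so you would wrap it in `list` or use one of the other approaches. A Python 3 sketch of the same fast byte-level trick, using `np.frombuffer` since `np.fromstring` is deprecated there:

```python
import numpy as np

s = '00101'
# frombuffer on the encoded bytes gives the ASCII-code trick:
# '0' is 48, '1' is 49, so subtracting 48 yields the digits
arr = np.frombuffer(s.encode('ascii'), dtype=np.int8) - 48
print(arr)  # [0 0 1 0 1]
```

The buffer view itself is read-only, but the subtraction already produces a fresh writable array.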
Python Bokeh: remove toolbar from chart | 32,158,939 | 8 | 2015-08-22T17:35:38Z | 32,679,286 | 11 | 2015-09-20T12:13:55Z | [
"python",
"bokeh"
] | I don't seem to be able to remove the toolbar from a bokeh Bar chart. Despite setting the *tools* argument to *None* (or *False* or *''*) I always end up with the bokeh logo and a grey line, e.g. with this code:
```
from bokeh.charts import Bar, output_file, show
# prepare some data
data = {"y": [6, 7, 2, 4, 5], "z": [1, 5, 12, 4, 2]}
# output to static HTML file
output_file("bar.html")
# create a new bar chart with a title and axis labels
p = Bar(data, cat=['C1', 'C2', 'C3', 'D1', 'D2'], title="Bar example",
xlabel='categories', ylabel='values', width=400, height=400,
tools=None)
# show the results
show(p)
```
However, when I try the same with a bokeh *plot*, it works perfectly fine and the toolbar is gone, e.g. with this code:
```
from bokeh.plotting import figure, output_file, show
output_file("line.html")
p = figure(plot_width=400, plot_height=400, toolbar_location=None)
# add a line renderer
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)
show(p)
```
Does anyone know what I'm doing wrong? | If you want to remove the logo and the toolbar you can do:
```
p.logo = None
p.toolbar_location = None
```
Hope this resolves your problem |
Django - CSS stops working when I change urls | 32,160,561 | 5 | 2015-08-22T20:41:01Z | 32,160,637 | 7 | 2015-08-22T20:49:14Z | [
"python",
"css",
"django",
"url"
] | So I ran into a problem on my website where I then created two separate html pages. I then edited the urls.py so the urls would be different for the 2 pages but the css stops working if I do this. My code is below and I will explain more thoroughly after.
part of my head.html
```
<!-- Bootstrap core CSS -->
<link href="../../static/textchange/index.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="../../static/textchange/jumbotron.css" rel="stylesheet">
<!-- Just for debugging purposes. Don't actually copy these 2 lines! -->
<!--[if lt IE 9]><script src="../../assets/js/ie8-responsive-file-warning.js"></script><![endif]-->
<script src="../../static/textchange/index.js"></script>
```
How I include head on each html page
```
{% include "textchange/head.html" %}
```
The two urls causing problems
```
url(r'^results/(?P<uisbn>(\w)+)/(?P<uuser>(\w)+)$', views.contactpost, name="contactpost"),
url(r'^results/(?P<uisbn>(\w)+)/(?P<uuser>(\w)+)$', views.contactwish, name="contactwish"),
```
So the above is how my urls are setup at the moment and I realize this will only ever go to contactpost at the moment. When I change the urls like this:
```
url(r'^results/(?P<uisbn>(\w)+)/post/(?P<uuser>(\w)+)$', views.contactpost, name="contactpost"),
url(r'^results/(?P<uisbn>(\w)+)/wish/(?P<uuser>(\w)+)$', views.contactwish, name="contactwish"),
```
The CSS stops working for both pages.
Initially before I had 2 pages the url looked like this:
```
url(r'^results/(?P<uisbn>(\w)+)/(?P<uuser>(\w)+)$', views.contact, name="contact"),
```
Views.py
```
@login_required
def contactpost(request, uuser, uisbn):
ltextbook = Textbook.objects.filter(isbn = uisbn)
text = ltextbook[0]
luser = User.objects.filter(username = uuser)
quser = luser[0]
post = Posting.objects.filter((Q(user = quser) & Q(textbook = ltextbook)))
posting = post[0]
return render_to_response(
'textchange/contactpost.html',
locals(),
context_instance=RequestContext(request)
)
@login_required
def contactwish(request, uuser, uisbn):
ltextbook = Textbook.objects.filter(isbn = uisbn)
text = ltextbook[0]
luser = User.objects.filter(username = uuser)
quser = luser[0]
wish = Wishlist.objects.filter((Q(user = quser) & Q(textbook = ltextbook)))
wishlist = wish[0]
return render_to_response(
'textchange/contactwish.html',
locals(),
context_instance=RequestContext(request)
)
```
Why would the CSS stop working?
Thanks. | The URL for static is going up two directories; but your path is now three directories deep, so the URL is wrong.
You shouldn't be using relative URLs for your static links. Instead, use absolute ones:
```
<link href="/static/textchange/index.css" rel="stylesheet">
```
Even better, use the `{% static %}` tag, which builds the URL from the value of `STATIC_URL` in your settings file (remember to put `{% load static %}` at the top of the template):
```
{% load static %}
<link href="{% static "textchange/index.css" %}" rel="stylesheet">
``` |
How can I efficiently read and write files that are too large to fit in memory? | 32,162,295 | 17 | 2015-08-23T01:15:43Z | 32,166,257 | 7 | 2015-08-23T11:40:28Z | [
"python",
"numpy",
"memory-management"
] | I am trying to calculate the cosine similarity of 100,000 vectors, and each of these vectors has 200,000 dimensions.
From reading other questions I know that [memmap](http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html), PyTables and h5py are my best bets for handling this kind of data, and I am currently working with two memmaps; one for reading the vectors, the other for storing the matrix of cosine similarities.
Here is my code:
```
import numpy as np
import scipy.spatial.distance as dist
xdim = 200000
ydim = 100000
wmat = np.memmap('inputfile', dtype = 'd', mode = 'r', shape = (xdim,ydim))
dmat = np.memmap('outputfile', dtype = 'd', mode = 'readwrite', shape = (ydim,ydim))
for i in np.arange(ydim):
for j in np.arange(i+1,ydim):
dmat[i,j] = dist.cosine(wmat[:,i],wmat[:,j])
dmat.flush()
```
Currently, htop reports that I am using 224G of VIRT memory, and 91.2G of RES memory which is climbing steadily. It seems to me as if, by the end of the process, the entire output matrix will be stored in memory, which is something I'm trying to avoid.
QUESTION:
Is this a correct usage of memmaps, am I writing to the output file in a memory efficient manner (by which I mean that only the necessary parts of the in- and output files i.e. `dmat[i,j]` and `wmat[:,i/j]`, are stored in memory)?
If not, what did I do wrong, and how can I fix this?
Thanks for any advice you may have!
EDIT: I just realized that htop is reporting total system memory usage at 12G, so it seems it is working after all... anyone out there who can enlighten me? RES is now at 111G...
EDIT2: The memmap is created from a 1D array consisting of lots and lots of long decimals quite close to 0, which is shaped to the desired dimensions. The memmap then looks like this.
```
memmap([[ 9.83721223e-03, 4.42584107e-02, 9.85033578e-03, ...,
-2.30691545e-07, -1.65070799e-07, 5.99395837e-08],
[ 2.96711345e-04, -3.84307391e-04, 4.92968462e-07, ...,
-3.41317722e-08, 1.27959347e-09, 4.46846438e-08],
[ 1.64766260e-03, -1.47337747e-05, 7.43660202e-07, ...,
7.50395136e-08, -2.51943163e-09, 1.25393555e-07],
...,
[ -1.88709000e-04, -4.29454722e-06, 2.39720287e-08, ...,
-1.53058717e-08, 4.48678211e-03, 2.48127260e-07],
[ -3.34207882e-04, -4.60275148e-05, 3.36992876e-07, ...,
-2.30274532e-07, 2.51437794e-09, 1.25837564e-01],
[ 9.24923862e-04, -1.59552854e-03, 2.68354822e-07, ...,
-1.08862665e-05, 1.71283316e-07, 5.66851420e-01]])
``` | Memory maps are exactly what the name says: mappings of (virtual) disk sectors into memory pages. The memory is managed by the operating system on demand. If there is enough memory, the system keeps parts of the files in memory, maybe filling up the whole memory; if there is not enough left, the system may discard pages read from the file or swap them out to swap space. Normally you can rely on the OS being as efficient as possible. |
How can I efficiently read and write files that are too large to fit in memory? | 32,162,295 | 17 | 2015-08-23T01:15:43Z | 32,166,493 | 7 | 2015-08-23T12:09:57Z | [
"python",
"numpy",
"memory-management"
] | I am trying to calculate the cosine similarity of 100,000 vectors, and each of these vectors has 200,000 dimensions.
From reading other questions I know that [memmap](http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html), PyTables and h5py are my best bets for handling this kind of data, and I am currently working with two memmaps; one for reading the vectors, the other for storing the matrix of cosine similarities.
Here is my code:
```
import numpy as np
import scipy.spatial.distance as dist
xdim = 200000
ydim = 100000
wmat = np.memmap('inputfile', dtype = 'd', mode = 'r', shape = (xdim,ydim))
dmat = np.memmap('outputfile', dtype = 'd', mode = 'readwrite', shape = (ydim,ydim))
for i in np.arange(ydim):
for j in np.arange(i+1,ydim):
dmat[i,j] = dist.cosine(wmat[:,i],wmat[:,j])
dmat.flush()
```
Currently, htop reports that I am using 224G of VIRT memory, and 91.2G of RES memory which is climbing steadily. It seems to me as if, by the end of the process, the entire output matrix will be stored in memory, which is something I'm trying to avoid.
QUESTION:
Is this a correct usage of memmaps, am I writing to the output file in a memory efficient manner (by which I mean that only the necessary parts of the in- and output files i.e. `dmat[i,j]` and `wmat[:,i/j]`, are stored in memory)?
If not, what did I do wrong, and how can I fix this?
Thanks for any advice you may have!
EDIT: I just realized that htop is reporting total system memory usage at 12G, so it seems it is working after all... anyone out there who can enlighten me? RES is now at 111G...
EDIT2: The memmap is created from a 1D array consisting of lots and lots of long decimals quite close to 0, which is shaped to the desired dimensions. The memmap then looks like this.
```
memmap([[ 9.83721223e-03, 4.42584107e-02, 9.85033578e-03, ...,
-2.30691545e-07, -1.65070799e-07, 5.99395837e-08],
[ 2.96711345e-04, -3.84307391e-04, 4.92968462e-07, ...,
-3.41317722e-08, 1.27959347e-09, 4.46846438e-08],
[ 1.64766260e-03, -1.47337747e-05, 7.43660202e-07, ...,
7.50395136e-08, -2.51943163e-09, 1.25393555e-07],
...,
[ -1.88709000e-04, -4.29454722e-06, 2.39720287e-08, ...,
-1.53058717e-08, 4.48678211e-03, 2.48127260e-07],
[ -3.34207882e-04, -4.60275148e-05, 3.36992876e-07, ...,
-2.30274532e-07, 2.51437794e-09, 1.25837564e-01],
[ 9.24923862e-04, -1.59552854e-03, 2.68354822e-07, ...,
-1.08862665e-05, 1.71283316e-07, 5.66851420e-01]])
``` | In terms of memory usage, there's nothing particularly wrong with what you're doing at the moment. Memmapped arrays are handled at the level of the OS - data to be written is usually held in a temporary buffer, and only committed to disk when the OS deems it necessary. Your OS should never allow you to run out of physical memory before flushing the write buffer.
I'd advise against calling `flush` on every iteration since this defeats the purpose of letting your OS decide when to write to disk in order to maximise efficiency. At the moment you're only writing individual float values at a time.
---
In terms of IO and CPU efficiency, operating on a single line at a time is almost certainly suboptimal. Reads and writes are generally quicker for large, contiguous blocks of data, and likewise your calculation will probably be much faster if you can process many lines at once using vectorization. The general rule of thumb is to process as big a chunk of your array as will fit in memory (including any intermediate arrays that are created during your computation).
[Here's an example](http://stackoverflow.com/a/21096605/1461210) showing how much you can speed up operations on memmapped arrays by processing them in appropriately-sized chunks.
Another thing that can make a huge difference is the memory layout of your input and output arrays. By default, `np.memmap` gives you a C-contiguous (row-major) array. Accessing `wmat` by column will therefore be very inefficient, since you're addressing non-adjacent locations on disk. You would be much better off if `wmat` was F-contiguous (column-major) on disk, or if you were accessing it by row.
The same general advice applies to using HDF5 instead of memmaps, although bear in mind that with HDF5 you will have to handle all the memory management yourself. |
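As a concrete illustration of the chunked, vectorized idea, here is a small in-memory sketch (the array sizes and chunk length are made-up stand-ins; with real data `wmat` and `dmat` would be your memmaps):

```python
import numpy as np

rng = np.random.RandomState(0)
wmat = rng.rand(50, 8)          # stand-in for the big (xdim, ydim) input array
ydim = wmat.shape[1]
chunk = 4                       # tune so chunk * xdim floats fit comfortably in RAM

# normalise the columns once, so cosine distance reduces to 1 - dot product
norms = np.linalg.norm(wmat, axis=0)
normed = wmat / norms

dmat = np.empty((ydim, ydim))   # in real use this would be the output memmap
for start in range(0, ydim, chunk):
    stop = min(start + chunk, ydim)
    # one vectorised block of pairwise distances instead of a Python double loop
    dmat[start:stop, :] = 1.0 - normed[:, start:stop].T.dot(normed)

# spot-check one pair against the definition of cosine distance
i, j = 1, 5
expected = 1.0 - wmat[:, i].dot(wmat[:, j]) / (norms[i] * norms[j])
assert np.isclose(dmat[i, j], expected)
```

Each iteration reads one contiguous block of columns and writes one contiguous block of rows, which is far friendlier to both the OS page cache and the BLAS routines than element-at-a-time access.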
Python - PIP install trouble shooting - PermissionError: [WinError 5] Access is denied | 32,167,418 | 3 | 2015-08-23T13:57:25Z | 32,199,615 | 9 | 2015-08-25T08:53:44Z | [
"python",
"pip"
] | I get the following error when using PIP to either install new packages or even upgrade pip itself to the latest version. I am running pip on a windows 8.1 machine with Python 3.4. It seems there is either a problem with deleting some of the Temp files created when pip tries to uninstall the previous version or the message is telling me I don't have Administrative Permission on the files (my account is an Administrator Account).
Would appreciate any thoughts on how to resolve this, as it is getting in the way of installing packages and progressing with Python.
Error message:
```
Installing collected packages: pip
Found existing installation: pip 6.0.8
Uninstalling pip-6.0.8:
Removing file or directory c:\program files (x86)\python\python34\lib\site-packages\pip-6.0.8.dist-info\description.rst
Cleaning up...
Exception:
Traceback (most recent call last):
File "C:\Program Files (x86)\Python\Python34\lib\shutil.py", line 523, in move
os.rename(src, real_dst)
PermissionError: [WinError 5] Access is denied: 'c:\\program files (x86)\\python\\python34\\lib\\site-packages\\pip-6.0.8.dist-info\\description.rst' -> 'C:\\Users\\User\\AppData\\Local\\Temp\\pip-uze_sc4k-uninstall\\program files (x86)\\python\\python34\\lib\\site-packages\\pip-6.0.8.dist-info\\description.rst'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files (x86)\Python\Python34\lib\site-packages\pip\basecommand.py", line 232, in main
status = self.run(options, args)
File "C:\Program Files (x86)\Python\Python34\lib\site-packages\pip\commands\install.py", line 347, in run
root=options.root_path,
File "C:\Program Files (x86)\Python\Python34\lib\site-packages\pip\req\req_set.py", line 543, in install
requirement.uninstall(auto_confirm=True)
File "C:\Program Files (x86)\Python\Python34\lib\site-packages\pip\req\req_install.py", line 667, in uninstall
paths_to_remove.remove(auto_confirm)
File "C:\Program Files (x86)\Python\Python34\lib\site-packages\pip\req\req_uninstall.py", line 126, in remove
renames(path, new_path)
File "C:\Program Files (x86)\Python\Python34\lib\site-packages\pip\utils\__init__.py", line 316, in renames
shutil.move(old, new)
File "C:\Program Files (x86)\Python\Python34\lib\shutil.py", line 536, in move
os.unlink(src)
PermissionError: [WinError 5] Access is denied: 'c:\\program files (x86)\\python\\python34\\lib\\site-packages\\pip-6.0.8.dist-info\\description.rst'
``` | For those that may run into the same issue:
Run the command prompt as administrator. Having administrator permissions in the account is not always enough. In Windows, things can be run as administrator by right-clicking the executable and selecting "Run as Administrator". So, type "cmd" to the Start menu, right click cmd.exe, and run it as administrator. |
How to create a pandas DatetimeIndex with year as frequency? | 32,168,848 | 5 | 2015-08-23T16:25:01Z | 32,168,932 | 11 | 2015-08-23T16:34:16Z | [
"python",
"python-3.x",
"pandas",
"date-range"
] | Using the `pandas.date_range(startdate, periods=n, freq=f)` function you can create a range of pandas `Timestamp` objects where the `freq` optional paramter denotes the frequency (second, minute, hour, day...) in the range.
The [documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html) does not mention the literals that are expected to be passed in, but after a few minutes you can easily find most of them.
* 's' : second
* 'min' : minute
* 'H' : hour
* 'D' : day
* 'w' : week
* 'm' : month
However, none of 'y', 'Y', 'yr', etc. create dates with year as frequency.
Does anybody know what to pass in, or if it is possible at all? | # Annual indexing to the beginning or end of the year
Frequency is `freq='A'` for end of year frequency, `'AS'` for start of year. Check the [aliases in the documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases).
eg. `pd.date_range(start=pd.datetime(2000, 1, 1), periods=4, freq='A')`
returns
`DatetimeIndex(['2000-12-31', '2001-12-31', '2002-12-31', '2003-12-31'], dtype='datetime64[ns]', freq='A-DEC', tz=None)`
# Annual indexing to the beginning of an arbitrary month
If you need it to be annual from a particular time use an [anchored offset](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#anchored-offsets),
eg. `pd.date_range(start=pd.datetime(2000, 1, 1), periods=10, freq='AS-AUG')`
returns
`DatetimeIndex(['2000-08-01', '2001-08-01', '2002-08-01', '2003-08-01'], dtype='datetime64[ns]', freq='AS-AUG', tz=None)`
# Annual indexing from an arbitrary date
To index from an arbitrary date, begin the series on that date and use a custom [`DateOffset`](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#dateoffset-objects) object.
eg. `pd.date_range(start=pd.datetime(2000, 9, 10), periods=4, freq=pd.DateOffset(years=1))`
returns
`DatetimeIndex(['2000-09-10', '2001-09-10', '2002-09-10', '2003-09-10'], dtype='datetime64[ns]', freq='<DateOffset: kwds={'years': 1}>', tz=None)` |
copy 2D array into 3rd dimension, N times (Python) | 32,171,917 | 4 | 2015-08-23T21:51:14Z | 32,171,971 | 7 | 2015-08-23T21:58:39Z | [
"python",
"arrays",
"numpy"
] | I'm looking for a succinct way to copy a numpy 2D array into a third dimension; that is, for example, if I had such a matrix:
```
[[1,2];[1,2]]
```
I could make it into a 3D matrix with N such copies in a new dimension, something like this for N=3:
```
[[[1,2];[1,2]];[[1,2];[1,2]];[[1,2];[1,2]]]
```
Now I know you can just concatenate, but this is quite laborious if N=200. Thanks. | Probably the cleanest way is to use [`np.repeat`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html):
```
a = np.array([[1, 2], [1, 2]])
print(a.shape)
# (2, 2)
# indexing with np.newaxis inserts a new 3rd dimension, which we then repeat the
# array along, (you can achieve the same effect by indexing with None, see below)
b = np.repeat(a[:, :, np.newaxis], 3, axis=2)
print(b.shape)
# (2, 2, 3)
print(b[:, :, 0])
# [[1 2]
# [1 2]]
print(b[:, :, 1])
# [[1 2]
# [1 2]]
print(b[:, :, 2])
# [[1 2]
# [1 2]]
```
---
Having said that, you can often avoid repeating your arrays altogether by using [broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). For example, let's say I wanted to add a `(3,)` vector:
```
c = np.array([1, 2, 3])
```
to `a`. I could copy the contents of `a` 3 times in the third dimension, then copy the contents of `c` twice in both the first and second dimensions, so that both of my arrays were `(2, 2, 3)`, then compute their sum. However, it's much simpler and quicker to do this:
```
d = a[..., None] + c[None, None, :]
```
Here, `a[..., None]` has shape `(2, 2, 1)` and `c[None, None, :]` has shape `(1, 1, 3)`. When I compute the sum, the result gets 'broadcast' out along the dimensions of size 1, giving me a result of shape `(2, 2, 3)`:
```
print(d.shape)
# (2, 2, 3)
print(d[..., 0]) # a + c[0]
# [[2 3]
# [2 3]]
print(d[..., 1]) # a + c[1]
# [[3 4]
# [3 4]]
print(d[..., 2]) # a + c[2]
# [[4 5]
# [4 5]]
```
Broadcasting is a very powerful technique because it avoids the additional overhead involved in creating repeated copies of your input arrays in memory. |
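A related option worth knowing about: if a read-only repeated view is enough, `np.broadcast_to` (added in numpy 1.10) materialises the broadcast shape without copying any data. A sketch:

```python
import numpy as np

a = np.array([[1, 2], [1, 2]])
# a view of `a` with a new length-3 axis; no data is copied
b = np.broadcast_to(a[:, :, None], (2, 2, 3))
print(b.shape)      # (2, 2, 3)
print(b[:, :, 0])   # [[1 2]
                    #  [1 2]]
# the view is read-only; call b.copy() if you need to write to it
```

This makes it the cheapest choice when you only need to read from the repeated array.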
How can I retrieve the current seed of NumPy's random number generator? | 32,172,054 | 7 | 2015-08-23T22:10:14Z | 32,172,816 | 8 | 2015-08-24T00:06:06Z | [
"python",
"numpy",
"random",
"random-seed",
"mersenne-twister"
] | The following imports NumPy and sets the seed.
```
import numpy as np
np.random.seed(42)
```
However, I'm not interested in setting the seed but more in reading it. `random.get_state()` does not seem to contain the seed. The [documentation](http://docs.scipy.org/doc/numpy/reference/routines.random.html) doesn't show an obvious answer.
How do I retrieve the current seed used by `numpy.random`, assuming I did not set it manually?
I want to use the current seed to carry over for the next iteration of a process. | The short answer is that you simply can't (at least not in general).
The [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_Twister) RNG used by numpy has 2<sup>19937</sup> − 1 possible internal states, whereas a single 64 bit integer has only 2<sup>64</sup> possible values. It's therefore impossible to map every RNG state to a unique integer seed.
You *can* get and set the internal state of the RNG directly using [`np.random.get_state`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.get_state.html) and [`np.random.set_state`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.set_state.html). The output of `get_state` is a tuple whose second element is a `(624,)` array of 32 bit integers. This array has more than enough bits to represent every possible internal state of the RNG (2<sup>624 × 32</sup> > 2<sup>19937</sup> − 1).
The tuple returned by `get_state` can be used much like a seed in order to create reproducible sequences of random numbers. For example:
```
import numpy as np
# randomly initialize the RNG from some platform-dependent source of entropy
np.random.seed(None)
# get the initial state of the RNG
st0 = np.random.get_state()
# draw some random numbers
print(np.random.randint(0, 100, 10))
# [ 8 76 76 33 77 26 3 1 68 21]
# set the state back to what it was originally
np.random.set_state(st0)
# draw again
print(np.random.randint(0, 100, 10))
# [ 8 76 76 33 77 26 3 1 68 21]
``` |
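If the underlying goal is just reproducibility across iterations, a common workaround (a sketch: it records a seed you choose up front, it does not recover an unknown one) is:

```python
import numpy as np

# pick and record your own seed up front (staying below 2**31 keeps it a safe C long)
seed = np.random.randint(0, 2**31 - 1)
np.random.seed(seed)
first = np.random.rand(3)

# later, e.g. in the next iteration of the process, reuse the recorded seed
np.random.seed(seed)
again = np.random.rand(3)
print(np.array_equal(first, again))  # True
```

Log `seed` somewhere persistent and you can replay any run exactly.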
Should python imports take this long? | 32,173,861 | 3 | 2015-08-24T02:54:12Z | 32,173,887 | 7 | 2015-08-24T02:57:50Z | [
"python",
"time",
"import"
] | For the following command
```
time python test.py
```
on this script, test.py
```
import numpy as np
from math import *
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.colors import LogNorm
from scipy import stats
```
I get the output:
```
real 0m1.933s
user 0m1.322s
sys 0m0.282s
```
Is there something wrong? Or is this how long imports should take? | Some modules initialize lazily when you first use them, while others initialize everything up front when they are imported. Matplotlib is one of the latter.
Since matplotlib is a huge package that includes a whole lot of functionality, I'm not surprised that it takes this long, although it can get annoying.
So, in answer to your question, *yes to some.*
If you want a "solution" to your problem, you might want to import matplotlib only when you're going to use it, or have a loading screen / `print` at the beginning of your program. |
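To see where the time goes, you can time imports yourself with just the standard library (Python 3 shown). The first import of a module runs its initialisation; repeat imports are nearly-free `sys.modules` lookups, and if the module was already loaded elsewhere, both calls here will be cache hits:

```python
import importlib
import sys
import time

def timed_import(name):
    # first import runs the module's top-level code; repeats hit sys.modules
    start = time.perf_counter()
    module = importlib.import_module(name)
    return module, time.perf_counter() - start

mod, first = timed_import('json')   # real initialisation, unless preloaded
mod, cached = timed_import('json')  # cache hit: just a dict lookup
print('first: %.6fs, cached: %.6fs' % (first, cached))
```

Swapping `'json'` for `'matplotlib.pyplot'` (if installed) makes the gap much more dramatic.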
Hard coded variables in python function | 32,179,653 | 4 | 2015-08-24T10:07:51Z | 32,179,672 | 10 | 2015-08-24T10:09:00Z | [
"python",
"constants",
"python-internals"
] | Sometimes, some values/strings are hard-coded in functions. For example in the following function, I define a "constant" comparing string and check against it.
```
def foo(s):
c_string = "hello"
if s == c_string:
return True
return False
```
Without discussing too much about why it's bad to do this, and how it should be defined in the outer scope, I'm wondering what happens behind the scenes when it **is** defined this way.
Does the string get created each call?
If instead of the string `"hello"` it was the list `[1,2,3]` (or a list with mutable content, if it matters), would the same happen? | Because the string is immutable (as a tuple would be), it is stored with the bytecode object for the function. It is loaded by a very simple and fast index lookup. This is actually *faster* than a global lookup.
You can see this in a disassembly of the bytecode, using the [`dis.dis()` function](https://docs.python.org/2/library/dis.html#dis.dis):
```
>>> import dis
>>> def foo(s):
... c_string = "hello"
... if s == c_string:
... return True
... return False
...
>>> dis.dis(foo)
2 0 LOAD_CONST 1 ('hello')
3 STORE_FAST 1 (c_string)
3 6 LOAD_FAST 0 (s)
9 LOAD_FAST 1 (c_string)
12 COMPARE_OP 2 (==)
15 POP_JUMP_IF_FALSE 22
4 18 LOAD_GLOBAL 0 (True)
21 RETURN_VALUE
5 >> 22 LOAD_GLOBAL 1 (False)
25 RETURN_VALUE
>>> foo.__code__.co_consts
(None, 'hello')
```
The `LOAD_CONST` opcode loads the string object from the `co_consts` array that is part of the code object for the function; the reference is pushed to the top of the stack. The `STORE_FAST` opcode takes the reference from the top of the stack and stores it in the locals array, again a very simple and fast operation.
For mutable literals (`{..}`, `[..]`) special opcodes build the object, with the contents still treated as constants as much as possible (more complex structures just follow the same building blocks):
```
>>> def bar(): return ['spam', 'eggs']
...
>>> dis.dis(bar)
1 0 LOAD_CONST 1 ('spam')
3 LOAD_CONST 2 ('eggs')
6 BUILD_LIST 2
9 RETURN_VALUE
```
The `BUILD_LIST` call creates the new list object, using two constant string objects.
Interesting fact: If you used a list object for a membership test (`something in ['option1', 'option2', 'option3']` Python knows the list object will never be mutated and will convert it to a tuple for you at compile time (a so-called peephole optimisation). The same applies to a set literal, which is converted to a `frozenset()` object, but only in Python 3.2 and newer. See [Tuple or list when using 'in' in an 'if' clause?](http://stackoverflow.com/questions/25368337/tuple-or-list-when-using-in-in-an-if-clause)
Note that your sample function is using booleans rather verbosely; you could just have used:
```
def foo(s):
c_string = "hello"
return s == c_string
```
for the exact same result, avoiding the `LOAD_GLOBAL` calls in Python 2 (Python 3 made `True` and `False` keywords so the values can also be stored as constants). |
Making Python run a few lines before my script | 32,184,440 | 7 | 2015-08-24T14:10:04Z | 32,236,228 | 7 | 2015-08-26T20:34:51Z | [
"python",
"python-import"
] | I need to run a script `foo.py`, but I need to also insert some debugging lines to run before the code in `foo.py`. Currently I just put those lines in `foo.py` and I'm careful not to commit that to Git, but I don't like this solution.
What I want is a separate file `bar.py` that I don't commit to Git. Then I want to run:
```
python /somewhere/bar.py /somewhere_else/foo.py
```
What I want this to do is first run some lines of code in `bar.py`, and then run `foo.py` as `__main__`. It should be in the same process that the `bar.py` lines ran in, otherwise the debugging lines won't help.
Is there a way to make `bar.py` do this?
Someone suggested this:
```
import imp
import sys
# Debugging code here
fp, pathname, description = imp.find_module(sys.argv[1])
imp.load_module('__main__', fp, pathname, description)
```
The problem with that is that because it uses import machinery, I need to be on the same folder as `foo.py` to run that. I don't want that. I want to simply put in the full path to `foo.py`.
Also: The solution needs to work with `.pyc` files as well. | You can use [`execfile()`](https://docs.python.org/2/library/functions.html#execfile) if the file is `.py` and [uncompyle2](https://github.com/wibiti/uncompyle2/blob/master/scripts/uncompyle2) if the file is `.pyc`.
Let's say you have your file structure like:
```
test|-- foo.py
|-- bar
|--bar.py
```
**foo.py**
```
import sys
a = 1
print ('debugging...')
# run the other file
if sys.argv[1].endswith('.py'): # if .py run right away
execfile(sys.argv[1], globals(), locals())
elif sys.argv[1].endswith('.pyc'): # if .pyc, first uncompyle, then run
import uncompyle2
from StringIO import StringIO
f = StringIO()
uncompyle2.uncompyle_file(sys.argv[1], f)
f.seek(0)
exec(f.read(), globals(), locals())
```
**bar.py**
```
print a
print 'real job'
```
And in `test/`, if you do:
```
$ python foo.py bar/bar.py
$ python foo.py bar/bar.pyc
```
Both, outputs the same:
```
debugging...
1
real job
```
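As a side note, the standard library's `runpy` module covers this use case directly: `runpy.run_path()` takes a full path to a `.py` or `.pyc` file and runs it as `__main__` in the current process. A self-contained sketch (a temporary file stands in for `sys.argv[1]`):

```python
import os
import runpy
import tempfile

# Stand-in for the real target script (sys.argv[1] in foo.py):
fd, target = tempfile.mkstemp(suffix='.py')
with os.fdopen(fd, 'w') as f:
    f.write("print('my name is %s' % __name__)\n")

# debugging code would go here...
print('debugging...')

# Run the target (source or compiled bytecode) as __main__, in this process:
runpy.run_path(target, run_name='__main__')  # prints: my name is __main__
os.remove(target)
```

`runpy.run_path()` is available from Python 2.7 onward.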
Please also see this [answer](http://stackoverflow.com/a/32261548/547820). |
Break // in x axis of matplotlib | 32,185,411 | 2 | 2015-08-24T14:56:23Z | 32,186,074 | 7 | 2015-08-24T15:28:03Z | [
"python",
"matplotlib",
"plot"
] | Best way to describe what I want to achieve is using my own image:
[](http://i.stack.imgur.com/REeKv.png)
Now I have a lot of dead space in the spectra plot, especially between 5200 and 6300. My question is quite simple, how would I add in a nice little // break that looks something similar to this (image lifted from the net):
[](http://i.stack.imgur.com/f9aD2.png)
I'm using this setup for my plots:
```
nullfmt = pyplot.NullFormatter()
fig = pyplot.figure(figsize=(16,6))
gridspec_layout1= gridspec.GridSpec(2,1)
gridspec_layout1.update(left=0.05, right=0.97, hspace=0, wspace=0.018)
pyplot_top = fig.add_subplot(gridspec_layout1[0])
pyplot_bottom = fig.add_subplot(gridspec_layout1[1])
pyplot_top.xaxis.set_major_formatter(nullfmt)
```
I'm quite certain it is achievable with gridspec, but an advanced tutorial covering exactly how this is achieved would be greatly appreciated.
Apologies also if this question has been dealt with previously on stackoverflow but I have looked extensively for the correct procedure for `gridSpec` but found nothing as yet.
I have managed to go as far as this, pretty much there:
[](http://i.stack.imgur.com/5XXMY.png)
However, my break lines are not as steep as I would like them...how do I change them? (I have made use of the example answer below) | You could adapt [the matplotlib example](http://matplotlib.org/examples/pylab_examples/broken_axis.html) for a break in the x-axis directly:
```
"""
Broken axis example, where the x-axis will have a portion cut out.
"""
import matplotlib.pylab as plt
import numpy as np
x = np.linspace(0,10,100)
x[75:] = np.linspace(40,42.5,25)
y = np.sin(x)
f,(ax,ax2) = plt.subplots(1,2,sharey=True, facecolor='w')
# plot the same data on both axes
ax.plot(x, y)
ax2.plot(x, y)
ax.set_xlim(0,7.5)
ax2.set_xlim(40,42.5)
# hide the spines between ax and ax2
ax.spines['right'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax.yaxis.tick_left()
ax.tick_params(labelright='off')
ax2.yaxis.tick_right()
# This looks pretty good, and was fairly painless, but you can get that
# cut-out diagonal lines look with just a bit more work. The important
# thing to know here is that in axes coordinates, which are always
# between 0-1, spine endpoints are at these locations (0,0), (0,1),
# (1,0), and (1,1). Thus, we just need to put the diagonals in the
# appropriate corners of each of our axes, and so long as we use the
# right transform and disable clipping.
d = .015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color='k', clip_on=False)
ax.plot((1-d,1+d), (-d,+d), **kwargs)
ax.plot((1-d,1+d),(1-d,1+d), **kwargs)
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d,+d), (1-d,1+d), **kwargs)
ax2.plot((-d,+d), (-d,+d), **kwargs)
# What's cool about this is that now if we vary the distance between
# ax and ax2 via f.subplots_adjust(hspace=...) or plt.subplot_tool(),
# the diagonal lines will move accordingly, and stay right at the tips
# of the spines they are 'breaking'
plt.show()
```
[](http://i.stack.imgur.com/L3Guy.png)
For your purposes, just plot your data twice (once on each axis, `ax` and `ax2` and set your `xlim`s appropriately. The "break lines" should move to match the new break because they are plotted in relative axis coordinates rather than data coordinates.
The break lines are just unclipped plot lines drawn between a pair of points. E.g. `ax.plot((1-d,1+d), (-d,+d), **kwargs)` plots the break line between point `(1-d,-d)` and `(1+d,+d)` on the first axis: this is the bottom righthand one. If you want to change the graident, change these values appropriately. For example, to make this one steeper, try `ax.plot((1-d/2,1+d/2), (-d,+d), **kwargs)` |
from matplotlib.backends import _tkagg ImportError: cannot import name _tkagg | 32,188,180 | 9 | 2015-08-24T17:29:33Z | 34,030,409 | 22 | 2015-12-01T21:21:35Z | [
"python",
"matplotlib",
"pip",
"virtualenv",
"tk"
] | While trying to run [this](http://matplotlib.org/examples/user_interfaces/embedding_in_tk.html) example to test how matplotlib works with Tkinter, I am getting the error:
```
(env)fieldsofgold@fieldsofgold-VirtualBox:~/new$ python test.py
Traceback (most recent call last):
File "test.py", line 7, in <module>
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
File "/home/fieldsofgold/new/env/local/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 13, in <module>
import matplotlib.backends.tkagg as tkagg
File "/home/fieldsofgold/new/env/local/lib/python2.7/site-packages/matplotlib/backends/tkagg.py", line 7, in <module>
from matplotlib.backends import _tkagg
ImportError: cannot import name _tkagg
```
Using the solution provided [here](http://stackoverflow.com/questions/13110403/matplotlib-backend-missing-modules-with-underscore), I've tried to uninstall matplotlib and install the tk and tk-dev packages by using these commands :
```
sudo apt-get install tk8.5
sudo apt-get install tk-dev
```
and then re-installing matplotlib again by `pip install matplotlib`
but I am still getting the same error. Any help would be appreciated. I am using Ubuntu 14.04 on VirtualBox and working inside a virtualenv environment.
Thanks so much. | I just ran into this (Ubuntu 15.10 but same idea) and fixed it by:
```
sudo apt-get install tk-dev
pip uninstall -y matplotlib
pip --no-cache-dir install -U matplotlib
```
I think the third step was the critical one; if the cache is permitted then `pip` appeared to be just using the previously-built installation of `matplotlib`.
You can also manually remove the previously-built matplotlib; on Ubuntu it lives in `~/.cache/pip` somewhere. (I couldn't find a way for pip to clean up its cache unfortunately.) |
Getting the indices of several elements in a NumPy array at once | 32,191,029 | 3 | 2015-08-24T20:18:56Z | 32,191,125 | 9 | 2015-08-24T20:24:33Z | [
"python",
"arrays",
"numpy"
] | Is there any way to get the indices of several elements in a NumPy array at once?
E.g.
```
import numpy as np
a = np.array([1, 2, 4])
b = np.array([1, 2, 3, 10, 4])
```
I would like to find the index of each element of `a` in `b`, namely: `[0,1,4]`.
I find the solution I am using a bit verbose:
```
import numpy as np
a = np.array([1, 2, 4])
b = np.array([1, 2, 3, 10, 4])
c = np.zeros_like(a)
for i, aa in np.ndenumerate(a):
c[i] = np.where(b==aa)[0]
print('c: {0}'.format(c))
```
Output:
```
c: [0 1 4]
``` | You could use [`in1d`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html) and [`nonzero`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html) (or `where` for that matter):
```
>>> np.in1d(b, a).nonzero()[0]
array([0, 1, 4])
```
This works fine for your example arrays, but in general the array of returned indices does not honour the order of the values in `a`. This may be a problem depending on what you want to do next.
In that case, a much better answer is the one @Jaime gives [here](http://stackoverflow.com/a/29829495/3923281), using `searchsorted`:
```
>>> sorter = np.argsort(b)
>>> sorter[np.searchsorted(b, a, sorter=sorter)]
array([0, 1, 4])
```
This returns the indices for values as they appear in `a`. For instance:
```
a = np.array([1, 2, 4])
b = np.array([4, 2, 3, 1])
>>> sorter = np.argsort(b)
>>> sorter[np.searchsorted(b, a, sorter=sorter)]
array([3, 1, 0]) # the other method would return [0, 1, 3]
``` |
Python AND operator on two boolean lists - how? | 32,192,163 | 8 | 2015-08-24T21:38:24Z | 32,192,248 | 10 | 2015-08-24T21:44:45Z | [
"python",
"list",
"boolean",
"operator-keyword"
] | I have two boolean lists, e.g.,
```
x=[True,True,False,False]
y=[True,False,True,False]
```
I want to AND these lists together, with the expected output:
```
xy=[True,False,False,False]
```
I thought that expression `x and y` would work, but came to discover that it does not: in fact, `(x and y) != (y and x)`
Output of `x and y`: `[True,False,True,False]`
Output of `y and x`: `[True,True,False,False]`
Using list comprehension *does* have correct output. Whew!
```
xy = [x[i] and y[i] for i in range(len(x))]
```
Mind you I could not find any reference that told me the AND operator would work as I tried with x and y. But it's easy to try things in Python.
Can someone explain to me what is happening with `x and y`?
And here is a simple test program:
```
import random
random.seed()
n = 10
x = [random.random() > 0.5 for i in range(n)]
y = [random.random() > 0.5 for i in range(n)]
# Next two methods look sensible, but do not work
a = x and y
z = y and x
# Next: apparently only the list comprehension method is correct
xy = [x[i] and y[i] for i in range(n)]
print 'x : %s'%str(x)
print 'y : %s'%str(y)
print 'x and y : %s'%str(a)
print 'y and x : %s'%str(z)
print '[x and y]: %s'%str(xy)
``` | `and` simply returns either the first or the second operand, based on their truth value. If the first operand is considered false, it is returned, otherwise the other operand is returned.
Lists are considered *true* when *not empty*, so both lists are considered true. Their contents *don't play a role here*.
Because both lists are not empty, `x and y` simply returns the second list object; only if `x` was empty would it be returned instead:
```
>>> [True, False] and ['foo', 'bar']
['foo', 'bar']
>>> [] and ['foo', 'bar']
[]
```
See the [*Truth value testing* section](https://docs.python.org/2/library/stdtypes.html#truth-value-testing) in the Python documentation:
> Any object can be tested for truth value, for use in an `if` or `while` condition or as operand of the Boolean operations below. The following values are considered false:
>
> [...]
>
> * any empty sequence, for example, `''`, `()`, `[]`.
>
> [...]
>
> **All other values are considered true** â so objects of many types are always true.
(emphasis mine), and the [*Boolean operations* section](https://docs.python.org/2/library/stdtypes.html#boolean-operations-and-or-not) right below that:
> `x and y`
> if *x* is false, then *x*, else *y*
>
> This is a short-circuit operator, so it only evaluates the second argument if the first one is `True`.
You indeed need to test the values *contained* in the lists explicitly. You can do so with a list comprehension, as you discovered. You can rewrite it with the [`zip()` function](https://docs.python.org/2/library/functions.html#zip) to pair up the values:
```
[a and b for a, b in zip(x, y)]
``` |
Error with Sklearn Random Forest Regressor | 32,198,355 | 3 | 2015-08-25T07:49:37Z | 32,198,885 | 7 | 2015-08-25T08:17:01Z | [
"python",
"numpy",
"machine-learning",
"scikit-learn",
"random-forest"
] | When trying to fit a Random Forest Regressor model with y data that looks like this:
```
[ 0.00000000e+00 1.36094276e+02 4.46608221e+03 8.72660888e+03
1.31375786e+04 1.73580193e+04 2.29420671e+04 3.12216341e+04
4.11395711e+04 5.07972062e+04 6.14904935e+04 7.34275322e+04
7.87333933e+04 8.46302456e+04 9.71074959e+04 1.07146672e+05
1.17187952e+05 1.26953374e+05 1.37736003e+05 1.47239359e+05
1.53943242e+05 1.78806710e+05 1.92657725e+05 2.08912711e+05
2.22855152e+05 2.34532982e+05 2.41391255e+05 2.48699216e+05
2.62421197e+05 2.79544300e+05 2.95550971e+05 3.13524275e+05
3.23365158e+05 3.24069067e+05 3.24472999e+05 3.24804951e+05
```
And X data that looks like this:
```
[ 735233.27082176 735234.27082176 735235.27082176 735236.27082176
735237.27082176 735238.27082176 735239.27082176 735240.27082176
735241.27082176 735242.27082176 735243.27082176 735244.27082176
735245.27082176 735246.27082176 735247.27082176 735248.27082176
```
With the following code:
```
regressor = RandomForestRegressor(n_estimators=150, min_samples_split=1)
rgr = regressor.fit(X,y)
```
I get this error:
```
ValueError: Number of labels=600 does not match number of samples=1
```
I assume one of my sets of values is in the wrong format but it's not too clear to me from the documentation. | The shape of `X` should be `[n_samples, n_features]`; you can transform `X` by
```
X = X[:, None]
``` |
How to do while() the "pythonic way" | 32,199,150 | 4 | 2015-08-25T08:30:52Z | 32,199,207 | 11 | 2015-08-25T08:33:34Z | [
"python"
] | I want to do this:
```
from django.db import connection
cursor = connection.cursor()
cursor.execute("PRAGMA table_info(ventegroupee)")
while row = cursor.fetchone():
print(row)
```
I get this:
```
File "<input>", line 1
while row = cursor.fetchone():
^
SyntaxError: invalid syntax
```
What is the "pythonic" way of doing this? | You don't have to use `while` loop at all, because cursors are iterable:
```
for row in cursor:
print(row)
```
From the "Connections and cursors" section of [Django documentation](https://docs.djangoproject.com/en/1.8/topics/db/sql/#connections-and-cursors):
> *connection* and *cursor* mostly implement the standard Python DB-API
> described in PEP 249 â except when it comes to transaction handling.
From the mentioned [PEP 249](https://www.python.org/dev/peps/pep-0249/#next):
> *Cursor.next()*
>
> Return the next row from the currently executing SQL statement using
> the same semantics as *.fetchone()*
>
> *Cursor.\_\_iter\_\_()*
>
> Return self to make cursors compatible to the iteration protocol |
Finding the "best" combination for a set | 32,202,797 | 17 | 2015-08-25T11:27:19Z | 32,360,908 | 7 | 2015-09-02T18:48:53Z | [
"python",
"algorithm",
"statistics",
"combinations",
"linguistics"
] | I have a set, `sentences`, which contains sentences from the English language in the form of strings. I wish to create a subset of `sentences`, `sentences2`, which contains sentences containing only 20 unique words. Of course, there are many, many such subsets, but I'm looking for the "best" one and by "best" I mean that subset where all words have the highest possible representation in `sentences2`.
The following example, will further clarify what I mean by "best":
If I was to filter `sentences` for this set of words:
```
(i,you,do,think,yes,dont,can,it,good,cant,but,am,why,where,now,no,know,here,feel,are)
```
I would get the following:
```
sentences2 = set(("where are you now", "here i am", "can you do it", "yes i can", "but can i do it", "no you cant", "do you feel good", "yes i do", "why are you here", "i dont know", "i think i know why", "you dont think", "yes i do", "no you dont", "i dont think you think", "i feel good", "but i am good", "i cant do it now", "yes you can", "but i cant", "where do you think i am"))
```
and here each word is represented at least twice, as we can see if we use a counter on sentences2:
```
c = collections.Counter({'i': 13, 'you': 10, 'do': 6, 'think': 5, 'dont': 4, 'can': 4, 'good': 3, 'but': 3, 'am': 3, 'it': 3, 'cant': 3, 'yes': 3, 'know': 2, 'no': 2, 'here': 2, 'why': 2, 'feel': 2, 'are': 2, 'now': 2, 'where': 2})
```
If each word is represented at least twice we can say that this set of 20 words has a score of 2.
```
score = min(c.values())
```
However, the following set:
```
(i,you,he,do,think,yes,dont,can,it,good,cant,but,am,why,where,now,no,here,she,are)
```
has a score of 5, since if I use it to filter `sentences`, I get a `sentences2` where each word is represented at least five times.
So I'm after the highest possible score for all possible 20 word combinations.
Here is my attempt at solving this problem:
```
sentences = ... # all the sentences in my text
common_words = ... # the hundred most common words in the text
result_size = 20
highest_score = 0
for sample in itertools.combinations(common_words, result_size):
sentences2 = list(filter(lambda s: set(s).issubset(sample), sentences))
c = Counter([j for i in sentences2 for j in i])
if len(c.values()) and min(c.values()) > highest_score:
# this is the set with the highest score to date
print(c)
highest_score = min(c.values())
```
However, this algorithm will take forever to compute, with 5.3598337040381E+20 combinations if I'm not mistaken. Can you suggest how I might go about solving this with a much faster algorithm?
Please note that the resulting set can contain less than 20 words and that this is completely fine. For example, `c.values()` in my algorithm does not have to match the size of `result_size`.
Also note that I'm expecting the words in the resulting set to be found in the top one hundred words (`common_words` contains 100 values). This is also by design. | **Disclaimer:** You have not specified the data characteristics, so my answer will assume that it is not too large (at most 1,000,000 sentences, each with at most 1,000 words). Also, the description is a bit complicated and I might not have understood the problem fully.
**Solution:**
Instead of focusing on different combinations, why don't you create a hashMap(`dict` in python) for your 100 most frequently used words, then traverse the array of senteces and for each word in each sentence, increase its corresponding value(if it is already inside the dict).
In the end, just sort this hashMap according to the number of occurences(value) of each word(key), then use the most frequent 20.
**Complexity:**
A quick look at the algorithm gives:
We traverse the N sentences, each with up to M words, incrementing the hashMap values as we go. At the end we sort an array of (word, occurrences) pairs, which is negligible (the hashMap size is constant: 100 frequently used words), and extract the first 20.
Time Complexity : O(N\*M)
Space complexity : O(1) (we don't need to store the sentences, we just have the hashMap)
**Sample Code:**
Here is a quick pseudo-code sketch:
```
word_occur_dict = {#initialized with frequent words as keys, and zero as value for all}
for sentence in sentences: #for each sentence
sentence_words = sentence.split(" ") #construct the word list
for word in sentence_words: #for each word
if word in word_occur_dict: #if it is a frequent word, increase value
word_occur_dict[word]++
final_result = sort_dict(word_occur_dict)[:20] #returns list of tuples
```
**Python Code:**
```
import operator
common_words = ["do","think","yes","dont","can","it","good","cant","but","am","why","where","now","no","know","here","feel","are","i","you","he","she"]
common_words_dict = {}
sentences = ["where are you now", "here i am", "can you do it", "yes i can", "but can i do it", "no you cant", "do you feel good", "yes i do", "why are you here", "i dont know", "i think i know why", "you dont think", "yes i do", "no you dont", "i dont think you think", "i feel good", "but i am good", "i cant do it now", "yes you can", "but i cant", "where do you think i am"]
for w in common_words: #initialize the dict
common_words_dict[w] = 0
for sentence in sentences: #for each sentence
sentence_words = sentence.split(" ") #construct the word list
for word in sentence_words: #for each word
if word in common_words_dict: #if it is a frequent word, increase value
common_words_dict[word] = common_words_dict[word]+1
sorted_word_dict = sorted(common_words_dict.items(), key=operator.itemgetter(1))
print sorted_word_dict[::-1][:20]
```
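As a side note, `collections.Counter` can take over the manual dict bookkeeping; a minimal sketch (using a small made-up sentence list):

```python
from collections import Counter

sentences = ["where are you now", "here i am", "can you do it"]
common_words = {"i", "you", "do", "it", "am", "are", "can", "now", "here", "where"}

# Count only the frequent words, across all sentences:
counts = Counter(w for s in sentences for w in s.split() if w in common_words)
print(counts.most_common(1))  # [('you', 2)] -- 'you' appears in two sentences
```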
By the way, 'he' and 'she' do not appear anywhere in the sentences, but you said the following word combination has a score of 5
> (i,you,he,do,think,yes,dont,can,it,good,cant,but,am,why,where,now,no,here,she,are)
Have I misunderstood the problem?
Credit where it is due: [StackOverflow: Sort a Python dictionary by value](http://stackoverflow.com/a/613218/1329429) |
how to determine the minimal value in a column of a list of tuples in python | 32,205,413 | 3 | 2015-08-25T13:31:49Z | 32,205,472 | 10 | 2015-08-25T13:34:05Z | [
"python"
] | I have the following list of tuples
```
lstoflsts = [(1.2, 2.1, 3.1),
(0.9, 3.4, 7.4),
(2.3, 1.1, 5.1)]
```
I would like to get the minimum value of the 2nd column (which is *1.1* based on above example).
I tried playing around with `min(listoflists)` without success.
Any suggestions on how to approach this?
note: if possible I would like to avoid looping over rows and columns... | The simplest way is to use `min` with a `key` function:
```
>>> lstoflsts = [(1.2, 2.1, 3.1),
... (0.9, 3.4, 7.4),
... (2.3, 1.1, 5.1)]
>>>
>>> min(lstoflsts, key=lambda x: x[1])
(2.3, 1.1, 5.1)
>>> min(lstoflsts, key=lambda x: x[1])[1]
1.1
``` |
Is there a multi-dimensional version of arange/linspace in numpy? | 32,208,359 | 5 | 2015-08-25T15:41:25Z | 32,208,788 | 9 | 2015-08-25T16:02:16Z | [
"python",
"numpy"
] | I would like a list of 2d numpy arrays (x,y) , where each x is in {-5, -4.5, -4, -3.5, ..., 3.5, 4, 4.5, 5} and the same for y.
I could do
```
x = np.arange(-5, 5.1, 0.5)
y = np.arange(-5, 5.1, 0.5)
```
and then iterate through all possible pairs, but I'm sure there's a nicer way...
I would like something back that looks like:
```
[[-5, -5],
[-5, -4.5],
[-5, -4],
...
[5, 5]]
```
but the order does not matter. | You can use [`np.mgrid`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html) for this, it's often more convenient than [`np.meshgrid`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html) because it creates the arrays in one step:
```
import numpy as np
X,Y = np.mgrid[-5:5.1:0.5, -5:5.1:0.5]
```
For linspace-like functionality, replace the step (i.e. `0.5`) with [a complex number](http://stackoverflow.com/questions/8370637/complex-numbers-usage-in-python) whose magnitude specifies the number of points you want in the series. Using this syntax, the same arrays as above are specified as:
```
X, Y = np.mgrid[-5:5:21j, -5:5:21j]
```
---
You can then create your pairs as:
```
xy = np.vstack((X.flatten(), Y.flatten())).T
```
As @ali\_m suggested, this can all be done in one line:
```
xy = np.mgrid[-5:5.1:0.5, -5:5.1:0.5].reshape(2,-1).T
```
Best of luck! |
Why can a floating point dictionary key overwrite an integer key with the same value? | 32,209,155 | 27 | 2015-08-25T16:22:07Z | 32,209,354 | 13 | 2015-08-25T16:32:05Z | [
"python",
"dictionary",
"floating-point",
"int"
] | I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code:
```
confusion = {}
confusion[1] = 1
confusion['1'] = 2
confusion[1.0] = 4
sum = 0
for k in confusion:
sum += confusion[k]
print sum
```
The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature? | You should consider that the `dict` aims at storing data depending on the logical numeric value, not on how you represented it.
The difference between `int`s and `float`s is indeed just an implementation detail and not conceptual. Ideally the only number type should be an arbitrary-precision number with unbounded accuracy, even sub-unity... this is however hard to implement without getting into trouble... but maybe that will be the only future numeric type for Python.
So while having different types for technical reasons Python tries to hide these implementation details and `int`->`float` conversion is automatic.
It would be much more surprising if in a Python program `if x == 1: ...` wasn't going to be taken when `x` is a `float` with value 1.
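For illustration:

```python
d = {}
d[1] = 'int'
d[1.0] = 'float'             # 1.0 == 1, so this overwrites the value...
print(d)                     # {1: 'float'} ...but the first-inserted int key stays
print(hash(1) == hash(1.0))  # True: values that compare equal must hash equally
```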
Note that also with Python 3 the value of `1/2` is `0.5` (the division of two integers) and that the types `long` and non-unicode string have been dropped with the same attempt to hide implementation details. |
Why can a floating point dictionary key overwrite an integer key with the same value? | 32,209,155 | 27 | 2015-08-25T16:22:07Z | 32,211,042 | 24 | 2015-08-25T18:04:15Z | [
"python",
"dictionary",
"floating-point",
"int"
] | I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code:
```
confusion = {}
confusion[1] = 1
confusion['1'] = 2
confusion[1.0] = 4
sum = 0
for k in confusion:
sum += confusion[k]
print sum
```
The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature? | First of all: the behaviour is documented explicitly in the docs for the [hash](https://docs.python.org/3.5/library/functions.html#hash) function:
> **`hash(object)`**
>
> Return the hash value of the object (if it has one). Hash values are
> integers. They are used to quickly compare dictionary keys during a
> dictionary lookup. **Numeric values that compare equal have the same
> hash value (even if they are of different types, as is the case for `1`
> and `1.0`).**
Secondly, a limitation of hashing is pointed out in the docs for [`object.__hash__`](https://docs.python.org/3.5/reference/datamodel.html#object.__hash__)
> **`object.__hash__(self)`**
>
> Called by built-in function [`hash()`](https://docs.python.org/3.5/library/functions.html#hash) and for operations on members of
> hashed collections including `set`, `frozenset`, and `dict`. `__hash__()`
> should return an integer. **The only required property is that objects
> which compare equal have the same hash value;**
This is not unique to python. Java has the same caveat: if you implement `hashCode` then, in order for things to work correctly, you **must** implement it in such a way that: `x.equals(y)` implies `x.hashCode() == y.hashCode()`.
So, python decided that `1.0 == 1` holds, hence it's *forced* to provide an implementation for `hash` such that `hash(1.0) == hash(1)`. The side effect is that `1.0` and `1` act exactly in the same way as `dict` keys, hence the behaviour.
In other words the behaviour in itself doesn't have to be used or useful in any way. **It is necessary**. Without that behaviour there would be cases where you could accidentally overwrite a different key.
If we had `1.0 == 1` but `hash(1.0) != hash(1)` we could still have a *collision*. And if `1.0` and `1` collide, the `dict` will use equality to be sure whether they are the same key or not and *kaboom* the value gets overwritten even if you intended them to be different.
The only way to avoid this would be to have `1.0 != 1`, so that the `dict` is able to distinguish between them even in case of collision. But it was deemed more important to have `1.0 == 1` than to avoid the behaviour you are seeing, since you practically never use `float`s and `int`s as dictionary keys anyway.
Since python tries to hide the distinction between numbers by automatically converting them when needed (e.g. `1/2 -> 0.5`) it makes sense that this behaviour is reflected even in such circumstances. It's more consistent with the rest of python.
---
This behaviour would appear in *any* implementation where the matching of the keys is at least partially (as in a hash map) based on comparisons.
For example if a `dict` was implemented using a red-black tree or another kind of balanced BST, when the key `1.0` is looked up the comparisons with other keys would return the same results as for `1` and so they would still act in the same way.
Hash maps require even more care because of the fact that it's the value of the hash that is used to find the entry of the key and comparisons are done only afterwards. So breaking the rule presented above means you'd introduce a bug that's quite hard to spot because at times the `dict` may seem to work as you'd expect it, and at other times, when the size changes, it would start to behave incorrectly.
---
Note that there *would* be a way to fix this: have a separate hash map/BST for each type inserted in the dictionary. In this way there couldn't be any collisions between objects of different type and how `==` compares wouldn't matter when the arguments have different types.
However, this would complicate the implementation and it would probably be inefficient, since hash maps have to keep quite a few free locations in order to have O(1) access times. If they become too full, performance decreases. Having multiple hash maps means wasting more space, and you'd also need to first choose which hash map to look at before even starting the actual lookup of the key.
If you used BSTs, you'd first have to look up the type and then perform a second lookup. So if you are going to use many types you'd end up with twice the work (and the lookup would take O(log n) instead of O(1)).
Can I add arguments to python code when I submit spark job? | 32,217,160 | 4 | 2015-08-26T02:43:08Z | 32,217,235 | 9 | 2015-08-26T02:50:08Z | [
"python",
"apache-spark"
] | I'm trying to use `spark-submit` to execute my python code in a Spark cluster.
Generally we run `spark-submit` with python code like below.
```
# Run a Python application on a cluster
./bin/spark-submit \
--master spark://207.184.161.138:7077 \
my_python_code.py \
1000
```
But I wanna run `my_python_code.py`by passing several arguments Is there smart way to pass arguments? | **Yes**: Put this in a file called args.py
```
import sys
print sys.argv
```
If you run
```
spark-submit args.py a b c d e
```
You will see:
```
['/spark/args.py', 'a', 'b', 'c', 'd', 'e']
``` |
How to know which version of PyMongo is running on my project | 32,221,694 | 3 | 2015-08-26T08:34:48Z | 32,221,874 | 7 | 2015-08-26T08:43:42Z | [
"python",
"mongodb",
"pymongo"
] | I'm developing a Python project; in the requirements file I have three different PyMongo-related packages:
```
Flask-PyMongo==0.3.1
pymongo==2.7
flask-mongoengine==0.7.1
```
How can I determine which version I'm using? | If you have `pip` installed, you can try this in a terminal:
```
$ pip freeze | grep pymongo
pymongo==3.0.2
``` |
Computing mean and variance of numpy memmap Infinity output | 32,227,847 | 2 | 2015-08-26T13:21:16Z | 32,229,510 | 7 | 2015-08-26T14:33:10Z | [
"python",
"numpy",
"memory"
] | Creation of memmap array:
```
out = np.memmap('my_array.mmap', dtype=np.float16, mode='w+', shape=(num_axis1, num_axis2))
for index,row in enumerate(temp_train_data):
__,cd_i=pywt.dwt(X_train[index:index+1001].ravel(),'haar')
out[index]=(cd_i)
(Pdb) out.shape
(1421392L, 3504L)
```
Now, I simply feature scale this array. Subtract by mean and divide by variance.
```
np.mean(out[:,1])
memmap(inf, dtype=float16)
```
The output is
`memmap(inf, dtype=float16)`
I don't understand why!
Reproducible example:
```
import numpy as np
ut = np.memmap('my_array.mmap', dtype=np.float16, mode='w+',\
shape=(140000, 3504))
for index,row in enumerate(ut):
ut[index]=np.random.rand(1,3504)*10
print np.max(ut[:,1])
print np.mean(ut[:,1],axis=0)
```
> 10.0
>
> inf | [According to Wikipedia](https://en.wikipedia.org/wiki/Half-precision_floating-point_format#Precision_limitations_on_integer_values), the `float16` data type can't handle integers larger than 65520. The sum of all the values in your collection is probably larger than that, so it gets rounded up to infinity when calculating the mean.
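You can reproduce the overflow with plain NumPy scalars (a quick sketch):

```python
import numpy as np

a = np.float16(65504)             # largest finite float16 value
with np.errstate(over='ignore'):  # silence the overflow RuntimeWarning
    total = a + a
print(total)            # inf -- the float16 sum exceeds the representable range
print(np.isinf(total))  # True
```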
Consider using a data type that has a higher upper limit. For example, `float32`.
```
ut = np.memmap('my_array.mmap', dtype=np.float32, mode='w+',\
shape=(140000, 3504))
```
When I make this change, I get some nice non-infinite output:
```
9.99996471405
4.9927
```
---
Alternatively, if you really have to have a `float16` array, you can specify the type used to calculate the mean:
```
print np.mean(ut[:,1],axis=0,dtype=np.float32)
``` |
Django ImportError: No module named middleware | 32,230,490 | 4 | 2015-08-26T15:17:16Z | 32,232,787 | 8 | 2015-08-26T17:16:24Z | [
"python",
"django",
"django-settings",
"django-middleware"
] | I am using Django version 1.8 and python 2.7. I am getting the following error after running my project.
```
Traceback (most recent call last):
File "C:\Python27\lib\wsgiref\handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "C:\Python27\lib\site-packages\django\contrib\staticfiles\handlers.py", line 63, in __call__
return self.application(environ, start_response)
File "C:\Python27\lib\site-packages\django\core\handlers\wsgi.py", line 170, in __call__
self.load_middleware()
File "C:\Python27\lib\site-packages\django\core\handlers\base.py", line 50, in load_middleware
mw_class = import_string(middleware_path)
File "C:\Python27\lib\site-packages\django\utils\module_loading.py", line 26, in import_string
module = import_module(module_path)
File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named middleware
[26/Aug/2015 20:34:29] "GET /favicon.ico HTTP/1.1" 500 59
```
This is my settings.py file
```
"""
Django settings for collageapp project.
Generated by 'django-admin startproject' using Django 1.8.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
APP_PATH = os.path.dirname(os.path.abspath(__file__))
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '******************************************************'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'manageApp',
'django.contrib.sites',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.facebook',
'allauth.socialaccount.providers.google',
'django.contrib.admindocs',
'rest_framework',
)
SITE_ID = 1
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'corsheaders.middleware.CorsMiddleware',
'oauth2_provider.middleware.OAuth2TokenMiddleware',
)
ROOT_URLCONF = 'collageapp.urls'
CORS_ORIGIN_ALLOW_ALL = True
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'allauth.account.context_processors.account',
'allauth.socialaccount.context_processors.socialaccount'
],
},
},
]
WSGI_APPLICATION = 'collageapp.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'college_app',
'USER': 'root',
'PASSWORD': '',
'HOST': 'localhost', # Or an IP Address that your DB is hosted on
'PORT': '3306',
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
```
I have tried searching the error but in vain. The same code is working fine on another machine. | Open up a Python shell by running `python manage.py shell` in your project directory.
Run the following commands **one at a time** in the python shell:
```
>>> from corsheaders.middleware import CorsMiddleware
>>> from oauth2_provider.middleware import OAuth2TokenMiddleware
>>> from django.contrib.auth.middleware import SessionAuthenticationMiddleware
```
One of the lines should give you an error like the following:
```
Traceback (most recent call last):
File "<console>", line 1, in <module>
ImportError: No module named middleware
```
The line that gives you that error is the missing module that is giving you the problem.
To find the path where the modules are searched, do
```
>>> import sys; sys.path
```
---
Alternatively, if you don't know how to use the python shell, you could just remove the following lines in your `settings.MIDDLEWARE_CLASSES` one at a time until you don't get that error anymore:
```
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'corsheaders.middleware.CorsMiddleware',
'oauth2_provider.middleware.OAuth2TokenMiddleware',
```
---
Just reinstall the package that gave you the error.
`django.contrib.auth.middleware` -> `django`
`corsheaders.middleware` -> `corsheaders`
`oauth2_provider.middleware` -> `oauth2_provider` |
What makes an element eligible for a set membership test in Python? | 32,232,182 | 5 | 2015-08-26T16:42:30Z | 32,232,229 | 7 | 2015-08-26T16:46:00Z | [
"python",
"collections",
"set"
] | I would like to understand which items can be tested for `set` membership in Python. In general, set membership testing works like `list` membership testing in Python.
```
>>> 1 in {1,2,3}
True
>>> 0 in {1,2,3}
False
>>>
```
However, sets are different from lists in that they cannot contain unhashable objects, for example nested sets.
**List, okay:**
```
>>> [1,2,{1,2}]
[1, 2, {1, 2}]
>>>
```
**Set, does not work because unhashable:**
```
>>> {1,2,{1,2}}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'set'
>>>
```
Now, even if sets cannot be members of other sets, we can use them in membership tests. This will always return `False` because sets cannot be members, but such a check does not result in an error.
```
>>> {1} in {1,2,3}
False
>>> {1,2} in {1,2,3}
False
>>> set() in {1,2,3}
False
>>>
```
However, if I try to do the same test where the element being tested is a `dict`, I get an error which suggests that the element being tested cannot be unhashable.
```
>>> {'a':1} in {1,2}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>> {} in {1,2}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
```
That cannot be the whole story, because a `set` **can** be tested for membership in another set even if it is itself unhashable, giving the result `False` rather than an error.
So the question is: **What makes an element eligible for a set membership test in Python?** | The confusion comes because when you say 'if set in set', I think python is casting the left hand set to a frozenset and then testing that. E.g.
```
>>> f = frozenset({1})
>>> f
frozenset([1])
>>> x = {f, 2, 3}
>>> {1} in x
True
```
However, there is no equivalent to frozenset for a dict, so it cannot convert the dict to an immutable object for the membership test, and thus it fails.
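You can check the asymmetry yourself — a set has a hashable frozen counterpart, while a dict has none:

```python
s = {1, 2}
print(hash(frozenset(s)) == hash(frozenset({1, 2})))  # True -- frozenset is hashable

d = {'a': 1}
try:
    hash(d)
except TypeError as e:
    print(e)  # unhashable type: 'dict'
```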
I don't know the 'rule' that's followed here - whether there's some general **method** that can be overridden to provide the immutable conversion, or if this behaviour is hardcoded to the specific case of set in set. |
Command prompt can't write letter by letter? | 32,233,636 | 10 | 2015-08-26T18:04:20Z | 32,233,670 | 10 | 2015-08-26T18:06:34Z | [
"python",
"python-3.x"
] | ```
import time
def textinput(txt,waittime=0.04):
end = len(txt)
letters = 0
while end != letters:
print(txt[letters], end = '')
letters += 1
time.sleep(waittime)
textinput('Hello there!')
```
This is basically my function for writing words letter by letter, it works flawlessly on IDLE when testing it, however, when I run it normally (and it opens up the command prompt), what I'm trying to write stays invisible, and then suddenly displays the words at once. Did I type something wrong, or is it command prompt's issue? I'm using windows 10. | Output is probably buffered, trying flushing it by adding the following line after your print:
```
sys.stdout.flush()
``` |
Command prompt can't write letter by letter? | 32,233,636 | 10 | 2015-08-26T18:04:20Z | 32,233,850 | 16 | 2015-08-26T18:15:47Z | [
"python",
"python-3.x"
] | ```
import time
def textinput(txt,waittime=0.04):
end = len(txt)
letters = 0
while end != letters:
print(txt[letters], end = '')
letters += 1
time.sleep(waittime)
textinput('Hello there!')
```
This is basically my function for writing words letter by letter, it works flawlessly on IDLE when testing it, however, when I run it normally (and it opens up the command prompt), what I'm trying to write stays invisible, and then suddenly displays the words at once. Did I type something wrong, or is it command prompt's issue? I'm using windows 10. | You don't need to use `sys`, you just need `flush=True`:
```
def textinput(txt,waittime=0.4):
for letter in txt:
print(letter, end = '',flush=True)
time.sleep(waittime)
```
You can also simply iterate over the string itself. |
How to get rows from DF that contain value None in pyspark (spark) | 32,236,135 | 3 | 2015-08-26T20:28:50Z | 32,236,207 | 7 | 2015-08-26T20:33:31Z | [
"python",
"apache-spark",
"pyspark"
In the example below, the `df.a == 1` predicate returns the correct result, but `df.a == None` returns 0 when it should return 1.
```
l = [[1], [1], [2], [2], [None]]
df = sc.parallelize(l).toDF(['a'])
df # DataFrame[a: bigint]
df.collect() # [Row(a=1), Row(a=1), Row(a=2), Row(a=2), Row(a=None)]
df.where(df.a == 1).count() # 2L
df.where(df.a == None).count() # 0L
```
Using Spark 1.3.1 | You can use [`Column.isNull`](https://spark.apache.org/docs/1.3.1/api/python/pyspark.sql.html#pyspark.sql.Column.isNull) method:
```
df.where(df.a.isNull()).count()
```
On a side note this behavior is what one could expect from a [normal SQL query](http://sqlfiddle.com/#!15/6c1d3/1). Since `NULL` marks *"missing information and inapplicable information"* [1] it doesn't make sense to ask if something is equal to `NULL`. It simply either `IS` or `IS NOT` missing.\
Scala API provides special null-safe equality `<=>` operator so it is possible to do something like this:
```
df.where($"a" <=> lit(null))
```
but it doesn't look like a good idea if you ask me.
1.[Wikipedia, Null (SQL)](https://en.wikipedia.org/wiki/Null_%28SQL%29) |
python flask redirect to https from http | 32,237,379 | 2 | 2015-08-26T21:57:37Z | 32,238,093 | 7 | 2015-08-26T23:03:19Z | [
"python",
"ssl",
"flask"
] | I have a website build using python3.4 and flask...I have generated my own self-signed certificate and I am currently testing my website through localhost.
I am using the python ssl module along with this flask extension: <https://github.com/kennethreitz/flask-sslify>
```
context = ('my-cert.pem', 'my-key.pem')
app = Flask(__name__)
sslify = SSLify(app)
...
if __name__ == '__main__':
app.debug = False
app.run(
host="127.0.0.1",
port=int("5000"),
ssl_context=context
)
```
This does not seem to be working however. I took a look in the sslify source code and this line does not seem to be working
```
def init_app(self, app):
"""Configures the configured Flask app to enforce SSL."""
app.before_request(self.redirect_to_ssl)
app.after_request(self.set_hsts_header)
```
Specifically the function call to redirect\_to\_ssl (I added my own print statement under the redirect\_to\_ssl function and my statement was never printed)
```
def redirect_to_ssl(self):
print("THIS IS WORKING")
"""Redirect incoming requests to HTTPS."""
        # Should we redirect?
criteria = [
request.is_secure,
current_app.debug,
request.headers.get('X-Forwarded-Proto', 'http') == 'https'
]
if not any(criteria) and not self.skip:
if request.url.startswith('http://'):
url = request.url.replace('http://', 'https://', 1)
code = 302
if self.permanent:
code = 301
r = redirect(url, code=code)
return r
```
I am pretty new to python. Any ideas? | To me, it appears you're making it more complicated than it needs to be. Here is the code I use in my views.py script to force user to HTTPS connections:
```
@app.before_request
def before_request():
if request.url.startswith('http://'):
url = request.url.replace('http://', 'https://', 1)
code = 301
return redirect(url, code=code)
``` |
Find the closest date to a given date | 32,237,862 | 2 | 2015-08-26T22:39:47Z | 32,237,949 | 15 | 2015-08-26T22:48:51Z | [
"python",
"date",
"datetime"
] | I have an array of datetime objects, and I would like to find which element in the array is the closest to a given date (e.g `datetime.datetime(2014,12,16)`)
[This](http://stackoverflow.com/questions/17249220/getting-the-closest-date-to-a-given-date) post shows how to find the nearest date *which is not before the given date*. How can I alter this code so that it can return dates that are before a given date?
For example, if the array housed elements `datetime.datetime(2014,12,10)` and `datetime.datetime(2014,12,28)`, the former item should be returned because it is closest to `datetime.datetime(2014,12,16)` in absolute value. | ```
def nearest(items, pivot):
return min(items, key=lambda x: abs(x - pivot))
```
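For example, with the dates from the question (the function is repeated here so the snippet is self-contained):

```python
import datetime

def nearest(items, pivot):
    return min(items, key=lambda x: abs(x - pivot))

dates = [datetime.datetime(2014, 12, 10), datetime.datetime(2014, 12, 28)]
print(nearest(dates, datetime.datetime(2014, 12, 16)))  # 2014-12-10 00:00:00
```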
*Note that this will work for numbers too.* |
Why does this Jython loop fail after a single run? | 32,239,955 | 15 | 2015-08-27T02:58:08Z | 32,383,489 | 7 | 2015-09-03T19:01:56Z | [
"java",
"python",
"loops",
"exception-handling",
"jython"
] | I've got the following code:
```
public static String getVersion()
{
PythonInterpreter interpreter = new PythonInterpreter();
try
{
interpreter.exec(IOUtils.toString(new FileReader("./Application Documents/Scripts/Version.py")));
PyObject get_version = interpreter.get("get_latest_version");
PyObject result = get_version.__call__(interpreter.get("url"));
String latestVersion = (String) result.__tojava__(String.class);
interpreter.close();
return latestVersion;
} catch (IOException ex) {
ex.printStackTrace();
interpreter.close();
return Version.getLatestVersionOnSystem();
}
```
For the sake of completeness, I'm adding the Python code:
```
import urllib2 as urllib
import warnings
url = 'arcticlights.ca/api/paint&requests?=version'
def get_latest_version(link=url):
request = urllib.Request(link)
    handler = urllib.urlopen(request)
if handler.code is not 200:
warnings.warn('Invalid Status Code', RuntimeWarning)
return handler.read()
version = get_latest_version()
```
It works flawlessly, but only 10% of the time. If I run it with a main like follows:
```
public static void main(String[] args)
{
for (int i = 0; i < 10; i++) {
System.out.println(getVersion());
}
}
```
It works the first time. It gives me the output that I want, which is the data from the http request that is written in my `Versions.py` file, which the java code above calls. After the second time, it throws this massive error (which is 950 lines long, but of course, I won't torture you guys). Here's the gist of it:
```
Aug 26, 2015 10:41:21 PM org.python.netty.util.concurrent.DefaultPromise execute
SEVERE: Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
```
My Python traceback that is supplied at the end of the 950 line Java stack trace is mostly this:
```
File "<string>", line 18, in get_latest_version
urllib2.URLError: <urlopen error [Errno -1] Unmapped exception: java.util.concurrent.RejectedExecutionException: event executor terminated>
```
If anyone is curious, the seemingly offending line in my `get_latest_version` is just:
```
handler = urllib2.urlopen(request)
```
Since the server that the code is calling is being run (by cherrypy) on the localhost on my network, I can see how it is interacting with my server. It actually sends two requests (and throws the exception right after the second).
```
127.0.0.1 - - [26/Aug/2015:22:41:21] "GET / HTTP/1.1" 200 3 "" "Python-urllib/2.7"
127.0.0.1 - - [26/Aug/2015:22:41:21] "GET / HTTP/1.1" 200 3 "" "Python-urllib/2.7"
```
While I'm likely never going to run this code in a loop, I'm quite curious as to two things:
* Is the offending code my Python or Java code? Or could it just be an issue with Jython altogether?
* What does the exception mean (it looks like a Java exception)? Why is it being thrown when it is? Is there a way to make a loop like this work? Could this be written better? | Jython's `urllib2` library, which you use, relies on `Netty` internally.
`Netty` has a problem, which is widely known:
* [Hopper: java.util.concurrent.RejectedExecutionException: event executor terminated](https://bugs.mojang.com/browse/MC-38028)
* [Error recurrent : DefaultPromise Failed to notify a listener. Event loop shut down?](https://github.com/mrniko/netty-socketio/issues/91)
* [Calling HttpClient.shutdown() causes problem to later created clients](https://github.com/ReactiveX/RxNetty/issues/195)
* [Shutting down netty 4 application throws RejectedExecutionException](http://stackoverflow.com/questions/20541433/shutting-down-netty-4-application-throws-rejectedexecutionexception)
According to all of these links `Netty` `HttpClient` fails from time to time after closing. It looks like `Netty` recovers after some time and some applications work just normally with this problem. Anyway it looks unstable.
---
Q: Is the offending code my Python or Java code? Or could it just be an issue with Jython altogether?
A: The problem is caused by Jython library `urllib2`, which uses `Netty`.
---
Q: What does the exception mean (it looks like a java exception)? Why is it being thrown when it is?
A: `urllib2` uses internally `Netty`. `Netty` is written in Java and throws this Java exception. `Netty` uses its own Thread Executor, which is shut down and unusable for some time after closing a request. You hit exactly this time.
---
Q: Is there a way to make a loop like this work? Could this be written better?
A: I would try to use [Requests](http://docs.python-requests.org/en/latest/index.html) library. |
How to rotate x-axis label in Pandas barplot | 32,244,019 | 9 | 2015-08-27T08:11:34Z | 32,244,161 | 28 | 2015-08-27T08:18:49Z | [
"python",
"pandas",
"matplotlib"
] | With the following code:
```
import matplotlib
matplotlib.style.use('ggplot')
import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame({ 'celltype':["foo","bar","qux","woz"], 's1':[5,9,1,7], 's2':[12,90,13,87]})
df = df[["celltype","s1","s2"]]
df.set_index(["celltype"],inplace=True)
df.plot(kind='bar',alpha=0.75)
plt.xlabel("")
```
I made this plot:
[](http://i.stack.imgur.com/6pLLq.jpg)
As stated there, how can I set the x-axis labels to 0 degrees rotation?
I tried adding this but did not work:
```
plt.set_xticklabels(df.index,rotation=90)
``` | Pass param [`rot=0`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html#pandas.DataFrame.plot) to rotate the xticks:
```
import matplotlib
matplotlib.style.use('ggplot')
import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame({ 'celltype':["foo","bar","qux","woz"], 's1':[5,9,1,7], 's2':[12,90,13,87]})
df = df[["celltype","s1","s2"]]
df.set_index(["celltype"],inplace=True)
df.plot(kind='bar',alpha=0.75, rot=0)
plt.xlabel("")
plt.show()
```
yields plot:
[](http://i.stack.imgur.com/JPdxv.png) |
How to save a Seaborn plot into a file | 32,244,753 | 20 | 2015-08-27T08:51:07Z | 32,245,025 | 8 | 2015-08-27T09:03:19Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] | I tried the following code (`test_seaborn.py`):
```
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
import seaborn as sns
sns.set()
df = sns.load_dataset('iris')
sns_plot = sns.pairplot(df, hue='species', size=2.5)
fig = sns_plot.get_figure()
fig.savefig("output.png")
#sns.plt.show()
```
But I get this error:
```
Traceback (most recent call last):
File "test_searborn.py", line 11, in <module>
fig = sns_plot.get_figure()
AttributeError: 'PairGrid' object has no attribute 'get_figure'
```
I expect the final `output.png` will exist and look like this:
[](http://i.stack.imgur.com/n6uXd.png)
How can I resolve the problem? | You should just be able to use the `savefig` method of `sns_plot` directly.
```
sns_plot.savefig("output.png")
```
For clarity with your code if you did want to access the matplotlib figure that `sns_plot` resides in then you can get it directly with
```
fig = sns_plot.fig
```
In this case there is no `get_figure` method as your code assumes. |
How to save a Seaborn plot into a file | 32,244,753 | 20 | 2015-08-27T08:51:07Z | 32,245,026 | 24 | 2015-08-27T09:03:20Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] | I tried the following code (`test_seaborn.py`):
```
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
import seaborn as sns
sns.set()
df = sns.load_dataset('iris')
sns_plot = sns.pairplot(df, hue='species', size=2.5)
fig = sns_plot.get_figure()
fig.savefig("output.png")
#sns.plt.show()
```
But I get this error:
```
Traceback (most recent call last):
File "test_searborn.py", line 11, in <module>
fig = sns_plot.get_figure()
AttributeError: 'PairGrid' object has no attribute 'get_figure'
```
I expect the final `output.png` will exist and look like this:
[](http://i.stack.imgur.com/n6uXd.png)
How can I resolve the problem? | Remove the `get_figure` and just use `sns_plot.savefig('output.png')`
```
df = sns.load_dataset('iris')
sns_plot = sns.pairplot(df, hue='species', size=2.5)
sns_plot.savefig("output.png")
``` |
Complexity of len() with regard to sets and lists | 32,248,882 | 44 | 2015-08-27T12:03:42Z | 32,249,047 | 18 | 2015-08-27T12:11:42Z | [
"python",
"python-3.x",
"time-complexity",
"python-internals"
] | The complexity of `len()` with regards to sets and lists is equally O(1). How come it takes more time to process sets?
```
~$ python -m timeit "a=[1,2,3,4,5,6,7,8,9,10];len(a)"
10000000 loops, best of 3: 0.168 usec per loop
~$ python -m timeit "a={1,2,3,4,5,6,7,8,9,10};len(a)"
1000000 loops, best of 3: 0.375 usec per loop
```
Is it related to the particular benchmark, as in, it takes more time to build sets than lists and the benchmark takes that into account as well?
If the creation of a set object takes more time compared to creating a list, what would be the underlying reason? | The relevant lines are <http://svn.python.org/view/python/trunk/Objects/setobject.c?view=markup#l640>
```
640 static Py_ssize_t
641 set_len(PyObject *so)
642 {
643 return ((PySetObject *)so)->used;
644 }
```
and <http://svn.python.org/view/python/trunk/Objects/listobject.c?view=markup#l431>
```
431 static Py_ssize_t
432 list_length(PyListObject *a)
433 {
434 return Py_SIZE(a);
435 }
```
Both are only a static lookup.
So what is the difference, you may ask? You measure the creation of the objects, too. And it is a little more time-consuming to create a set than a list. |
Complexity of len() with regard to sets and lists | 32,248,882 | 44 | 2015-08-27T12:03:42Z | 32,249,059 | 101 | 2015-08-27T12:12:17Z | [
"python",
"python-3.x",
"time-complexity",
"python-internals"
] | The complexity of `len()` with regards to sets and lists is equally O(1). How come it takes more time to process sets?
```
~$ python -m timeit "a=[1,2,3,4,5,6,7,8,9,10];len(a)"
10000000 loops, best of 3: 0.168 usec per loop
~$ python -m timeit "a={1,2,3,4,5,6,7,8,9,10};len(a)"
1000000 loops, best of 3: 0.375 usec per loop
```
Is it related to the particular benchmark, as in, it takes more time to build sets than lists and the benchmark takes that into account as well?
If the creation of a set object takes more time compared to creating a list, what would be the underlying reason? | **Firstly,** you have not measured the speed of `len()`, you have measured the speed of creating a list/set *together with* the speed of `len()`.
Use the `--setup` argument of `timeit`:
```
$ python -m timeit --setup "a=[1,2,3,4,5,6,7,8,9,10]" "len(a)"
10000000 loops, best of 3: 0.0369 usec per loop
$ python -m timeit --setup "a={1,2,3,4,5,6,7,8,9,10}" "len(a)"
10000000 loops, best of 3: 0.0372 usec per loop
```
The statements you pass to `--setup` are run before measuring the speed of `len()`.
**Secondly,** you should note that `len(a)` is a pretty quick statement. The process of measuring its speed may be subject to "noise". Consider that [the code executed (and measured) by timeit](https://hg.python.org/cpython/file/3.5/Lib/timeit.py#l70) is equivalent to the following:
```
for i in itertools.repeat(None, number):
len(a)
```
Because both `len(a)` and `itertools.repeat(...).__next__()` are fast operations and their speeds may be similar, the speed of `itertools.repeat(...).__next__()` may influence the timings.
For this reason, you'd better measure `len(a); len(a); ...; len(a)` (repeated 100 times or so) so that the body of the for loop takes a considerably higher amount of time than the iterator:
```
$ python -m timeit --setup "a=[1,2,3,4,5,6,7,8,9,10]" "$(for i in {0..1000}; do echo "len(a)"; done)"
10000 loops, best of 3: 29.2 usec per loop
$ python -m timeit --setup "a={1,2,3,4,5,6,7,8,9,10}" "$(for i in {0..1000}; do echo "len(a)"; done)"
10000 loops, best of 3: 29.3 usec per loop
```
(The results still says that `len()` has the same performances on lists and sets, but now you are sure that the result is correct.)
**Thirdly,** it's true that "complexity" and "speed" are related, but I believe you are making some confusion. The fact that `len()` has *O(1)* complexity for lists and sets does not imply that it must run with the same speed on lists and sets.
It means that, on average, no matter how long the list `a` is, `len(a)` performs the same asymptotic number of steps. And no matter how long the set `b` is, `len(b)` performs the same asymptotic number of steps. But the algorithm for computing the size of lists and sets may be different, resulting in different performances (timeit shows that this is not the case, however this may be a possibility).
**Lastly,**
> If the creation of a set object takes more time compared to creating a list, what would be the underlying reason?
A set, as you know, does not allow repeated elements. Sets in CPython are implemented as hash tables (to ensure average *O(1)* insertion and lookup): constructing and maintaining a hash table is much more complex than adding elements to a list.
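A quick (machine-dependent) sanity check of this construction cost, timing only the creation of each container:

```python
import timeit

# Time only the construction; absolute numbers vary by machine,
# but the set literal is consistently slower than the list literal.
list_t = timeit.timeit("a = [1,2,3,4,5,6,7,8,9,10]", number=1000000)
set_t = timeit.timeit("a = {1,2,3,4,5,6,7,8,9,10}", number=1000000)
print("list creation:", list_t)
print("set creation: ", set_t)
```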
Specifically, when constructing a set, you have to compute hashes, build the hash table, look it up to avoid inserting duplicated events and so on. By contrast, lists in CPython are implemented as a simple array of pointers that is `malloc()`ed and `realloc()`ed as required. |
Python and functional programming: is there an apply() function? | 32,249,197 | 7 | 2015-08-27T12:18:39Z | 32,249,304 | 11 | 2015-08-27T12:23:07Z | [
"python",
"functional-programming"
] | Scala has the [`apply()`](http://stackoverflow.com/questions/9737352/what-is-the-apply-function-in-scala) function.
I am new to Python and I am wondering how should I write the following one-liner:
```
(part_a, part_b) = (lambda x: re.search(r"(\w+)_(\d+)", x).groups())(input_string)
```
I would feel better with something like:
```
(part_a, part_b) = input_string.apply(lambda x: re.search(r"(\w+)_(\d+)", x).groups())
```
Am I wrong from an FP viewpoint? Is there such a construction in Python?
*Edit: I know about the poorly picked snippet.* | When writing Haskell write Haskell. When writing Python just write Python:
```
part_a, part_b = re.search(r"(\w+)_(\d+)", input_string).groups()
``` |
Installing iPython: "ImportError cannot import name path"? | 32,252,122 | 23 | 2015-08-27T14:27:47Z | 32,252,578 | 25 | 2015-08-27T14:46:53Z | [
"python",
"ipython"
] | I'm trying to install IPython. I have run `pip install ipython[notebook]` without any errors, but now I get this:
```
$ ipython notebook
Traceback (most recent call last):
File "/Users/me/.virtualenvs/.venv/bin/ipython", line 7, in <module>
from IPython import start_ipython
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/IPython/__init__.py", line 48, in <module>
from .terminal.embed import embed
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/IPython/terminal/embed.py", line 16, in <module>
from IPython.core.interactiveshell import DummyMod
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 31, in <module>
from pickleshare import PickleShareDB
File "/Users/me/.virtualenvs/.venv/lib/python2.7/site-packages/pickleshare.py", line 41, in <module>
from path import path as Path
ImportError: cannot import name path
```
I have the same error if I try to run `import pickleshare` at a Python console, or `from path import path`.
What can I do to fix this? | Looks like this is a [known issue](https://github.com/pickleshare/pickleshare/issues/8), caused by a change in the `path.py` package. Reverting to an older version of `path.py` solves this :
```
sudo pip3 install -I path.py==7.7.1
``` |
Shortest way to get a value from a dictionary or a default if the key is not present | 32,256,997 | 2 | 2015-08-27T18:33:12Z | 32,257,017 | 11 | 2015-08-27T18:34:41Z | [
"python",
"dictionary"
] | I am just curious if
```
p = 'padding' in ui and ui['padding'] or 0
```
is the shortest way (or if there is an even shorter way) of writing
```
if 'padding' in ui:
p = ui['padding']
else:
p = 0
```
in Python. | If `ui` is a dictionary, you can even simplify that with the [`dict.get`](https://docs.python.org/3/library/stdtypes.html#dict.get) method, like this
```
p = ui.get('padding', 0)
```
Here, if the key `padding` exists in the dictionary, then the value corresponding to that will be returned. Otherwise, the default value passed as the second parameter will be returned. |
Why is bytearray not a Sequence in Python 2? | 32,258,275 | 12 | 2015-08-27T19:51:37Z | 32,258,443 | 7 | 2015-08-27T20:02:38Z | [
"python",
"bytearray",
"python-internals",
"abc"
] | I'm seeing a weird discrepancy in behavior between Python 2 and 3.
In Python 3 things seem to work fine:
```
Python 3.5.0rc2 (v3.5.0rc2:cc15d736d860, Aug 25 2015, 04:45:41) [MSC v.1900 32 b
it (Intel)] on win32
>>> from collections import Sequence
>>> isinstance(bytearray(b"56"), Sequence)
True
```
But not in Python 2:
```
Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on wi
n32
>>> from collections import Sequence
>>> isinstance(bytearray("56"), Sequence)
False
```
The results seem to be consistent across minor releases of both Python 2.x and 3.x. Is this a known bug? Is it a bug at all? Is there any logic behind this difference?
I am actually more worried about the C API function [`PySequence_Check`](https://docs.python.org/2/c-api/sequence.html#c.PySequence_Check) properly identifying an object of type `PyByteArray_Type` as exposing the sequence protocol, which by looking at the source code it seems like it should, but any insight into this whole thing is very welcome. | Abstract classes from `collections` use [`ABCMeta.register(subclass)`](https://docs.python.org/3/library/abc.html#abc.ABCMeta.register) to
> Register *subclass* as a "virtual subclass" of this ABC.
In Python 3 `issubclass(bytearray, Sequence)` returns `True` because `bytearray` is explicitly registered as a subclass of `ByteString` (which is derived from `Sequence`) and `MutableSequence`. See the relevant part of [*Lib/\_collections\_abc.py*](https://hg.python.org/cpython/file/b76346142b49/Lib/_collections_abc.py#l870):
```
class ByteString(Sequence):
"""This unifies bytes and bytearray.
XXX Should add all their methods.
"""
__slots__ = ()
ByteString.register(bytes)
ByteString.register(bytearray)
...
MutableSequence.register(bytearray) # Multiply inheriting, see ByteString
```
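The registration mechanism itself is easy to demonstrate in isolation (using the Python 3 `collections.abc` location; the class name here is made up):

```python
from collections.abc import Sequence

class MyWords:
    """Sequence-like, but does not inherit from Sequence."""
    def __getitem__(self, index):
        return ('a', 'b', 'c')[index]
    def __len__(self):
        return 3

print(issubclass(MyWords, Sequence))    # False -- not registered yet
Sequence.register(MyWords)              # register as a "virtual subclass"
print(issubclass(MyWords, Sequence))    # True
print(isinstance(MyWords(), Sequence))  # True
```

This is exactly what `ByteString.register(bytearray)` does for `bytearray` in Python 3, and what Python 2 never does.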
Python 2 doesn't do that (from [*Lib/\_abcoll.py*](https://hg.python.org/cpython/file/v2.7.10/Lib/_abcoll.py#l631)):
```
Sequence.register(tuple)
Sequence.register(basestring)
Sequence.register(buffer)
Sequence.register(xrange)
...
MutableSequence.register(list)
```
This behaviour was changed in Python 3.0 (in [this commit](https://hg.python.org/cpython/rev/fe5b2edffae7) specifically):
> Add ABC *ByteString* which unifies *bytes* and *bytearray* (but not *memoryview*).
> There's no ABC for "PEP 3118 style buffer API objects" because there's no
> way to recognize these in Python (apart from trying to use *memoryview()*
> on them).
And there's more information in [PEP 3119](https://www.python.org/dev/peps/pep-3119/):
> This is a proposal to add Abstract Base Class (ABC) support to Python
> 3000. It proposes:
> [...]
> Specific ABCs for containers and iterators, to be added to the
> collections module.
>
> Much of the thinking that went into the proposal is not about the
> specific mechanism of ABCs, as contrasted with Interfaces or Generic
> Functions (GFs), but about clarifying philosophical issues like "what
> makes a set", "what makes a mapping" and "what makes a sequence".
>
> [...] a metaclass for use with ABCs that will allow us to add an ABC as a "virtual base class" (not the same concept as in C++) to any class, including to another ABC. This allows the standard library to define ABCs *Sequence* and *MutableSequence* and register these as virtual base classes for built-in types like *basestring*, *tuple* and *list*, so that for example the following conditions are all true: [...] *issubclass(bytearray, MutableSequence)*.
Just FYI `memoryview` was registered as a subclass of `Sequence` only in Python 3.4:
> There's no ducktyping for this due to the *Sequence*/*Mapping* confusion
> so it's a simple missing explicit registration.
(see [issue18690](http://bugs.python.org/issue18690) for details).
---
[`PySequence_Check`](https://hg.python.org/cpython/file/b76346142b49/Objects/abstract.c#l1375) from Python C API does not rely on the `collections` module:
```
int
PySequence_Check(PyObject *s)
{
if (PyDict_Check(s))
return 0;
return s != NULL && s->ob_type->tp_as_sequence &&
s->ob_type->tp_as_sequence->sq_item != NULL;
}
```
It checks for non-zero `tp_as_sequence` field ([example for `bytearray`](https://hg.python.org/cpython/file/b76346142b49/Objects/bytearrayobject.c#l3116)) and if that succeeds, for non-zero `sq_item` field (which is basically getitem - [example for `bytearray`](https://hg.python.org/cpython/file/b76346142b49/Objects/bytearrayobject.c#l2985)). |
what is function(var1)(var2) in python | 32,258,881 | 4 | 2015-08-27T20:31:10Z | 32,258,909 | 16 | 2015-08-27T20:32:53Z | [
"python"
] | I have a piece of code I pulled from someone I don't understand:
```
def __init__(self, func):
self.func = func
wraps(func)(self)
```
I've seen things like `wraps(func)(self)` several times but never seen it explained. How is there a function with parameters and then another `(var)` thing after it? What does it all mean? Thank you | Functions are [**first-class objects**](https://en.wikipedia.org/wiki/First-class_function) in Python.
You have no doubt encountered this on the command line if you have typed the name only of a function without parentheses.
```
In [2]: a
Out[2]: <function __main__.a>
```
When you see `a(b)(c)` it is a method of chaining:
* The function `a` is defined to return *another function*.
* `a(b)` calls `a` and returns that reference (to a callable function)
* `a(b)(c)` calls whatever function was returned by `a` with `c` as an argument.
This is equivalent to the following:
```
new_func = a(b)
new_func(c)
```
---
An example:
```
In [1]: def multiply_by(x):
...: def multiply_by_x(y):
...: return x * y
...: return multiply_by_x # notice no parens
...:
In [2]: multiply_by(10)
Out[2]: <function __main__.multiply_by_x>
```
Notice when you call this you get a **function object**. (This is what I mean when I am saying a "reference to a function" or the like.)
```
In [3]: multiply_by(10)(5)
Out[3]: 50
```
You're calling the function returned by `multiply_by()` with 5 as an argument, and it's exactly the same as doing:
```
In [4]: multiply_by_10 = multiply_by(10)
In [5]: multiply_by_10(4)
Out[5]: 40
In [6]: multiply_by_10(8)
Out[6]: 80
```
The cool thing about doing this, as you can see from this example, is that now your `multiply_by` function is a factory for functions that multiply by something. Above, we created `multiply_by_10` which, obviously, multiplies what you feed it by 10. We can just as easily do:
```
In [7]: multiply_by_5 = multiply_by(5)
```
and have a function that multiplies by 5. This is obviously extremely useful. Incidentally, it's also how Python's [decorators](https://www.python.org/dev/peps/pep-0318/) work.
*Thanks @MarkusMeskanen in the comments for pointing out a way to make my silly example into a cooler one!*
See also:
* [What is a closure?](https://programmers.stackexchange.com/questions/40454/what-is-a-closure)
* [Python Decorators Demystified](http://stackoverflow.com/a/1594484/2588818)
* [functools](https://docs.python.org/2/library/functools.html) |
How to add legend on Seaborn facetgrid bar plot | 32,261,619 | 7 | 2015-08-28T00:42:46Z | 32,301,804 | 7 | 2015-08-30T22:33:25Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] | I have the following code:
```
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
import seaborn as sns
sns.set(style="white")
# Create a dataset with many short random walks
rs = np.random.RandomState(4)
pos = rs.randint(-1, 2, (10, 5)).cumsum(axis=1)
pos -= pos[:, 0, np.newaxis]
step = np.tile(range(5), 10)
walk = np.repeat(range(10), 5)
df = pd.DataFrame(np.c_[pos.flat, step, walk],
columns=["position", "step", "walk"])
# Initialize a grid of plots with an Axes for each walk
grid = sns.FacetGrid(df, col="walk", hue="walk", col_wrap=5, size=5,
aspect=1)
# Draw a bar plot to show the trajectory of each random walk
grid.map(sns.barplot, "step", "position", palette="Set3").add_legend();
grid.savefig("/Users/mymacmini/Desktop/test_fig.png")
#sns.plt.show()
```
Which makes this plot:
[](http://i.stack.imgur.com/y6Cpc.jpg)
As you can see I get the legend wrong. How can I make it right? | Somehow there is one legend item for each subplot. It looks like if we want a legend that corresponds to the bars in each subplot, we have to make it manually.
```
# Let's just make a 1-by-2 plot
df = df.head(10)
# Initialize a grid of plots with an Axes for each walk
grid = sns.FacetGrid(df, col="walk", hue="walk", col_wrap=2, size=5,
aspect=1)
# Draw a bar plot to show the trajectory of each random walk
bp = grid.map(sns.barplot, "step", "position", palette="Set3")
# The color cycles are all the same, so it doesn't matter which axes we use
Ax = bp.axes[0]
# Somehow for a plot of 5 bars there are 6 patches; what is the 6th one?
Boxes = [item for item in Ax.get_children()
         if isinstance(item, matplotlib.patches.Rectangle)][:-1]
# There are no labels yet, so we need to define them
legend_labels = ['a', 'b', 'c', 'd', 'e']
# Create the legend patches
legend_patches = [matplotlib.patches.Patch(color=C, label=L) for
C, L in zip([item.get_facecolor() for item in Boxes],
legend_labels)]
# Plot the legend
plt.legend(handles=legend_patches)
```
[](http://i.stack.imgur.com/AX5CF.png) |
Reading large file (52mb) of lines in Python, is it better to iterate the lines or use readlines? | 32,276,616 | 3 | 2015-08-28T17:26:46Z | 32,276,685 | 8 | 2015-08-28T17:31:33Z | [
"python",
"file-io"
] | I have a list of 4 million words in a txt file that I want to add to a list. I have two options:
```
l=[line for line in open(wordlist)]
```
or:
```
wordlist = file.readlines()
```
readlines() appears to be much faster, I'm guessing this is because the data is read into the memory in one go. The first option would be better for conserving memory because it reads one line at a time, is this true? Does readlines() use any type of buffer when copying? In general which is best to use? | Both options read the whole thing into memory in one big list. The first option is slower because you delegate looping to Python bytecode. If you wanted to create one big list with all lines from your file, then there is no reason to use a list comprehension here.
I'd not use *either*. Loop over the file *and process the lines as you loop*:
```
with open(wordlist) as fileobj:
for line in fileobj:
# do something with this line only.
```
There is usually no need to keep the whole unprocessed file data in memory. |
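For instance, a self-contained sketch that counts lines matching a condition without ever materializing the whole file as a list (the file name and contents here are made up):

```python
import os
import tempfile

# Create a small sample word list so the example runs anywhere.
path = os.path.join(tempfile.mkdtemp(), 'wordlist.txt')
with open(path, 'w') as f:
    f.write('alpha\nbeta\ngamma\n')

long_words = 0
with open(path) as fileobj:
    for line in fileobj:              # one line in memory at a time
        if len(line.strip()) >= 5:
            long_words += 1

print(long_words)  # 2 ('alpha' and 'gamma')
```

Only one line is held in memory at a time, so this scales to files far larger than 52 MB.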
Pylab - 'module' object has no attribute 'Figure' | 32,279,887 | 3 | 2015-08-28T21:16:48Z | 33,510,341 | 14 | 2015-11-03T22:32:21Z | [
"python",
"matplotlib",
"tkinter"
] | I'm trying to use Tkinter to create a view, and therefore I'm also using pylab. My problem is that I get an error saying:
> AttributeError: 'module' object has no attribute 'Figure'
and the error comes from this line of code:
```
self.fig = FigureCanvasTkAgg(pylab.figure(), master=self)
```
I'm new to python, so I don't know how to fix this, since the `figure()` should be a part of the pylab library.
Any suggestions on how to fix this would be appreciated.
**EDIT**:
Here's the full code:
```
from Tkinter import *
import ttk
from ttk import Style
import pylab
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.pyplot as plt
from numpy import cumsum
import matplotlib
class GUI(Frame):
def __init__(self, parent, motspiller):
Frame.__init__(self, parent)
self.style = Style()
self.fig = None
def setup(self):
self.style.theme_use("default")
self.pack(fill=BOTH, expand=1)
label = Label(self.parent)
label.place(x=800, y=50)
quit_button = Button(self, text="Quit", command=self.quit)
quit_button.place(x=1000, y=450)
self.fig = FigureCanvasTkAgg(pylab.figure(), master=self)
self.fig.get_tk_widget().grid(column=0, row=0)
self.fig.show()
``` | If you're unable to run any of the other `pylab` functions then there's a problem with your install. I just ran into a similar error when I installed `matplotlib` and then `pylab`, and it turns out that installing `matplotlib` will *also* install `pylab` for you and that separately installing `pylab` on top of it will cause those exact problems. A simple `pip uninstall pylab` did it for me, since it removed the newly installed `pylab` and let me import the one that came bundled with `matplotlib` instead. |
how to change a Dataframe column from String type to Double type in pyspark | 32,284,620 | 9 | 2015-08-29T09:34:08Z | 32,286,450 | 16 | 2015-08-29T13:15:11Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-1.4"
] | I have a dataframe with column as String.
I wanted to change the column type to Double type in pyspark.
Following is the way, I did,-
```
toDoublefunc = UserDefinedFunction(lambda x: x,DoubleType())
changedTypedf = joindf.withColumn("label",toDoublefunc(joindf['show']))
```
> Just wanted to know , is this the right way to do it as while running
> through Logistic Regression , I am getting some error, so I wonder ,
> is this the reason for the trouble. | There is no need for an UDF here. `Column` already provides [`cast` method](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Column.cast) with `DataType` instance:
```
from pyspark.sql.types import DoubleType
changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))
```
or short string:
```
changedTypedf = joindf.withColumn("label", joindf["show"].cast("double"))
``` |
How do I randomly select a variable from a list, and then modify it in python? | 32,288,236 | 4 | 2015-08-29T16:28:29Z | 32,288,253 | 7 | 2015-08-29T16:30:01Z | [
"python",
"list",
"python-3.x",
"random"
] | Here's my python 3 code. I would like to randomly select one of the cell variables (c1 through c9) and change its value to the be the same as the cpuletter variable.
```
import random
#Cell variables
c1 = "1"
c2 = "2"
c3 = "3"
c4 = "4"
c5 = "5"
c6 = "6"
c7 = "7"
c8 = "8"
c9 = "9"
cells = [c1, c2, c3, c4, c5, c6, c7, c8, c9]
cpuletter = "X"
random.choice(cells) = cpuletter
```
I'm getting a "Can't assign to function call" error on the "random.choice(cells)." I assume I'm just using it incorrectly? I know you can use the random choice for changing a variable like below:
```
import random
options = ["option1", "option2"]
choice = random.choice(options)
``` | ## Problem:
`random.choice(cells)` returns a random value from your list, for example `"3"`, and you are trying to assign something to it, like:
```
"3" = "X"
```
which is wrong.
Instead of this, you can modify the `list`, for example:
```
cells[5] = "X"
```
## Solution:
You can use [`random.randrange()`](https://docs.python.org/3/library/random.html#random.randrange).
```
import random
cells = [str(i) for i in range(1,10)] # your list
cpuletter = 'X'
print(cells)
random_index = random.randrange(len(cells)) # returns a random index between 0 and 8 (inclusive)
cells[random_index] = cpuletter
print(cells)
```
**Output:**
```
['1', '2', '3', '4', '5', '6', '7', '8', '9']
['1', '2', '3', '4', '5', '6', '7', 'X', '9']
``` |
chunk string in Python without breaking words | 32,295,475 | 2 | 2015-08-30T10:19:36Z | 32,295,512 | 7 | 2015-08-30T10:24:55Z | [
"python",
"raspberry-pi"
] | I have this code which is meant to display some text on a 20x2 LCD display:
```
#!/usr/bin/python
LCDCHARS = 20
LCDLINES = 2
def WriteLCD(text_per_LCD):
chunked = (text_per_LCD[i:LCDCHARS+i] for i in range (0, len(text_per_LCD), LCDCHARS))
count_l = 0
for text_per_line in chunked:
# print will be replaced by actual LCD call
print (text_per_line)
count_l += 1
if count_l >= LCDLINES:
# agree to lose any extra lines
break
WriteLCD("This text will display on %s LCD lines" % (LCDLINES))
```
The example string will output
```
This text will displ
ay on 2 LCD lines
```
What should I do to split the string without breaking the words? This even if the second line becomes longer and goes out of display.
I read a similar question on [javascript section](http://stackoverflow.com/questions/6632530) and another one in [ruby section](http://stackoverflow.com/questions/24871475), but I was not able to translate the given answers into my Python case. | Use the [`textwrap`](https://docs.python.org/2/library/textwrap.html) module:
```
>>> textwrap.wrap("This text will display on 3 LCD lines", 20)
['This text will', 'display on 3 LCD', 'lines']
``` |
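Applied to the question's 20x2 display, a sketch could look like this (`textwrap.wrap` never splits inside a word by default, and the slice keeps at most `LCDLINES` lines; `write_lcd` is a made-up stand-in for the question's `WriteLCD`):

```python
import textwrap

LCDCHARS = 20
LCDLINES = 2

def write_lcd(text):
    # Wrap on word boundaries, then keep at most LCDLINES lines.
    for line in textwrap.wrap(text, LCDCHARS)[:LCDLINES]:
        print(line)  # replace print with the actual LCD call

write_lcd("This text will display on 2 LCD lines")
# This text will
# display on 2 LCD
```

Any overflow beyond the second line is simply dropped, as the question allows.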
removing duplicates of a list of sets | 32,296,933 | 9 | 2015-08-30T13:15:05Z | 32,296,966 | 13 | 2015-08-30T13:18:56Z | [
"python",
"list",
"set",
"unique",
"duplicate-removal"
] | I have a list of sets :
```
L = [set([1, 4]), set([1, 4]), set([1, 2]), set([1, 2]), set([2, 4]), set([2, 4]), set([5, 6]), set([5, 6]), set([3, 6]), set([3, 6]), set([3, 5]), set([3, 5])]
```
(actually in my case a conversion of a list of reciprocal tuples)
and I want to remove duplicates to get :
```
L = [set([1, 4]), set([1, 2]), set([2, 4]), set([5, 6]), set([3, 6]), set([3, 5])]
```
But if I try :
```
>>> list(set(L))
TypeError: unhashable type: 'set'
```
Or
```
>>> list(np.unique(L))
TypeError: cannot compare sets using cmp()
```
So do you have a solution to "uniquify" my list of sets ? | The best way is to convert your sets to `frozenset`s (which are hashable) and then use `set` to get only the unique sets, like this
```
>>> list(set(frozenset(item) for item in L))
[frozenset({2, 4}),
frozenset({3, 6}),
frozenset({1, 2}),
frozenset({5, 6}),
frozenset({1, 4}),
frozenset({3, 5})]
```
If you want them as sets, then you can convert them back to `set`s like this
```
>>> [set(item) for item in set(frozenset(item) for item in L)]
[{2, 4}, {3, 6}, {1, 2}, {5, 6}, {1, 4}, {3, 5}]
```
---
If you want the order also to be maintained, while removing the duplicates, then you can use [`collections.OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict), like this
```
>>> from collections import OrderedDict
>>> [set(i) for i in OrderedDict.fromkeys(frozenset(item) for item in L)]
[{1, 4}, {1, 2}, {2, 4}, {5, 6}, {3, 6}, {3, 5}]
``` |
Wolfram Alpha and scipy.integrate.quad give me different answers for the same integral | 32,302,231 | 4 | 2015-08-30T23:38:15Z | 32,302,387 | 8 | 2015-08-31T00:04:26Z | [
"python",
"math",
"numpy",
"scipy",
"wolframalpha"
] | Consider the following function:
```
import numpy as np
from scipy.special import erf
def my_func(x):
return np.exp(x ** 2) * (1 + erf(x))
```
When I evaluate the integral of this function from `-14` to `-4` using `scipy`'s `quad` function, I get the following result:
```
In [3]: from scipy import integrate
In [4]: integrate.quad(my_func, -14, -4)
/usr/local/lib/python2.7/dist-packages/scipy/integrate/quadpack.py:289: UserWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
warnings.warn(msg)
Out[4]: (0.21896647054443383, 0.00014334175850538866)
```
That is, about `0.22`.
However, when I submit this integral to [Wolfram Alpha](http://www.wolframalpha.com/input/?i=integrate+from+-14+to+-4+e%5E%28x%5E2%29+*+%281+%2B+erf%28x%29%29), I get a very different result:
```
-5.29326 X 10 ^ 69.
```
What's the deal? I'm guessing this has to do with the warning `scipy` has given me. What's the best way to evaluate this integral in `python`?
**NOTE**: Increasing the `limit` changes the warning but leaves the `scipy` result unchanged:
```
In [5]: integrate.quad(my_func, -14, -4, limit=10000)
/usr/local/lib/python2.7/dist-packages/scipy/integrate/quadpack.py:289: UserWarning: The occurrence of roundoff error is detected, which prevents
the requested tolerance from being achieved. The error may be
underestimated.
warnings.warn(msg)
Out[5]: (0.21894780966717864, 1.989164129832358e-05)
``` | TL;DR: The integrand is equivalent to `erfcx(-x)`, and the implementation of `erfcx` at [`scipy.special.erfcx`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfcx.html) takes care of the numerical issues:
```
In [10]: from scipy.integrate import quad
In [11]: from scipy.special import erfcx
In [12]: quad(lambda x: erfcx(-x), -14, -4)
Out[12]: (0.6990732491815446, 1.4463494884581349e-13)
In [13]: quad(lambda x: erfcx(-x), -150, -50)
Out[13]: (0.6197754761443759, 4.165648376274775e-14)
```
You can avoid the `lambda` expression by changing the sign of the integration argument and limits:
```
In [14]: quad(erfcx, 4, 14)
Out[14]: (0.6990732491815446, 1.4463494884581349e-13)
```
---
The problem is the numerical evaluation of `1 + erf(x)` for negative values of `x`. As `x` decreases, `erf(x)` approaches -1. When you then add 1, you get [catastrophic loss of precision](https://en.wikipedia.org/wiki/Loss_of_significance), and for sufficiently negative `x` (specifically `x` < -5.87), `1 + erf(x)` is numerically 0.
Note that the default behavior at Wolfram Alpha suffers from the same problem. I had to click on "More digits" twice to get a reasonable answer.
The fix is to reformulate your function. You can express `1+erf(x)` as `2*ndtr(x*sqrt(2))`, where `ndtr` is the normal cumulative distribution function, available from [`scipy.special.ndtr`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.ndtr.html) (see, for example, <https://en.wikipedia.org/wiki/Error_function>). Here's an alternative version of your function, and the result of integrating it with `scipy.integrate.quad`:
```
In [133]: def func2(x):
.....: return np.exp(x**2) * 2 * ndtr(x * np.sqrt(2))
.....:
In [134]: my_func(-5)
Out[134]: 0.1107029852258767
In [135]: func2(-5)
Out[135]: 0.11070463773306743
In [136]: integrate.quad(func2, -14, -4)
Out[136]: (0.6990732491815298, 1.4469372263470424e-13)
```
The answer at Wolfram Alpha after clicking on "More digits" twice is `0.6990732491815446...`
This is what the plot of the function looks like when you use a numerically stable version:
[](http://i.stack.imgur.com/xHILS.png)
---
To avoid overflow or underflow for arguments with very large magnitudes, you can do part of the computation in log-space:
```
from scipy.special import log_ndtr
def func3(x):
t = x**2 + np.log(2) + log_ndtr(x * np.sqrt(2))
y = np.exp(t)
return y
```
E.g.
```
In [20]: quad(func3, -150, -50)
Out[20]: (0.6197754761435517, 4.6850379059597266e-14)
```
(Looks like @ali\_m beat me to it in the new question: [Tricking numpy/python into representing very large and very small numbers](http://stackoverflow.com/questions/32303006/tricking-numpy-python-into-representing-very-large-and-very-small-numbers).)
---
Finally, as Simon Byrne pointed out in an answer over at [Tricking numpy/python into representing very large and very small numbers](http://stackoverflow.com/questions/32303006/tricking-numpy-python-into-representing-very-large-and-very-small-numbers#32303168), the function to be integrated can be expressed as `erfcx(-x)`, where `erfcx` is the scaled complementary error function. It is available as [`scipy.special.erfcx`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfcx.html).
For example,
```
In [10]: from scipy.integrate import quad
In [11]: from scipy.special import erfcx
In [12]: quad(lambda x: erfcx(-x), -14, -4)
Out[12]: (0.6990732491815446, 1.4463494884581349e-13)
In [13]: quad(lambda x: erfcx(-x), -150, -50)
Out[13]: (0.6197754761443759, 4.165648376274775e-14)
``` |
Could not find a version that satisfies the requirement <package> | 32,302,379 | 15 | 2015-08-31T00:02:24Z | 32,302,448 | 8 | 2015-08-31T00:15:41Z | [
"python",
"pip",
"requirements.txt"
] | I'm installing several Python packages in Ubuntu 12.04 using the following `requirements.txt` file:
```
numpy>=1.8.2,<2.0.0
matplotlib>=1.3.1,<2.0.0
scipy>=0.14.0,<1.0.0
astroML>=0.2,<1.0
scikit-learn>=0.14.1,<1.0.0
rpy2>=2.4.3,<3.0.0
```
and these two commands:
```
$ pip install --download=/tmp -r requirements.txt
$ pip install --user --no-index --find-links=/tmp -r requirements.txt
```
(the first one downloads the packages and the second one installs them).
The process is frequently stopped with the error:
```
Could not find a version that satisfies the requirement <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2)) (from versions: )
No matching distribution found for <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2))
```
which I fix manually with:
```
pip install --user <package>
```
and then run the second `pip install` command again.
But that only works for *that* particular package. When I run the second `pip install` command again, the process is stopped now complaining about *another* required package and I need to repeat the process again, ie: install the new required package manually (with the command above) and then run the second `pip install` command.
So far I've had to manually install `six`, `pytz`, `nose`, and now it's complaining about needing `mock`.
Is there a way to tell `pip` to automatically install *all* needed dependencies so I don't have to do it manually one by one?
**Add**: This only happens in Ubuntu 12.04 BTW. In Ubuntu 14.04 the `pip install` commands applied on the `requirements.txt` file work without issues. | This approach (having all dependencies in a directory and not downloading from an index) only works when the directory contains all packages. The directory should therefore contain not only all direct dependencies but also all packages that those dependencies depend on (e.g. `six`, `pytz`, etc.).
You should therefore manually include these in `requirements.txt` (so that the first step downloads them explicitly) or you should install all packages using PyPI and then `pip freeze > requirements.txt` to store the list of all packages needed. |
Trouble installing "distribute": NameError: name 'sys_platform' is not defined | 32,303,152 | 5 | 2015-08-31T02:18:27Z | 34,694,350 | 10 | 2016-01-09T13:58:29Z | [
"python"
] | I'm trying to install the Python package "distribute". I've got it downloaded and it begins to work, but then quits out with the error seen here:
[](http://i.stack.imgur.com/PMV3z.png)
I have a feeling the solution is somehow related to me going in and defining sys\_platform, but I don't have enough of a grasp on what's *actually* wrong to know what to fix. Thanks for any help! I'm always blown away at how helpful you all are. | As stated by Burhan you have to install the `setuptools` package: just use the command:
```
pip install setuptools
```
Most importantly, **do not forget to also uninstall the `distribute` package** (since tools provided by that package are already included by `setuptools`).
Just use the command:
```
pip uninstall distribute
``` |
Error while starting new scrapy project | 32,304,886 | 3 | 2015-08-31T06:07:39Z | 32,584,004 | 18 | 2015-09-15T10:43:27Z | [
"python",
"scrapy"
] | I have installed Scrapy using Ubuntu packages provided in the Scrapy website.
But on starting a Scrapy project
```
scrapy startproject test
```
I am getting error message as.
```
Traceback (most recent call last): File "/usr/bin/scrapy", line 5,
in <module>
from pkg_resources import load_entry_point File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 3084,
in <module>
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 3070, in _call_aside
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 3097, in _initialize_master_working_set
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 653, in _build_master
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 666, in _build_from_requirements
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 844, in resolve
pkg_resources.ContextualVersionConflict: (pyasn1 0.1.7 (/usr/lib/python2.7/dist-packages),
Requirement.parse('pyasn1>=0.1.8'), set(['pyasn1-modules']))
```
Please help me solve this error. I am running Python 2.7.6 | I tried `sudo pip install pyasn1 --upgrade` and it works. |
No output from pycharm profiling when running a Flask container | 32,319,971 | 4 | 2015-08-31T21:05:09Z | 34,663,714 | 7 | 2016-01-07T19:37:40Z | [
"python",
"pycharm"
] | Running PyCharm 4.5.3 Build #141.1899 Professional Edition and licensed. I run a Flask app using the profiler command using a configuration similar to:
```
python app.py
```
which looks like:
```
def create_app():
app = Flask(__name__, static_folder='static')
app.register_blueprint(
consumer_v1.bp,
url_prefix='/consumer/v1')
return app
app = create_app()
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=5001)
```
When I goto "Run -> Profile App" I see the following on the console:
```
/Users/james/.virtualenvs/trustmile-api/bin/python /Applications/PyCharm.app/Contents/helpers/profiler/run_profiler.py 127.0.0.1 49537 /Users/james/Documents/workspace/trustmile-backend/trustmile/api_stubs/api_stubs/__init__.py
Starting cProfile profiler
2015-09-01 07:00:25,717 INFO [werkzeug][MainThread] * Running on http://0.0.0.0:5001/ (Press CTRL+C to quit)
2015-09-01 07:00:25,723 INFO [werkzeug][MainThread] * Restarting with stat
2015-09-01 07:00:32,801 INFO [werkzeug][Thread-1] 127.0.0.1 - - [01/Sep/2015 07:00:32] "POST /consumer/v1/account/password HTTP/1.1" 403 -
Snapshot saved to /Users/XXXX/Library/Caches/PyCharm40/snapshots/trustmile-backend5.pstat
Process finished with exit code 0
```
However nothing is shown in the UI as shown below.
[](http://i.stack.imgur.com/rMoWe.png)
Any help greatly appreciated. | To fix this I simply installed [yappi](https://code.google.com/p/yappi/) instead of the default python profiler.
`pip install yappi`
I hope this works for you too! |
How do I get IPython profile behavior from Jupyter 4.x? | 32,320,836 | 8 | 2015-08-31T22:24:12Z | 32,516,383 | 7 | 2015-09-11T05:52:42Z | [
"python",
"ipython",
"ipython-notebook",
"jupyter"
] | There was official(?) recommendation of running an IPython Notebook server, and creating a profile via
```
$ ipython profile create nbserver
```
as recommended in <http://ipython.org/ipython-doc/1/interactive/public_server.html>. This allowed for very different and very useful behavior when starting an IPython Notebook via `ipython notebook` and `ipython notebook --profile=nbserver`.
With Jupyter 4.0, there's a change and there are no longer profiles. I've found the conversation <https://gitter.im/ipython/ipython/archives/2015/05/29> which has user minrk saying:
> The .ipython directory has several things in it:
>
> multiple config directories (called profiles)
>
> one 'data' directory, containing things like kernelspecs, nbextensions
>
> runtime info scattered throughout, but mostly in profiles
>
> Jupyter follows more platform-appropriate conventions:
>
> one config dir at JUPYTER\_CONFIG\_DIR, default: .jupyter
>
> one data dir at JUPYTER\_DATA\_DIR, default: platform-specific
>
> one runtime dir at JUPYTER\_RUNTIME\_DIR, default: platform-specific
And a rather cryptic remark:
> If you want to use different config, specify a different config directory with JUPYTER\_CONFIG\_DIR=whatever
What's the best way to get different behavior (say, between when running as a server vs normal usage)?
Will it involve running something like:
```
$ export JUPYTER_CONFIG_DIR=~/.jupyter-nbserver
$ jupyter notebook
```
whenever a server 'profile' needs to be run? and
```
$ export JUPYTER_CONFIG_DIR=~/.jupyter
$ jupyter notebook
```
whenever a 'normal' profile needs to run? Because that seems terrible. What's the best way to do this in Jupyter 4.0? | Using some code from this blog post <http://www.svds.com/jupyter-notebook-best-practices-for-data-science/> and updating it. The easiest solution appears to be to create an alias, like:
```
alias jupyter-nbserver='JUPYTER_CONFIG_DIR=~/.jupyter-nbserver jupyter notebook'
```
So now you can run the jupyter notebook with a different config via the simple command `jupyter-nbserver`.
A more robust solution might involve creating a bash function that changes the environment variable, checks whether there's a config file, if not creating one, then executing, but that's probably overkill. The answer that I give on this related question <http://stackoverflow.com/a/32516200/246856> goes into creating the initial config files for a new 'profile'. |
In Python, how can I translate *(1+(int*)&x)? | 32,320,974 | 4 | 2015-08-31T22:38:30Z | 32,321,142 | 7 | 2015-08-31T22:58:31Z | [
"python",
"c",
"floating-point",
"bit-representation"
] | This question is a follow-up of [this one](http://stackoverflow.com/questions/32300386/in-suns-libm-what-does-1intx-do-where-x-is-of-type-double). In [Sun's math library](http://www.netlib.org/fdlibm/readme) (in C), the expression
```
*(1+(int*)&x)
```
is used to retrieve the high word of the floating point number `x`. Here, the OS is assumed 64-bit, with little-endian representation.
How can I translate the C expression above into Python? The difficulty here is how to translate the '&' and '\*' in the expression. Btw, maybe Python has some built-in function that retrieves the high word of a floating point number? | You can do this more easily with [`struct`](https://docs.python.org/3/library/struct.html):
```
high_word = struct.pack('<d', x)[4:8]
return struct.unpack('<i', high_word)[0]
```
Here, `high_word` is a `bytes` object (or a `str` in 2.x) consisting of the four most significant bytes of `x` in little endian order (using IEEE 64-bit floating point format). We then unpack it back into a 32-bit integer (which is returned in a singleton tuple, hence the `[0]`).
This always uses little-endian for everything, regardless of your platform's underlying endianness. If you need to use native endianness, replace the `<` with `=` (and use `>` or `!` to force big endian). It also guarantees 64-bit doubles and 32-bit ints, which C does not. You *can* remove that guarantee as well, but there is no good reason to do so since it makes your question nonsensical.
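As a quick sanity check of the `struct` approach (the `high_word` helper name here is just illustrative): for `x = 1.0`, whose bit pattern is `0x3FF0000000000000`, the high word should come out as `0x3FF00000`:

```python
import struct

def high_word(x):
    # Pack the double little-endian, then reinterpret the top four bytes
    # as a signed 32-bit integer -- the analogue of *(1+(int*)&x) on a
    # little-endian machine with 64-bit doubles.
    packed = struct.pack('<d', x)
    return struct.unpack('<i', packed[4:8])[0]

print(high_word(1.0))  # 1072693248 == 0x3FF00000
```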
While this could be done with pointer arithmetic, it would involve messing around with `ctypes` and the conversion from Python float to C float would still be relatively expensive. The `struct` code is much easier to read. |
What is the difference between the AWS boto and boto3 | 32,322,503 | 27 | 2015-09-01T02:09:11Z | 32,323,454 | 44 | 2015-09-01T04:17:41Z | [
"python",
"amazon-web-services",
"boto",
"boto3"
] | I'm new to AWS using Python and I'm trying to learn the boto API; however, I notice there are two major versions/packages for Python: boto and boto3.
I haven't been able to find an article with the major advantages/disadvantages or differences between these packages. | The [boto](https://github.com/boto/boto) package is the hand-coded Python library that has been around since 2006. It is very popular and is fully supported by AWS but because it is hand-coded and there are so many services available (with more appearing all the time) it is difficult to maintain.
So, [boto3](https://github.com/boto/boto3) is a new version of the boto library based on [botocore](https://github.com/boto/botocore). All of the low-level interfaces to AWS are driven from JSON service descriptions that are generated automatically from the canonical descriptions of the services. So, the interfaces are always correct and always up to date. There is a resource layer on top of the client-layer that provides a nicer, more Pythonic interface.
The boto3 library is being actively developed by AWS and is the one I would recommend people use if they are starting new development. |
makedirs gives OSError: [Errno 13] Permission denied: '/pdf_files' | 32,329,976 | 2 | 2015-09-01T11:03:35Z | 32,330,046 | 7 | 2015-09-01T11:08:09Z | [
"python",
"directory",
"operating-system",
"folder",
"createfile"
] | I'm trying to create a folder inside a folder. First I check if that directory exists and create it if necessary:
```
name = "User1"
if not os.path.exists("/pdf_files/%s" % name):
os.makedirs('/pdf_files/%s' % name )
```
The problem is that I'm getting an error: `OSError: [Errno 13] Permission denied: '/pdf_files'`
This folder, named `pdf_files`, that I created has all permissions: `drwxrwxrwx` or `'777'`
I searched about this and I saw some solutions but none of them solved my problem.
Can somebody help me? | You are trying to create your folder inside the root directory (`/`).
Change `/pdf_files/%s` to `pdf_files/%s` or `/home/username/pdf_files/%s` |
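A minimal sketch of the corrected call (using a temporary directory as the writable base purely for illustration; in the question's case the base would be something like `os.path.expanduser('~')` rather than `/`):

```python
import os
import tempfile

name = "User1"
# Build the folder under a writable base directory instead of the root "/"
base = tempfile.mkdtemp()
path = os.path.join(base, "pdf_files", name)
if not os.path.exists(path):
    os.makedirs(path)
print(os.path.isdir(path))  # True
```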
Create a custom Transformer in PySpark ML | 32,331,848 | 4 | 2015-09-01T12:36:56Z | 32,337,101 | 13 | 2015-09-01T16:56:02Z | [
"python",
"apache-spark",
"nltk",
"pyspark",
"apache-spark-ml"
] | I am new to Spark SQL DataFrames and ML on them (PySpark).
How can I create a custom tokenizer, which, for example, removes stop words and uses some libraries from [nltk](/questions/tagged/nltk "show questions tagged 'nltk'")? Can I extend the default one?
Thanks. | > Can I extend the default one?
Not really. Default `Tokenizer` is a subclass of `pyspark.ml.wrapper.JavaTransformer` and, same as other transformers and estimators from `pyspark.ml.feature`, delegates actual processing to its Scala counterpart. Since you want to use Python you should extend `pyspark.ml.pipeline.Transformer` directly.
```
import nltk
from pyspark import keyword_only ## < 2.0 -> pyspark.ml.util.keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol, Param
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType
class NLTKWordPunctTokenizer(Transformer, HasInputCol, HasOutputCol):
@keyword_only
def __init__(self, inputCol=None, outputCol=None, stopwords=None):
super(NLTKWordPunctTokenizer, self).__init__()
self.stopwords = Param(self, "stopwords", "")
self._setDefault(stopwords=set())
kwargs = self.__init__._input_kwargs
self.setParams(**kwargs)
@keyword_only
def setParams(self, inputCol=None, outputCol=None, stopwords=None):
kwargs = self.setParams._input_kwargs
return self._set(**kwargs)
def setStopwords(self, value):
self._paramMap[self.stopwords] = value
return self
def getStopwords(self):
return self.getOrDefault(self.stopwords)
def _transform(self, dataset):
stopwords = self.getStopwords()
def f(s):
tokens = nltk.tokenize.wordpunct_tokenize(s)
return [t for t in tokens if t.lower() not in stopwords]
t = ArrayType(StringType())
out_col = self.getOutputCol()
in_col = dataset[self.getInputCol()]
return dataset.withColumn(out_col, udf(f, t)(in_col))
```
Example usage (data from [ML - Features](https://spark.apache.org/docs/latest/ml-features.html#tokenizer)):
```
sentenceDataFrame = spark.createDataFrame([
(0, "Hi I heard about Spark"),
(0, "I wish Java could use case classes"),
(1, "Logistic regression models are neat")
], ["label", "sentence"])
tokenizer = NLTKWordPunctTokenizer(
inputCol="sentence", outputCol="words",
stopwords=set(nltk.corpus.stopwords.words('english')))
tokenizer.transform(sentenceDataFrame).show()
```
For custom Python `Estimator` see [How to Roll a Custom Estimator in PySpark mllib](http://stackoverflow.com/q/37270446/1560062) |
Numpy item faster than operator[] | 32,333,765 | 13 | 2015-09-01T14:06:08Z | 32,334,114 | 18 | 2015-09-01T14:22:20Z | [
"python",
"performance",
"numpy"
] | I have the following code in Python that, at least for me, produces strange results:
```
import numpy as np
import timeit
a = np.random.rand(3,2)
print timeit.timeit('a[2,1] + 1', 'from __main__ import a', number=1000000)
print timeit.timeit('a.item((2,1)) + 1', 'from __main__ import a', number=1000000)
```
This gives the result:
```
0.533630132675
0.103801012039
```
It seems ok if I only try to access numpy element but when increasing this element the timings get strange... Why is there such a difference in timings? | In this case, they don't return quite the same thing. `a[2,1]` returns a `numpy.float64`, while `a.item((2,1))` returns a native python float.
## Native vs `numpy` *scalars* (`float`, `int`, etc)
A `numpy.float64` scalar isn't quite identical to a native python `float` (they behave identically, however). Simple operations on a single element will be faster with a native python float, as there's less indirection. Have a look at the [docstring for `ndarray.item`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.item.html) for a bit more detail.
As an example of the difference in speed, consider the following:
```
In [1]: x = 1.2
In [2]: y = np.float64(1.2)
In [3]: %timeit x + 1
10000000 loops, best of 3: 58.9 ns per loop
In [4]: %timeit y + 1
1000000 loops, best of 3: 241 ns per loop
```
---
Initially, I incorrectly stated that a second factor was that `a.item(...)` was slightly faster than `a[...]`. That actually isn't true. The time it takes for `a.item` to convert the numpy scalar into a native python scalar overwhelms the time it takes for the additional logic in `a[...]`/`a.__getitem__(...)`.
---
## Don't generalize this result to more than one item
However, you should be careful about trying to generalize what happens with numpy scalars to how numpy arrays operate as a whole. If you're doing a lot of single-item indexing in numpy, it's generally an anti-pattern.
For example, compare:
```
In [5]: a = np.random.rand(1000)
In [6]: %timeit a + 1
100000 loops, best of 3: 2.32 us per loop
```
No matter what we do, we won't be able to match the speed (or much lower memory usage) of the vectorized version (`a + 1`) above:
```
In [7]: %timeit [x + 1 for x in a]
1000 loops, best of 3: 257 us per loop
In [8]: %timeit [a.item(i) + 1 for i in range(len(a))]
1000 loops, best of 3: 208 us per loop
```
Some of this is because iterating through `ndarray`s is slower than iterating through a list. For a completely fair comparison, let's convert everything over to a list of native python floats:
```
In [9]: b = a.tolist()
In [10]: type(b[0])
Out[10]: float
In [11]: %timeit [x + 1 for x in b]
10000 loops, best of 3: 69.4 us per loop
```
Clearly, using vectorized operations (the first case) is much faster when you're operating on larger arrays. It's also far more memory efficient, as `list`s require storing pointers to each item, while `ndarray`s are contiguous in memory. |
SimpleBlobDetector not found in opencv 3.0 for python | 32,334,203 | 2 | 2015-09-01T14:26:03Z | 32,395,088 | 9 | 2015-09-04T09:53:32Z | [
"python",
"opencv"
] | I am trying to use SimpleBlobDetector in python with cv2 version 3.0.
However when I run:
```
import cv2
detector = cv2.SimpleBlobDetector()
```
The console returns me:
```
AttributeError: 'module' object has no attribute 'SimpleBlobDetector'
```
Does anyone know if the function name has changed from cv2 version 2.4 to version 3.0? | The new function is `cv2.SimpleBlobDetector_create(params)`, if I'm not wrong. |
Why does this recursive function continue even after its base case has been satisfied | 32,339,236 | 2 | 2015-09-01T19:13:36Z | 32,339,304 | 7 | 2015-09-01T19:18:17Z | [
"python",
"recursion"
] | I was playing around with [/r/dailyprogrammer's](https://www.reddit.com/r/dailyprogrammer/comments/3i99w8/20150824_challenge_229_easy_the_dottie_number/) easy challenge earlier; in this case you are challenged to discover The Dottie Number (~0.739085). Whilst the challenge wanted it in `radians` I decided to keep it in degrees for the time being. Below is some quick code:
```
from math import cos
def func(n):
prev = n
cur = cos(n)
if cur == prev:
print 'Dottie number: ' + str(cur)
else:
func(cur)
print 'Previous = ' + str(prev) + '\tCurrent = ' + str(cur)
func(1)
```
However, I noticed the following sample in the output:
```
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133215 Current = 0.739085133215
Previous = 0.739085133216 Current = 0.739085133215
Previous = 0.739085133214 Current = 0.739085133216
Previous = 0.739085133216 Current = 0.739085133214
Previous = 0.739085133213 Current = 0.739085133216
Previous = 0.739085133218 Current = 0.739085133213
Previous = 0.739085133211 Current = 0.739085133218
Previous = 0.739085133221 Current = 0.739085133211
Previous = 0.739085133206 Current = 0.739085133221
Previous = 0.739085133229 Current = 0.739085133206
Previous = 0.739085133195 Current = 0.739085133229
Previous = 0.739085133245 Current = 0.739085133195
Previous = 0.739085133171 Current = 0.739085133245
Previous = 0.739085133281 Current = 0.739085133171
Previous = 0.739085133117 Current = 0.739085133281
Previous = 0.739085133361 Current = 0.739085133117
Previous = 0.739085132999 Current = 0.739085133361
Previous = 0.739085133536 Current = 0.739085132999
Previous = 0.739085132739 Current = 0.739085133536
Previous = 0.739085133922 Current = 0.739085132739
Previous = 0.739085132166 Current = 0.739085133922
Previous = 0.739085134772 Current = 0.739085132166
Previous = 0.739085130904 Current = 0.739085134772
Previous = 0.739085136647 Current = 0.739085130904
Previous = 0.739085128121 Current = 0.739085136647
Previous = 0.739085140777 Current = 0.739085128121
Previous = 0.739085121989 Current = 0.739085140777
Previous = 0.739085149881 Current = 0.739085121989
Previous = 0.739085108474 Current = 0.739085149881
Previous = 0.739085169945 Current = 0.739085108474
Previous = 0.739085078689 Current = 0.739085169945
Previous = 0.739085214161 Current = 0.739085078689
Previous = 0.739085013048 Current = 0.739085214161
Previous = 0.739085311607 Current = 0.739085013048
Previous = 0.739084868387 Current = 0.739085311607
Previous = 0.739085526362 Current = 0.739084868387
Previous = 0.739084549575 Current = 0.739085526362
Previous = 0.739085999648 Current = 0.739084549575
Previous = 0.739083846965 Current = 0.739085999648
Previous = 0.739087042695 Current = 0.739083846965
Previous = 0.739082298522 Current = 0.739087042695
Previous = 0.739089341403 Current = 0.739082298522
Previous = 0.739078885995 Current = 0.739089341403
Previous = 0.739094407379 Current = 0.739078885995
Previous = 0.739071365299 Current = 0.739094407379
Previous = 0.739105571927 Current = 0.739071365299
Previous = 0.739054790747 Current = 0.739105571927
Previous = 0.73913017653 Current = 0.739054790747
Previous = 0.739018262427 Current = 0.73913017653
Previous = 0.739184399771 Current = 0.739018262427
Previous = 0.738937756715 Current = 0.739184399771
Previous = 0.739303892397 Current = 0.738937756715
Previous = 0.738760319874 Current = 0.739303892397
Previous = 0.739567202212 Current = 0.738760319874
Previous = 0.738369204122 Current = 0.739567202212
Previous = 0.740147335568 Current = 0.738369204122
Previous = 0.737506890513 Current = 0.740147335568
Previous = 0.74142508661 Current = 0.737506890513
Previous = 0.735604740436 Current = 0.74142508661
Previous = 0.744237354901 Current = 0.735604740436
Previous = 0.731404042423 Current = 0.744237354901
Previous = 0.750417761764 Current = 0.731404042423
Previous = 0.722102425027 Current = 0.750417761764
Previous = 0.763959682901 Current = 0.722102425027
Previous = 0.701368773623 Current = 0.763959682901
Previous = 0.793480358743 Current = 0.701368773623
Previous = 0.654289790498 Current = 0.793480358743
Previous = 0.857553215846 Current = 0.654289790498
Previous = 0.540302305868 Current = 0.857553215846
Previous = 1 Current = 0.540302305868
```
The output is fine, I managed to find the dottie number as requested, but I can't understand why the recursive function continued executing even after the current value was equal to the previous one (since that was the base case that I defined in the function). Does this have to do with floating point precision? Is the value being truncated at some point or am I just not printing it correctly? | The numbers shown to you are not the actual values, because calling `str` on a number doesn't show you all the digits. If you use `repr` instead, you'll get this:
```
Dottie number: 0.7390851332151607
Previous = 0.7390851332151607 Current = 0.7390851332151607
Previous = 0.7390851332151606 Current = 0.7390851332151607
Previous = 0.7390851332151608 Current = 0.7390851332151606
Previous = 0.7390851332151603 Current = 0.7390851332151608
Previous = 0.7390851332151611 Current = 0.7390851332151603
# ... etc.
```
Where you can see that the last few iterations aren't the same. |
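You can check this directly with two values from the tail of the output above: the floats genuinely differ, even though Python 2's `str` (which shows only about 12 significant digits) displayed them identically:

```python
x = 0.7390851332151606
y = 0.7390851332151607
print(x == y)   # False: the two floats differ in the last bit
print(repr(x))  # full-precision display
print(repr(y))
```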
Python Pandas: How do I round datetime column to nearest quarter hour | 32,344,533 | 5 | 2015-09-02T04:13:05Z | 32,344,636 | 12 | 2015-09-02T04:25:40Z | [
"python",
"datetime",
"pandas",
"python-datetime"
] | I have loaded a data file into a Python pandas dataframe. It has a datetime column of the format `2015-07-18 13:53:33.280`.
What I need to do is create a new column that rounds this out to its nearest quarter hour. So, the date above will be rounded to `2015-07-18 13:45:00.000`.
How do I do this in pandas? I tried using the solution from [here](http://stackoverflow.com/questions/24831018/python-how-can-i-round-a-datetime-object-to-the-most-recent-previous-quarter-h), but get a `'Series' object has no attribute 'year'` error. | Assuming that your series is made up of `datetime` objects, you need to use `Series.apply`. Example:
```
import datetime
df['<column>'] = df['<column>'].apply(lambda dt: datetime.datetime(dt.year, dt.month, dt.day, dt.hour,15*(dt.minute // 15)))
``` |
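If it helps to see the rounding logic in isolation, here is the same expression wrapped as a standalone function on a plain `datetime` (the name `floor_to_quarter` is just an illustrative choice), applied to the timestamp from the question:

```python
import datetime

def floor_to_quarter(dt):
    # Floor the minutes to a multiple of 15 and drop seconds/microseconds
    return datetime.datetime(dt.year, dt.month, dt.day, dt.hour,
                             15 * (dt.minute // 15))

dt = datetime.datetime(2015, 7, 18, 13, 53, 33, 280000)
print(floor_to_quarter(dt))  # 2015-07-18 13:45:00
```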
How to use the same line of code in all functions? | 32,347,159 | 19 | 2015-09-02T07:36:08Z | 32,347,255 | 45 | 2015-09-02T07:41:05Z | [
"python"
] | I am a newbie in Python.
I wonder if it is possible that all functions inherit the same line of code?
`with open(filename, 'r') as f:` as this line of code is the same in all three functions. Is it possible to inherit the code without using classes?
I tried to find the answer on stackoverflow and python documentation, but with no luck.
```
def word_count(filename):
with open(filename, 'r') as f:
return len(f.read().split())
def line_count(filename):
with open(filename, 'r') as f:
return len(f.read().splitlines())
def character_count(filename):
with open(filename, 'r') as f:
return len(f.read())
``` | The common code in your case is
```
with open(filename, 'r') as f:
contents = f.read()
```
So just move it to its own function:
```
def get_file_contents(filename):
with open(filename, 'r') as f:
return f.read()
def word_count(filename):
return len(get_file_contents(filename).split())
def line_count(filename):
return len(get_file_contents(filename).splitlines())
def character_count(filename):
return len(get_file_contents(filename))
``` |
How to use the same line of code in all functions? | 32,347,159 | 19 | 2015-09-02T07:36:08Z | 32,347,261 | 15 | 2015-09-02T07:41:23Z | [
"python"
] | I am a newbie in Python.
I wonder if it is possible that all functions inherit the same line of code?
`with open(filename, 'r') as f:` as this line of code is the same in all three functions. Is it possible to inherit the code without using classes?
I tried to find the answer on stackoverflow and python documentation, but with no luck.
```
def word_count(filename):
with open(filename, 'r') as f:
return len(f.read().split())
def line_count(filename):
with open(filename, 'r') as f:
return len(f.read().splitlines())
def character_count(filename):
with open(filename, 'r') as f:
return len(f.read())
``` | What I've done in the past is split the code out into another function. In your example,
```
with open(filename, 'r') as f:
f.read()
```
Is common within all of your methods, so I'd look at rewriting it like so.
```
def read_file(filename):
with open(filename, 'r') as f:
return f.read()
def word_count(filename):
return len(read_file(filename).split())
def line_count(filename):
return len(read_file(filename).splitlines())
def character_count(filename):
return len(read_file(filename))
``` |
Efficiency vs legibility of code? | 32,347,732 | 4 | 2015-09-02T08:04:02Z | 32,347,789 | 9 | 2015-09-02T08:06:35Z | [
"python"
] | Let's say I have a simple function which calculates the cube root of a number and returns it as a string:
```
def cuberoot4u(number):
return str(pow(number, 1/3))
```
I could rewrite this as:
```
def cuberoot4u(number):
cube_root = pow(number, 1/3)
string_cube_root = str(cube_root)
return string_cube_root
```
The latter version has the extra steps of declaring extra variables which show the value after each respective operation (raising to the power 1/3, then converting to a string) - the code looks a little easier to follow and understand.
Now for such a menial task of finding the cuberoot, both functions would appear pretty self-explanatory to the layman. However, if the function did something far more complicated which involved tens or hundreds of algebraic manipulations or some other kinds of operations, at what point should one simply write all of these in the `return` section of the function or instead detail all, if not most, steps in the main body like the 2nd example above?
To my understanding, the first version of the function seems less legible but more efficient. How and when do I balance legibility against efficiency in code such as in this example? | In general you should prioritise legibility over efficiency in your code; however, if you have proved that your code's performance is causing an issue then ([and only then](http://c2.com/cgi/wiki?PrematureOptimization)) should you start to optimise.
If you do need to make your code less legible in order to speed it up you can always use a comment to explain what it is doing (perhaps even including the more readable version of the code in the comment to allow people to follow what it is doing).
Beware however, one of the problems with explaining your code via a comment rather than by just writing legible code is that comments can become out of date. If you change the code but don't update the comment, then your comment goes from being a helpful commentary to being a weasel-faced liar who ruins everyone's day - try to avoid that if possible. |
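For what it's worth, the premise that the terse version is more efficient is itself something worth measuring before acting on; a rough, illustrative comparison (using a float exponent so the cube root also behaves under Python 2's integer division) might look like:

```python
import timeit

def cuberoot_terse(number):
    return str(pow(number, 1.0 / 3))

def cuberoot_verbose(number):
    cube_root = pow(number, 1.0 / 3)
    string_cube_root = str(cube_root)
    return string_cube_root

# Both return the same string; binding intermediates to names is cheap
print(cuberoot_terse(27) == cuberoot_verbose(27))  # True
print(timeit.timeit(lambda: cuberoot_terse(27), number=100000))
print(timeit.timeit(lambda: cuberoot_verbose(27), number=100000))
```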
Does this line of Python close the file when it's finished? | 32,356,115 | 2 | 2015-09-02T14:36:35Z | 32,356,184 | 8 | 2015-09-02T14:39:39Z | [
"python"
] | I have a line of Python that splits a file by the carriage return character:
```
lines = open(sFile, 'r').read().split("0d".decode('hex'))
```
Was this file is closed? If not, can I acquire the file handle somehow? | The short answer is "probably". The open file object *should* get garbage collected which will close the file. However, there are some circumstances where that might not be true and the open file handle can live on.
Best practice is to **always** close your file handles. The beauty of context managers is hard to over-estimate here:
```
with open(sFile) as input_file:
lines = input_file.read().split('0d'.decode('hex'))
```
---
It has been asked in the comments if we can *demonstrate* that a file object is implicitly closed. I don't know the answer to that question, however, we *can demonstrate* that the file object can be reaped by the garbage collector before the current statement is even finished executing. Since it's fairly common knowledge that file objects are closed when reaped we can make some assumptions about what *can* happen in OP's case.
```
from __future__ import print_function
import weakref
def callback(*args):
print('cleanup')
fobj = weakref.ref(open('test.dat'), callback)()
print(fobj)
```
If you'll notice, `cleanup` gets printed before `None` (and that the `weakref.ref` returns `None`) which means our file object has been reaped by the garbage collector before the reference is called1.
1Note, this is CPython behavior. Run it on pypy-2.6.1 and the file object is still alive at this point and `cleanup` never gets printed! This lends support to the claim that you should close your own file handles as there is definitely implementation dependence in when the GC runs (and therefore, when the file handle will be closed and reaped). |
Using Spyder IDE, how do you return from "goto definition"? | 32,358,012 | 15 | 2015-09-02T16:08:39Z | 32,404,840 | 9 | 2015-09-04T18:55:08Z | [
"python",
"ide",
"keyboard-shortcuts",
"spyder"
] | ## Description of the problem:
I like to jump around code a lot with the keyboard but I am hitting a wall of usability in Spyder IDE. I can use the "goto definition" feature to jump to the definition of some function but then I can't go back to where my cursor was (so it takes a while to manually find where I was before because there might be many lines of code).
So for example there is a constant `X=5` in the same file and when I use "goto definition" I can see what that constant is but then there is no way to go back. Or another example is a function from another file where "goto definition" takes me to that other file... but now I can't find the other file I was on (because there may be many files open).
In the 30+ year old `vi` you can goto the definition and return with `ctrl-]` and `ctrl-t`. In the 14+ year old Eclipse the equivalent to "goto definition" would be approximately `F3` to go to the definition. And then to return would be `alt`-`left`.
I am running Spyder version 2.2.4.
## Question:
**Using Spyder IDE, can you return from "goto definition"? If you can, how do you return from "goto definition"?**
## What I've tried:
I have a keyboard shortcut for "previous cursor position" set to `Alt Left` but "previous cursor position" doesn't do anything when I hit the key. (The default keyboard shortcut is `ctrl-alt-left` which conflicts with the Cinnamon-desktop-manager keyboard shortcut for switching workspaces and so I had to remap the above-mentioned keyboard shortcut.) | Spyder has one strange [bug](https://groups.google.com/forum/#!topic/spyderlib/a-lVayczSKY): the "Previous cursor position" shortcut only works if the "Source toolbar" is present.
Turn on "View -> Toolbars -> Source toolbar", then try the shortcut again. |
What difference between subprocess.call() and subprocess.Popen() makes PIPE less secure for the former? | 32,364,849 | 9 | 2015-09-02T23:40:32Z | 32,396,937 | 8 | 2015-09-04T11:26:33Z | [
"python",
"python-2.7",
"subprocess",
"popen",
"python-2.6"
] | I've had a look at the documentation for both of them.
This question is prompted by J.F.'s comment here: [Retrieving the output of subprocess.call()](http://stackoverflow.com/questions/1996518/retrieving-the-output-of-subprocess-call#comment31660313_1996540)
The current Python documentation for [`subprocess.call()`](https://docs.python.org/library/subprocess.html#module-subprocess) says the following about using `PIPE` for `subprocess.call()`:
> Note Do not use `stdout=PIPE` or `stderr=PIPE` with this function. The child process will block if it generates enough output to a pipe to fill up the OS pipe buffer as the pipes are not being read from.
Python 2.7 [`subprocess.call()`](https://docs.python.org/2.7/library/subprocess.html#subprocess.call):
> Note Do not use `stdout=PIPE` or `stderr=PIPE` with this function as that can deadlock based on the child process output volume. Use Popen with the communicate() method when you need pipes.
Python 2.6 includes no such warnings.
Also, the [`subprocess.call()` and `subprocess.check_call()`](https://docs.python.org/2.6/library/subprocess.html#convenience-functions) don't seem to have a way to access their output, except for using stdout=PIPE with communicate():
<https://docs.python.org/2.6/library/subprocess.html#convenience-functions>
> Note that if you want to send data to the processâs `stdin`, you need to create the `Popen` object with `stdin=PIPE`. Similarly, to get anything other than None in the result tuple, you need to give `stdout=PIPE` and/or `stderr=PIPE` too.
<https://docs.python.org/2.6/library/subprocess.html#subprocess.Popen.communicate>
What difference between `subprocess.call()` and `subprocess.Popen()` makes `PIPE` less secure for `subprocess.call()`?
**More Specific:** Why does `subprocess.call()` *"deadlock based on the child process output volume."*, and not `Popen()`? | [`call()` is just `Popen().wait()` (± error handling)](https://github.com/python/cpython/blob/b01cfb8f69e0990be0615b8613a7412f88bd217a/Lib/subprocess.py#L552-L566).
**You should not use `stdout=PIPE` with `call()`** because it does not read from the pipe and therefore the child process will hang as soon as it fills the corresponding OS pipe buffer. Here's a picture that shows how data flows in `command1 | command2` shell pipeline:
[](http://www.pixelbeat.org/programming/stdio_buffering/)
It does not matter what your Python version is -- the pipe buffer (look at the picture) is outside of your Python process. Python 3 does not use C stdio but it affects only the internal buffering. When the internal buffer is flushed the data goes into the pipe. If `command2` (your parent Python program) does not read from the pipe then `command1` (the child process e.g., started by `call()`) will hang as soon as the pipe buffer is full ([`pipe_size = fcntl(p.stdout, F_GETPIPE_SZ)`](http://man7.org/linux/man-pages/man2/fcntl.2.html) ~65K on my Linux box (max value is `/proc/sys/fs/pipe-max-size` ~1M)).
**You may use `stdout=PIPE` if you read from the pipe** later e.g., using `Popen.communicate()` method. You could also [read from `process.stdout` (the file object that represents the pipe) directly](http://stackoverflow.com/a/17698359/4279). |
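A minimal sketch of that safe pattern (spawning a small Python child here purely for illustration):

```python
import subprocess
import sys

# communicate() keeps draining the pipe while the child runs, so the
# child can never block on a full OS pipe buffer
p = subprocess.Popen([sys.executable, '-c', 'print("hello")'],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out)
```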
How to handle the "else" clause when converting an "if..elif..else" statement into a dictionary lookup? | 32,367,882 | 3 | 2015-09-03T05:51:28Z | 32,367,906 | 8 | 2015-09-03T05:53:51Z | [
"python",
"if-statement",
"technical-debt"
] | I am trying to convert an "if else" statement in python into a dictionary.
I tried to convert it into a dictionary, but how do I handle the last else clause?
```
val=3
if val==1:
print "a"
elif val==2:
print "b"
elif val==3:
print "c"
elif val==4:
print "d"
else:
print "value not found:"
print "===========Converted if else into dictionary ==================="
DATA_SOURCE = {1:"a",2:"b",3:"c",4:"d"}
print DATA_SOURCE[val]
```
I have created this code as a replacement:
```
if not DATA_SOURCE.has_key(val):
print "value not found:"
else:
print DATA_SOURCE[val]
```
Is it equivalent? | You can use the `dict.get` method:
```
print DATA_SOURCE.get(val, "value not found")
```
That will return `"value not found"` if `val` is not a key, without affecting the dictionary.
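For example, with the dictionary from the question:

```python
DATA_SOURCE = {1: "a", 2: "b", 3: "c", 4: "d"}
print(DATA_SOURCE.get(3, "value not found"))  # c
print(DATA_SOURCE.get(9, "value not found"))  # value not found
```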
As always, if in doubt, use the help:
```
>>> help(dict)
``` |
How to Include image or picture in jupyter notebook | 32,370,281 | 19 | 2015-09-03T08:09:58Z | 32,370,538 | 17 | 2015-09-03T08:23:25Z | [
"python",
"ipython-notebook",
"jupyter"
] | I would like to include an image in a Jupyter notebook.
If I did the following, it works:
```
from IPython.display import Image
Image("img/picture.png")
```
But I would like to include the images in a markdown cell and the following code gives a 404 error:
```

```
I also tried
```

```
But I still get the same error:
```
404 GET /notebooks/%22/home/user/folder/img/picture.png%22 (127.0.0.1) 2.74ms referer=http://localhost:8888/notebooks/notebook.ipynb
``` | There are several ways to post an image in Jupyter notebooks:
## via HTML:
```
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "http://my_site.com/my_picture.jpg")
```
You retain the ability to use HTML tags to resize, etc...
```
Image(url= "http://my_site.com/my_picture.jpg", width=100, height=100)
```
You can also display images stored locally, either via relative or absolute path.
```
PATH = "/Users/reblochonMasque/Documents/Drawings/"
Image(filename = PATH + "My_picture.jpg", width=100, height=100)
```
**if the image is wider than the display settings:** [thanks](http://stackoverflow.com/questions/36766082/how-to-display-large-image-from-ipython-notebook-with-scrolling)
use `unconfined=True` to disable max-width confinement of the image
```
from IPython.core.display import Image, display
display(Image('https://i.ytimg.com/vi/j22DmsZEv30/maxresdefault.jpg', width=1900, unconfined=True))
```
## or via markdown:
**for a web image:**
```

```
as shown by @cristianmtr
Pay attention not to use either these quotes `""` or those `''` around the url.
**or a local one:**
```

```
demonstrated by @Sebastian |
How to Include image or picture in jupyter notebook | 32,370,281 | 19 | 2015-09-03T08:09:58Z | 32,371,085 | 19 | 2015-09-03T08:52:02Z | [
"python",
"ipython-notebook",
"jupyter"
] | I would like to include image in a jupyter notebook.
If I do the following, it works:
```
from IPython.display import Image
Image("img/picture.png")
```
But I would like to include the images in a markdown cell and the following code gives a 404 error:
```

```
I also tried
```

```
But I still get the same error:
```
404 GET /notebooks/%22/home/user/folder/img/picture.png%22 (127.0.0.1) 2.74ms referer=http://localhost:8888/notebooks/notebook.ipynb
``` | You mustn't use quotation marks around the name of the image files in markdown!
If you carefully read your error message, you will see the two `%22` parts in the link. That is the html encoded quotation mark.
You have to change the line
```

```
to
```

``` |
Why is this generator expression function slower than the loop version? | 32,382,230 | 10 | 2015-09-03T17:44:07Z | 32,382,281 | 10 | 2015-09-03T17:47:26Z | [
"python",
"performance",
"python-2.7",
"generator-expression"
] | I have been operating under the theory that generator expressions tend to be more efficient than normal loops. But then I ran into the following example: write a function which given a number, `N`, and some factors, `ps`, returns the sum of all the numbers under `N` that are a multiple of at least one factor.
Here is a loop version and a shorter generator expression version:
```
def loops(N, ps):
total_sum = 0
for i in xrange(N):
for p in ps:
if i%p == 0:
total_sum += i
break
return total_sum
def genexp(N, ps):
return sum(i for i in xrange(N)
if any(i%p == 0 for p in ps))
```
I'd expect the two to perform roughly equal, with maybe the comprehension version a little faster, but what I didn't expect was this:
```
for func in ('loops', 'genexp'):
print func, timeit.timeit('%s(100000, [3,5,7])' % func,
number=100,
setup='from __main__ import %s' % func)
loops 2.82878184319
genexp 10.1663100719
```
4x slower isn't even close! Why? What am I misunderstanding? | First of all: generator expressions are *memory* efficient, not necessarily speed efficient.
Your compact `genexp()` version is slower for two reasons:
* Generator expressions are implemented using a new scope (like a new function). You are producing *N* new scopes for each `any()` test. Creating a new scope and tearing it down again is relatively expensive, certainly when done in a loop and then compared with code that doesn't do this.
* The `sum()` and `any()` names are additional globals to be looked up. In the case of `any()`, that's an additional *N* global lookups per test. Globals must be looked up in a dictionary, versus locals which are looked up by index in a C-array (which is very fast).
The latter is but a small component, most of the cost lies in creating and destroying frames (scopes); if you create a version where `_any` and `_sum` are locals to the function you get but a small improvement in performance:
```
>>> def genexp_locals(N, ps, _any=any, _sum=sum):
... return _sum(i for i in xrange(N)
... if _any(i%p == 0 for p in ps))
...
>>> for func in ('loops', 'genexp', 'genexp_locals'):
... print func, timeit.timeit('%s(100000, [3,5,7])' % func,
... number=100,
... setup='from __main__ import %s' % func)
...
loops 2.00835800171
genexp 6.45241594315
genexp_locals 6.23843789101
```
I didn't create a local for `xrange` to keep that aspect the same. Technically speaking, the `_any` name is looked up as a closure, not a local, by the generator expression code object, which are not as slow as global lookups but not quite as speedy as a local lookup either. |
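To confirm that the two versions compute the same thing, here is a quick correctness sketch, ported to Python 3's `range`:

```python
# Same results from both versions (Python 3: range instead of xrange).
def loops(N, ps):
    total_sum = 0
    for i in range(N):
        for p in ps:
            if i % p == 0:
                total_sum += i
                break
    return total_sum

def genexp(N, ps):
    return sum(i for i in range(N) if any(i % p == 0 for p in ps))

print(loops(10, [3, 5]))                                # 23 (0+3+5+6+9)
print(loops(100, [3, 5, 7]) == genexp(100, [3, 5, 7]))  # True
```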
Precision difference when printing Python and C++ doubles | 32,386,768 | 14 | 2015-09-03T23:05:41Z | 32,387,030 | 9 | 2015-09-03T23:36:11Z | [
"python",
"c++",
"floating-point",
"double"
] | I'm currently marvelling over this:
**C++ 11**
```
#include <iostream>
#include <iomanip>
#include <limits>
int main()
{
double d = 1.305195828773568;
std::cout << std::setprecision(std::numeric_limits<double>::max_digits10) << d << std::endl;
// Prints 1.3051958287735681
}
```
**Python**
```
>>> repr(1.305195828773568)
'1.305195828773568'
```
What's going on, why the extra 1 in C++?
So far I thought that C++ and Python use the same 64 bit IEEE doubles under the hood; both formatting functions are supposed to print the full precision. | you can force python to print the 1 as well (and many more of the following digits):
```
print('{:.16f}'.format(1.305195828773568))
# -> 1.3051958287735681
```
from <https://docs.python.org/2/tutorial/floatingpoint.html>:
> ```
> >>> 7205759403792794 * 10**30 // 2**56
> 100000000000000005551115123125L
> ```
>
> In versions prior to Python 2.7 and Python 3.1, Python rounded this
> value to 17 significant digits, giving â0.10000000000000001â. In
> current versions, Python displays a value based on the shortest
> decimal fraction that rounds correctly back to the true binary value,
> resulting simply in â0.1â.
"print the full precision" is hard to do: what is the full precision? the representation of floats is binary; only fractions of powers of 2 can be represented exactly (to full precision); most decimal fractions can not be represented exactly in base 2.
but the float in the memory will be the same for python and c++; it is just the string representation that differs. |
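A quick round-trip sketch (Python 3 syntax) showing both points: the short repr reproduces the exact float, and a 17-significant-digit format mirrors what C++'s `max_digits10` prints:

```python
# Round-trip sketch: repr gives the shortest string that reproduces the
# exact double; '.17g' mirrors C++'s max_digits10 output.
d = 1.305195828773568
short = repr(d)
long_form = format(d, '.17g')
print(short)               # 1.305195828773568
print(long_form)           # 17-digit form, e.g. 1.3051958287735681
print(float(short) == d)   # True: the short form round-trips exactly
```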
Anaconda ImportError: libSM.so.6: cannot open shared object file: No such file or directory | 32,389,599 | 6 | 2015-09-04T03:27:15Z | 32,389,631 | 12 | 2015-09-04T03:31:34Z | [
"python",
"matplotlib",
"anaconda"
] | Here's my python import statements
```
import plotly as py
import pandas as pd
import numpy as np
import plotly.plotly as py
import plotly.tools as plotly_tools
from plotly.graph_objs import *
os.environ['MPLCONFIGDIR'] = tempfile.mkdtemp()
from matplotlib.finance import quotes_historical_yahoo
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
from IPython.display import HTML
```
It throws an ImportError: **`ImportError: libSM.so.6: cannot open shared object file: No such file or directory`**
I know there is a problem with this import statement
`import matplotlib.pyplot as plt` | **Try this command if you are using ubuntu:**
`pyqt4` might be missing
```
sudo apt-get install -y python-qt4
```
It worked for me. |
Import Error: No module name libstdcxx | 32,389,977 | 6 | 2015-09-04T04:16:56Z | 33,897,420 | 14 | 2015-11-24T15:21:28Z | [
"python",
"c++",
"c",
"linux"
] | When I use gdb to debug my C++ program with a **segmentation fault**, I get this error in gdb.
> Traceback (most recent call last):
> File "/usr/share/gdb/auto-load/usr/lib/x86\_64-linux- gnu/libstdc++.so.6.0.19-gdb.py", line 63, in
> from libstdcxx.v6.printers import register\_libstdcxx\_printers
> ImportError: No module named 'libstdcxx'
I am using GDB 7.7.1 and g++ version 4.8.4. I have googled around but haven't found answers. Can anyone solve my error? Thank you very much. | This is a bug in /usr/lib/debug/usr/lib/$triple/libstdc++.so.6.0.18-gdb.py;
When you start gdb, please enter:
`python sys.path.append("/usr/share/gcc-4.8/python");` |
Import Error: No module name libstdcxx | 32,389,977 | 6 | 2015-09-04T04:16:56Z | 34,350,497 | 9 | 2015-12-18T07:31:14Z | [
"python",
"c++",
"c",
"linux"
] | When I use gdb to debug my C++ program with a **segmentation fault**, I get this error in gdb.
> Traceback (most recent call last):
> File "/usr/share/gdb/auto-load/usr/lib/x86\_64-linux- gnu/libstdc++.so.6.0.19-gdb.py", line 63, in
> from libstdcxx.v6.printers import register\_libstdcxx\_printers
> ImportError: No module named 'libstdcxx'
I am using GDB 7.7.1 and g++ version 4.8.4. I have googled around but haven't found answers. Can anyone solve my error? Thank you very much. | I encountered this error while using gdb in emacs (in a docker container - ubuntu).
I tried the following and it worked well.
(1) open libstdc++.so.x.x.x-gdb.py
```
sh> sudo vi /usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19-gdb.py
```
(2) modify that file (libstdc++.so.x.x.x-gdb.py) like below.
```
import sys
import gdb
import os
import os.path
pythondir = '/usr/share/gcc-4.8/python'
libdir = '/usr/lib/x86_64-linux-gnu'
sys.path.append(pythondir)  # <-- add this line
```
(3) execute gdb again
```
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./a.out...done.
(gdb) b main
Breakpoint 1 at 0x400ae9: file vector7.cpp, line 7.
(gdb) r
Starting program: /home/dplee/work/study_room/c++/a.out
Breakpoint 1, main () at vector7.cpp:7
7 vector<int> v(10);
(gdb) list
2 #include <vector>
3 using namespace std;
4
5 int main()
6 {
7 vector<int> v(10);
8 int num = 0;
9
10 for(auto& i : v)
11 {
(gdb)
``` |
Printing nested lists in certain directions - Python | 32,397,765 | 3 | 2015-09-04T12:11:33Z | 32,397,956 | 9 | 2015-09-04T12:21:51Z | [
"python"
] | I am trying a problem to make Graphical designs in Python, more specifically ASCII "banner" words. Each letter is made up of a nested list of lines of the character.
```
[[' _______ ', '( )', '| () () |', '| || || |', '| |(_)| |', '| | | |', '| ) ( |', '|/ \\|'], [' _______ ', '( ____ \\', '| ( \\/', '| (__ ', '| __) ', '| ( ', '| (____/\\', '(_______/']]
```
etc.
When printed down for each nested list and across for the whole thing, they make a word.
I am having trouble printing it as I said above, down for each nested list and across for the whole thing.
Thanks in advance! | If you want to print the letters from left to right, you will have to [`zip`](https://docs.python.org/3/library/functions.html#zip) the list of lists with itself, effectively ["transposing"](http://stackoverflow.com/q/4937491/1639625) it. This way, the first list will have all the first rows, the second list all the second rows, and so on. Now just [`join`](https://docs.python.org/3/library/stdtypes.html#str.join) those and you are done.
```
>>> ascii = [[' _______ ', '( )', '| () () |', '| || || |', '| |(_)| |', '| | | |', '| ) ( |', '|/ \\|'], [' _______ ', '( ____ \\', '| ( \\/', '| (__ ', '| __) ', '| ( ', '| (____/\\', '(_______/']]
>>> print '\n'.join((' '.join(line) for line in zip(*ascii)))
_______ _______
( ) ( ____ \
| () () | | ( \/
| || || | | (__
| |(_)| | | __)
| | | | | (
| ) ( | | (____/\
|/ \| (_______/
```
And if you want to print the letters from top to bottom, you can use this:
```
>>> print '\n\n'.join(('\n'.join(line) for line in ascii))
``` |
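The transpose trick is easier to see on a tiny made-up example (Python 3 print syntax; the two-row "letters" below are illustrative):

```python
# Two "letters", each a list of rows; zip(*...) regroups them row by row.
letters = [['ab', 'cd'], ['ef', 'gh']]
rows = [' '.join(line) for line in zip(*letters)]
print('\n'.join(rows))
# ab ef
# cd gh
```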
Pandas read_csv from url | 32,400,867 | 7 | 2015-09-04T14:44:24Z | 32,400,969 | 10 | 2015-09-04T14:50:24Z | [
"python",
"csv",
"pandas",
"request"
] | I am using Python 3.4 with IPython and have the following code. I'm unable to read a csv-file from the given URL:
```
import pandas as pd
import requests
url="https://github.com/cs109/2014_data/blob/master/countries.csv"
s=requests.get(url).content
c=pd.read_csv(s)
```
I have the following error
> "Expected file path name or file-like object, got type"
How can I fix this? | Just as the error suggests , `pandas.read_csv` needs a file-like object as the first argument.
If you want to read the csv from a string, you can use [`io.StringIO`](https://docs.python.org/3/library/io.html#io.StringIO) (Python 3.x) or [`StringIO.StringIO` (Python 2.x)](https://docs.python.org/2/library/stringio.html#StringIO.StringIO) .
Also, for the URL - <https://github.com/cs109/2014_data/blob/master/countries.csv> - you are getting back `html` response , not raw csv, you should use the url given by the `Raw` link in the github page for getting raw csv response , which is - <https://raw.githubusercontent.com/cs109/2014_data/master/countries.csv>
Example -
```
import pandas as pd
import io
import requests
url="https://raw.githubusercontent.com/cs109/2014_data/master/countries.csv"
s=requests.get(url).content
c=pd.read_csv(io.StringIO(s.decode('utf-8')))
``` |
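The file-like-object principle can be sketched without a network call using the stdlib `csv` module (the CSV text below is made up for illustration):

```python
# A string becomes a file-like object via io.StringIO, which any
# CSV reader can consume; the data here is illustrative.
import csv
import io

raw = "Country,Region\nAlgeria,AFRICA\nAngola,AFRICA\n"
reader = csv.reader(io.StringIO(raw))
rows = list(reader)
print(rows[0])    # ['Country', 'Region'] -- header row
print(len(rows))  # 3
```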
caffe installation : opencv libpng16.so.16 linkage issues | 32,405,035 | 3 | 2015-09-04T19:09:21Z | 32,514,285 | 13 | 2015-09-11T01:49:40Z | [
"python",
"opencv",
"ubuntu",
"anaconda",
"caffe"
] | I am trying to compile caffe with python interface on an Ubuntu 14.04 machine.
I have installed Anaconda and opencv with `conda install opencv`. I have also installed all the requirements stipulated by caffe and changed the commented blocks in `makefile.config` so that PYTHON_LIB and PYTHON_INCLUDE point towards the Anaconda distributions.
When I am calling `make all`, the following command is issued:
```
g++ .build_release/tools/caffe.o -o .build_release/tools/caffe.bin -pthread
-fPIC -DNDEBUG -O2 -DWITH_PYTHON_LAYER
-I/home/andrei/anaconda/include
-I/home/andrei/anaconda/include/python2.7
-I/home/andrei/anaconda/lib/python2.7/site-packages/numpy/core/include
-I/usr/local/include
-I/home/andrei/anaconda/lib
-I/lib/x86_64-linux-gnu
-I/lib64
-I/usr/lib/x86_64-linux-gnu
-I.build_release/src
-I./src
-I./include
-I/usr/include
-Wall -Wno-sign-compare -lcaffe
-L/home/andrei/anaconda/lib
-L/home/andrei/anaconda/lib/././
-L/usr/local/lib -L/usr/lib
-L/home/andrei/anaconda/lib/././libpng16.so.16
-L/lib/x86_64-linux-gnu
-L/lib64
-L/usr/lib/x86_64-linux-gnu
-L/usr/lib
-L.build_release/lib
-lcudart -lcublas -lcurand -lglog -lgflags -lprotobuf -lleveldb -lsnappy
-llmdb -lboost_system -lhdf5_hl -lhdf5 -lm
-lopencv_core -lopencv_highgui -lopencv_imgproc -lboost_thread -lstdc++
-lboost_python -lpython2.7 -lcblas -latlas \
-Wl,-rpath,\$ORIGIN/../lib
```
However, it is stopped by the following set of errors:
```
/usr/bin/ld: warning: libpng16.so.16, needed by /home/andrei/anaconda/lib/libopencv_highgui.so, not found (try using -rpath or -rpath-link)
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_create_read_struct@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_interlace_handling@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_IHDR@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_get_io_ptr@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_longjmp_fn@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_gray_to_rgb@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_compression_level@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_bgr@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_filter@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_rgb_to_gray@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_init_io@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_destroy_read_struct@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_swap@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_get_IHDR@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_palette_to_rgb@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_compression_strategy@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_get_tRNS@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_write_info@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_packing@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_read_fn@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_create_info_struct@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_read_end@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_read_update_info@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_write_image@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_write_end@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_expand_gray_1_2_4_to_8@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_create_write_struct@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_read_image@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_read_info@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_strip_alpha@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_write_fn@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_destroy_write_struct@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_error@PNG16_0'
/home/andrei/anaconda/lib/libopencv_highgui.so: undefined reference to `png_set_strip_16@PNG16_0'
```
Following the advice from this question: [Caffe install on ubuntu for anaconda with python 2.7 fails with libpng16.so.16 not found](http://stackoverflow.com/questions/31962975/caffe-install-on-ubuntu-for-anaconda-with-python-2-7-fails-with-libpng16-so-16-n), I tried running the `ldd /home/andrei/anaconda/lib/libopencv_highgui`, and obtained the following output:
```
linux-vdso.so.1 => (0x00007fff1a104000)
libopencv_core.so.2.4 => /home/andrei/anaconda/lib/././libopencv_core.so.2.4 (0x00007ff18e8a0000)
libopencv_imgproc.so.2.4 => /home/andrei/anaconda/lib/././libopencv_imgproc.so.2.4 (0x00007ff18e3f0000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ff18e1ec000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ff18dfce000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007ff18ddc6000)
libz.so.1 => /home/andrei/anaconda/lib/././libz.so.1 (0x00007ff18dbb0000)
libjpeg.so.8 => /home/andrei/anaconda/lib/././libjpeg.so.8 (0x00007ff18d979000)
libpng16.so.16 => /home/andrei/anaconda/lib/././libpng16.so.16 (0x00007ff18d737000)
libgthread-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0 (0x00007ff18d535000)
libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007ff18d22d000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007ff18cf29000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ff18cc23000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ff18ca0d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ff18c648000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff18f0d0000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007ff18c40a000)
```
I then proceeded to add all the required directories into the `INCLUDE_DIRS` and `LIBRARY_DIRS` of the `make.config` file (hence the additional `-I` and `-L` in the `g++` call above), including the explicit link to `libpng16.so.16` already present in the anaconda libraries. This has, however, not resolved the issue. I also tried adding the file to `$LD_LIBRARY_PATH` and `$LD_RUN_PATH`, but without any effect.
What might the problem be and how could I resolve it? | I came across the same problem. I found it similar to <https://github.com/BVLC/caffe/issues/2007>, and I solved it by
```
cd /usr/lib/x86_64-linux-gnu
sudo ln -s ~/anaconda/lib/libpng16.so.16 libpng16.so.16
sudo ldconfig
``` |
Bitwise-OR the elements of a list in python using lambda | 32,406,128 | 2 | 2015-09-04T20:30:06Z | 32,406,177 | 9 | 2015-09-04T20:33:19Z | [
"python",
"python-2.7"
] | Can I use the lambda function this way to bitwise-OR all the elements in the list?
```
lst = [1, 1, 1]
f = lambda x: x | b for b in lst
```
When I do this I get a `SyntaxError`. | You want [`reduce`](https://docs.python.org/2/library/functions.html#reduce):
```
f = reduce(lambda x, y: x | y, lst)
```
`reduce` accepts a binary function and an iterable, and applies the operator between all elements starting from the first pair.
Note: in Python 3 it moves to the [`functools`](https://docs.python.org/3.2/library/functools.html#functools.reduce) module.
You can also use the [`or_`](https://docs.python.org/2/library/operator.html#operator.or_) function from the `operator` module instead of writing the lambda yourself:
```
from operator import or_
f = reduce(or_, lst)
``` |
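A minimal Python 3 sketch of the same idea — there, `reduce` lives in `functools`:

```python
# Python 3: reduce moved to the functools module.
from functools import reduce
from operator import or_

lst = [1, 2, 4]
combined = reduce(or_, lst)
print(combined)  # 7  (0b001 | 0b010 | 0b100)
print(reduce(lambda x, y: x | y, [1, 1, 1]))  # 1
```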
Why does this Python script delete '.txt' files, as well as all the '.tmp' files? | 32,407,377 | 3 | 2015-09-04T22:22:04Z | 32,407,404 | 7 | 2015-09-04T22:25:00Z | [
"python",
"python-2.7",
"os.walk",
"os.path"
] | I'm trying to write a script that will automatically delete all the temp files in a specific folder, and I noticed that this script also deletes all the *text files* in that folder as well. Can anyone explain why it does that?
```
import os
path = 'C:\scripts27'
for root, dirs, files in os.walk(path):
for currentFile in files:
print "processing file: " + currentFile
extensions=('.tmp')
if any(currentFile.lower().endswith(ext) for ext in extensions):
os.remove(os.path.join(root, currentFile))
```
I'm running this script using Python 2.7.10 on a Windows 8.1 PC 64-bit.
Thanks! | I'm assuming you meant providing a list of extensions. But in your case, `extensions` is defined as `('.tmp')` which is *not* a tuple but a string. This causes your code to loop over all files and check for names ending with `.`, `t`, `m` and `p` thereby deleting your `.txt` files.
The fix here is to define extensions as `['.tmp']` or `('.tmp',)` (notice the trailing `,`). |
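The misfire is easy to reproduce without touching the filesystem (a small sketch):

```python
# Iterating a bare string yields its characters, so the "extensions"
# become '.', 't', 'm', 'p' -- and 'notes.txt' ends with 't'.
bad_extensions = ('.tmp')    # this is just the string '.tmp'
good_extensions = ('.tmp',)  # a one-element tuple

name = 'notes.txt'
print(any(name.endswith(ext) for ext in bad_extensions))   # True: matches 't'
print(any(name.endswith(ext) for ext in good_extensions))  # False
```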
What does `{...}` mean in the print output of a python variable? | 32,408,387 | 6 | 2015-09-05T00:39:09Z | 32,408,441 | 11 | 2015-09-05T00:48:06Z | [
"python",
"ellipsis"
] | Someone posted [this interesting formulation](https://stackoverflow.com/questions/32127908/python-assignment-operator-precedence-a-b-ab-5), and I tried it out in a Python 3 console:
```
>>> (a, b) = a[b] = {}, 5
>>> a
{5: ({...}, 5)}
```
While there is a lot to unpack here, what I don't understand (and the semantics of interesting character formulations seems particularly hard to search for) is what the `{...}` means in this context? Changing the above a bit:
```
>>> (a, b) = a[b] = {'x':1}, 5
>>> a
{5: ({...}, 5), 'x': 1}
```
It is this second output that really baffles me: I would have expected the `{...}` to have been altered, but my nearest **guess** is that the `, 5` implies a tuple where the first element is somehow undefined? And that is what the `{...}` means? If so, this is a new category of type for me in Python, and I'd like to have a name for it so I can learn more. | It's an indication that the dict recurses, i.e. contains itself. A much simpler example:
```
>>> a = []
>>> a.append(a)
>>> a
[[...]]
```
This is a list whose only element is itself. Obviously the repr can't be printed literally, or it would be infinitely long; instead, the builtin types notice when this has happened and use `...` to indicate self-containment.
So it's not a special type of value, just the normal English use of "..." to mean "something was omitted here", plus braces to indicate the omitted part is a dict. You may also see it with brackets for a list, as shown above, or occasionally with parentheses for a tuple:
```
>>> b = [],
>>> b[0].append(b)
>>> b
([(...)],)
```
Python 3 provides some tools so you can do this with your own objects, in the form of [`reprlib`](https://docs.python.org/3/library/reprlib.html). |
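The self-reference in the original example can be checked explicitly (Python 3 sketch):

```python
# The dict is bound to `a` first, then stored inside itself via a[b].
(a, b) = a[b] = {}, 5
print(a[5][0] is a)      # True: the dict contains itself
print('...' in repr(a))  # True: repr marks the cycle with ...
```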
What is the right way to debug in iPython notebook? | 32,409,629 | 10 | 2015-09-05T04:53:43Z | 32,410,100 | 7 | 2015-09-05T06:13:10Z | [
"python",
"python-2.7",
"ipython",
"ipython-notebook",
"pdb"
] | As I know, %debug magic can do debug within one cell.
However, I have function calls across multiple cells.
For example,
```
In[1]: def fun1(a)
def fun2(b)
# I want to set a breakpoint for the following line #
return do_some_thing_about(b)
return fun2(a)
In[2]: import multiprocessing as mp
pool=mp.Pool(processes=2)
results=pool.map(fun1, 1.0)
pool.close()
pool.join
```
What I tried:
1. I tried to set %debug in the first line of cell-1. But it entered debug mode immediately, even before executing cell-2.
2. I tried to add %debug in the line right before the code "return do\_some\_thing\_about(b)". But then the code ran forever and never stopped.
What is the right way to set a break point within ipython notebook? | Use **ipdb**
Install it via
```
pip install ipdb
```
Usage:
```
In[1]: def fun1(a):
def fun2(a):
import ipdb; ipdb.set_trace() # debugging starts here
return do_some_thing_about(b)
return fun2(a)
In[2]: fun1(1)
```
For executing line by line use `n` and for step into a function use `s` and to exit from debugging prompt use `c`.
For complete list of available commands: <http://frid.github.io/blog/2014/06/05/python-ipdb-cheatsheet/> |
Python: Define a function only if package exists | 32,414,401 | 8 | 2015-09-05T14:49:40Z | 32,414,442 | 8 | 2015-09-05T14:55:11Z | [
"python"
] | Is it possible to tell Python 2.7 to only parse a function definition if a package exists?
I have a script that is run on multiple machines. There are some functions defined in the script that are very nice to have, but aren't required for the core operations the script performs. Some of the machines the script is run on don't have the package that the function imports (and the package can't be installed on them). Currently I have to comment out the function definition before cloning the repo onto those machines. Another solution would be to maintain two different branches, but that is even more tedious. Is there a solution that prevents us from having to constantly comment out code before pushing?
There are already solutions for when the function is called, such as this:
```
try:
someFunction()
except NameError:
print("someFunction() not found.")
``` | Function definitions and imports are just code in Python, and like other code, you can wrap them in a `try`:
```
try:
import bandana
except ImportError:
pass # Hat-wearing functions are optional
else:
def wear(hat):
bandana.check(hat)
...
```
This would define the `wear` function only if the `bandana` module is available.
Whether this is a good idea or not is up to you - I think it would be fine in your own scripts, but you might not want to do this in code other people will use. Another idea might be to do something like this:
```
def wear(hat):
try:
import bandana
except ImportError:
raise NotImplementedError("You need the bandana package to wear hats")
else:
bandana.check(hat)
...
```
This would make it clearer why you can't use the `wear` function. |
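A runnable variant of the same idea records availability in a flag at import time (the `bandana` package is hypothetical; on machines without it the import simply fails):

```python
# Sketch: probe the optional dependency once, fail loudly only on use.
try:
    import bandana  # hypothetical optional dependency
    HAVE_BANDANA = True
except ImportError:
    HAVE_BANDANA = False

def wear(hat):
    # Only raise when the optional feature is actually exercised
    if not HAVE_BANDANA:
        raise NotImplementedError("You need the bandana package to wear hats")
    bandana.check(hat)
```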
highest value that is less than 0 in a list that has mix of negative and positive values | 32,417,954 | 6 | 2015-09-05T21:41:26Z | 32,417,985 | 12 | 2015-09-05T21:45:09Z | [
"python",
"list"
] | I have a list with these values.
```
lst1 = [1,-2,-4,-8,-9,-12,0,39,12,-3,-7]
```
I need to get the max value that is less than zero.
If I do `print max(lst1)`, I get 39, and what I need is -2.
With `print max(p < 0 for p in lst1)`, I get True and not -2 | Never mind, I figured it out, and it should be
```
print max(p for p in lst1 if p < 0)
``` |
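One edge case worth noting (a sketch; the `default` keyword needs Python 3.4+): when the list has no negative values the generator is empty and `max()` raises `ValueError` unless a default is supplied.

```python
# Python 3.4+ syntax: an empty generator needs a default for max().
lst1 = [1, -2, -4, -8, -9, -12, 0, 39, 12, -3, -7]
print(max(p for p in lst1 if p < 0))                       # -2
print(max((p for p in [1, 2, 3] if p < 0), default=None))  # None
```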
Accelerating scientific python program | 32,426,829 | 3 | 2015-09-06T18:24:23Z | 32,427,481 | 8 | 2015-09-06T19:34:28Z | [
"python",
"numpy",
"scientific-computing"
] | I have the following code in python:
```
def P(z, u0):
x = np.inner(z, u0)
tmp = x*u0
return (z - tmp)
def powerA2(A, u0):
x0 = np.random.rand(len(A))
for i in range(ITERATIONS):
x0 = P(np.dot(A, x0), u0)
x0 = x0 / np.linalg.norm(x0)
return (np.inner(np.dot(A, x0), x0))
```
`np` is **`numpy`** package.
I am interested in running this code for matrices in size of 100,000 \* 100,000, but it seems that there is no chance for this program to run fast (I need to run it many times, about 10,000).
**Is there any chance that tricks like multi-threading would work here?**
**Does anything else help to accelerate it?** | You could consider using [pythran](http://pythonhosted.org/pythran). Compiling the following code (`norm.py`):
```
#pythran export powerA2(float [][], float[])
import numpy as np
def P(z, u0):
x = np.inner(z, u0)
tmp = x*u0
return (z - tmp)
def norm(x):
return np.sqrt(np.sum(np.abs(x)**2))
def powerA2(A, u0):
ITERATIONS = 100
x0 = np.random.random(len(A))
for i in range(ITERATIONS):
x0 = P(np.dot(A, x0), u0)
x0 = x0 / norm(x0)
return (np.inner(np.dot(A, x0), x0))
```
with:
```
pythran norm.py
```
yields the following speedup:
```
$ python -m timeit -s 'import numpy as np; A =np.random.rand(100,100); B = np.random.random(100); import norm' 'norm.powerA2(A,B)'
100 loops, best of 3: 3.1 msec per loop
$ pythran norm.py -O3 -march=native
$ python -m timeit -s 'import numpy as np; A =np.random.rand(100,100); B = np.random.random(100); import norm' 'norm.powerA2(A,B)'
1000 loops, best of 3: 937 usec per loop
``` |
Saving Matplotlib graphs to image as full screen | 32,428,193 | 2 | 2015-09-06T20:52:11Z | 32,428,266 | 8 | 2015-09-06T21:00:55Z | [
"python",
"pandas",
"matplotlib"
] | I'm building a small graphing utility using Pandas and MatPlotLib to parse data and output graphs from a machine at work.
When I output the graph using
```
plt.show()
```
I end up with an unclear image that has legends and labels crowding each other out like so.
[](http://i.stack.imgur.com/eFRBG.png)
However, expanding the window to full-screen resolves my problem, repositioning everything in a way that allows the graph to be visible.
I then save the graph to a .png like so
```
plt.savefig('sampleFileName.png')
```
But when it saves to the image, the full-screen, correct version of the plot isn't saved, but instead the faulty default version.
How can I save the full-screen plt.show() of the plot to .png?
I hope I'm not too confusing.
Thank you for your help! | The method you use to maximise the window size depends on which matplotlib backend you are using. Please see the following example for the 3 most common backends:
```
import matplotlib.pyplot as plt
plt.figure()
plt.plot([1,2], [1,2])
# Option 1
# QT backend
manager = plt.get_current_fig_manager()
manager.window.showMaximized()
# Option 2
# TkAgg backend
manager = plt.get_current_fig_manager()
manager.resize(*manager.window.maxsize())
# Option 3
# WX backend
manager = plt.get_current_fig_manager()
manager.frame.Maximize(True)
plt.show()
plt.savefig('sampleFileName.png')
```
You can determine which backend you are using with the command `matplotlib.get_backend()`. When you save the maximized version of the figure it will save a larger image as desired. |
Wagtail: Display a list of child pages inside a parent page | 32,429,113 | 4 | 2015-09-06T23:02:26Z | 32,436,798 | 7 | 2015-09-07T10:46:59Z | [
"python",
"django",
"wagtail"
] | In Wagtail CMS, I'm trying to create an index page that will display a list of all its child pages along with a featured image associated with each child page.
I have created these two page models in models.py:
```
class IndexPage(Page):
intro = RichTextField(blank=True)
content_panels = Page.content_panels + [
FieldPanel('intro', classname='full'),
]
subpage_types = ['myapp.ItemPage']
class ItemPage(Page):
representative_image = models.ForeignKey(
'wagtailimages.Image',
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name='+'
)
body = RichTextField(blank=True)
promote_panels = Page.promote_panels + [
ImageChooserPanel('representative_image'),
]
content_panels = Page.content_panels + [
FieldPanel('body', classname='full'),
]
```
In the template index\_page.html, I added the following code:
```
<div class="intro">{{ self.intro|richtext }}</div>
{% for page in self.get_children %}
{{ page.title }}
{% image page.representative_image width-400 %}
{% endfor %}
```
This displays all of the child page titles, but not the images. Is it possible to retrieve the image fields for the child pages? | From the [release notes of wagtail version 1.1](https://github.com/torchbox/wagtail/blob/master/docs/releases/1.1.rst):
> Usually, an operation that retrieves a queryset of pages (such as `homepage.get_children()`) will return them as basic Page instances, which only include the core page data such as title. The `specific()` method (e.g. `homepage.get_children().specific()`) now allows them to be retrieved as their most specific type, using the minimum number of queries.
Therefore you don't need your custom function in the upcoming release 1.1 anymore and you can change your template to:
```
{% for page in self.get_children.specific %}
{{ page.title }}
{% image page.representative_image width-400 %}
{% endfor %}
```
At least since version 0.8 the following should work, too, using `specific`:
```
{% for page in self.get_children %}
{{ page.title }}
{% image page.specific.representative_image width-400 %}
{% endfor %}
``` |
What does django rest framework mean trade offs between view vs viewsets? | 32,430,689 | 3 | 2015-09-07T03:29:43Z | 32,431,191 | 9 | 2015-09-07T04:39:49Z | [
"python",
"django",
"python-2.7",
"view",
"django-rest-framework"
] | I don't know why the documentation says "That doesn't mean it's always the right approach to take. There's a similar set of trade-offs to consider as when using class-based views instead of function based views. Using viewsets is less explicit than building your views individually."
If I want to make a REST API similar to Ruby on Rails, I think `viewsets` is a good approach to take.
Can anyone explain more about it?
Document link:
<http://www.django-rest-framework.org/tutorial/6-viewsets-and-routers/#trade-offs-between-views-vs-viewsets> | The main advantage of using `viewsets` over `views` is brevity. In the simple case you can get more done with fewer lines of code.
The main disadvantage is that the simplifying assumptions made by `viewsets` might not always fit the problem space you are working in. As with class-based views in Django, if you try to apply the wrong pattern to a problem you can end up doing more work than you need to to solve a problem.
My personal heuristic is that if I am doing the full set of CRUD operations on a model, I start with `viewsets` and go from there until I feel the convenience they provide is no longer worth the trouble I am incurring in that specific instance; if I am working with an API endpoint that doesn't map to any models, I'm far more likely to just use a `view`.
**Edit**:
In response to your comment, here's an example in code. If I had the following models:
**models.py**
```
from django.db import models
class Gizmo(models.Model):
name = models.CharField(blank=True, null=False)
last_dusted = models.DateField(null=True)
class Sprocket(models.Model):
nom = models.CharField(blank=True, null=False)
last_dusted = models.DateField(null=True)
```
And I wanted to support the standard HTTP methods with their normal meanings, (namely GET and POST on the list view and GET, PUT, and DELETE on the detail view), I'd create a `GizmoViewSet`, a `SprocketViewSet` and call it a day.
Say I also wanted to offer API consumers the ability to dust off all of the gizmos at once. In that case it would make sense to add a `dust` method to the `GizmoViewSet` using the `@list_route` decorator. Suppose that what I really wanted to do though was to offer a single endpoint where that API consumer could dust all the `Gizmo`s and the `Sprocket`s off at once. That doesn't really map very well to either viewset, so I'd add a one off view:
```
import datetime
from rest_framework.decorators import api_view
from rest_framework.response import Response
from my_app.models import Gizmo, Sprocket
# I used a function-based API view here, but a CBV APIView
# would work just as well. Matter of personal preference...
@api_view
def dust_everything(request):
today = datetime.date.today()
Gizmo.objects.all().update(last_dusted=today)
Sprocket.objects.all().update(last_dusted=today)
return Response({"status_of_dusting": "successful"})
```
So in that case I wouldn't be tearing out all my viewsets and replacing them with views; I'm adding an extra view to supplement the existing viewsets where it makes sense. |
Cx_freeze ImportError no module named scipy | 32,432,887 | 4 | 2015-09-07T07:02:56Z | 32,480,170 | 9 | 2015-09-09T12:56:42Z | [
"python",
"scipy",
"cx-freeze"
] | Good day all,
I am having trouble using cx\_Freeze on a code I am working on converting to a .exe.
When I run cx\_Freeze I get the following ImportError saying there is no module named scipy:
```
running install
running build
running build_exe
Traceback (most recent call last):
File "setup.py", line 25, in <module>
executables = executables
File "C:\Python34\lib\site-packages\cx_Freeze\dist.py", line 362, in setup
distutils.core.setup(**attrs)
File "C:\Python34\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Python34\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "C:\Python34\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Python34\lib\distutils\command\install.py", line 539, in run
self.run_command('build')
File "C:\Python34\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Python34\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Python34\lib\distutils\command\build.py", line 126, in run
self.run_command(cmd_name)
File "C:\Python34\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Python34\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Python34\lib\site-packages\cx_Freeze\dist.py", line 232, in run
freezer.Freeze()
File "C:\Python34\lib\site-packages\cx_Freeze\freezer.py", line 619, in Freeze
self.finder = self._GetModuleFinder()
File "C:\Python34\lib\site-packages\cx_Freeze\freezer.py", line 378, in _GetModuleFinder
finder.IncludePackage(name)
File "C:\Python34\lib\site-packages\cx_Freeze\finder.py", line 686, in IncludePackage
module = self._ImportModule(name, deferredImports)
File "C:\Python34\lib\site-packages\cx_Freeze\finder.py", line 386, in _ImportModule
raise ImportError("No module named %r" % name)
ImportError: No module named 'scipy'
```
I can confirm that I have Scipy 0.16 installed on my system which works when I import it into other python code. I am currently running python 3.4 on Windows. The following is my setup.py file for cx\_Freeze.
```
import cx_Freeze
import sys
import matplotlib
base = None
if sys.platform == 'win32':
base = 'Win32GUI'
executables = [cx_Freeze.Executable('fractureGUI.py', base=base, icon='star_square.ico')]
packages = ['tkinter','matplotlib','scipy']
include_files = ['star_square.ico', 'C:\\Python34\\Lib\\site-packages\\scipy']
cx_Freeze.setup(
name = 'FracturePositionMonteCarlo',
options = {'build_exe': {'packages':packages,
'include_files':include_files}},
version = '0.01',
description = 'Fracture Depth Monte Carlo',
executables = executables
)
```
The following is the import section of my main script, fractureGUI.py.
```
import scipy
from random import random
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
matplotlib.use('TkAgg')
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib import style
style.use('ggplot')
import tkinter as tk
from tkinter import ttk, filedialog
import sys
import json
```
If anybody has any ideas why cx\_Freeze is unable to find scipy please do let me know. I tried to add the filepath to scipy under include\_files but it made no difference.
Kind regards,
Jonnyishman | I had exactly the same issue. Found the solution here:
<https://bitbucket.org/anthony_tuininga/cx_freeze/issues/43/import-errors-when-using-cx_freeze-with>
Find the hooks.py file in the cx\_Freeze folder. Change line 548 from finder.IncludePackage("scipy.lib") to finder.IncludePackage("scipy.\_lib").
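With that hook fix applied, my understanding of the advice is that the setup.py from the question would keep scipy in `packages` and drop the hand-copied path — a hedged sketch of the adjusted fragment:

```python
packages = ['tkinter', 'matplotlib', 'scipy']
# scipy stays in packages; cx_Freeze now locates it via the fixed hook,
# so the site-packages path no longer needs to be copied by hand.
include_files = ['star_square.ico']
```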
Leave the "scipy" entry in packages and delete 'C:\Python34\Lib\site-packages\scipy' in include\_files. |
Python merging two lists with all possible permutations | 32,438,350 | 9 | 2015-09-07T12:06:57Z | 32,438,848 | 11 | 2015-09-07T12:31:42Z | [
"python",
"list",
"itertools"
] | I'm trying to figure out the best way to merge two lists into all possible combinations. So, if I start with two lists like this:
```
list1 = [1, 2]
list2 = [3, 4]
```
The resulting list will look like this:
```
[[[1,3], [2,4]], [[1,4], [2,3]]]
```
That is, it basically produces a list of lists, with all the potential combinations between the two.
I've been working through itertools, which I'm pretty sure holds the answer, but I can't come up with a way to make it act this way. The closest I came was:
```
list1 = [1, 2, 3, 4]
list2 = [5, 6, 7, 8]
print list(itertools.product(list1, list2))
```
Which produced:
```
[(1, 5), (1, 6), (1, 7), (1, 8), (2, 5), (2, 6), (2, 7), (2, 8), (3, 5), (3, 6), (3, 7), (3, 8), (4, 5), (4, 6), (4, 7), (4, 8)]
```
So it does all the possible combinations of items in each list, but not all the possible resulting lists. How do I get that to happen?
EDIT: The end goal is to be able to individually process each list to determine efficiency (the actual data I'm working with is more complex). So, in the original example above, it would work something like this:
```
list1 = [1, 2]
list2 = [3, 4]
Get first merged list: [[1,3], [2, 4]]
Do stuff with this list
Get second merged list: [[1,4], [2, 3]]
Do stuff with this list
```
If I got the "list of lists of lists" output I described above, then I could put it into a for loop and process on. Other forms of output would work, but it seems the simplest to work with. | `repeat` the first list, `permutate` the second and `zip` it all together
```
>>> from itertools import permutations, repeat
>>> a = [1, 2, 3]
>>> b = [4, 5, 6]
>>> list(list(zip(r, p)) for (r, p) in zip(repeat(a), permutations(b)))
[[(1, 4), (2, 5), (3, 6)],
[(1, 4), (2, 6), (3, 5)],
[(1, 5), (2, 4), (3, 6)],
[(1, 5), (2, 6), (3, 4)],
[(1, 6), (2, 4), (3, 5)],
[(1, 6), (2, 5), (3, 4)]]
```
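Applied to the two-element lists from the question, the same pattern yields exactly the two merged lists asked for (note that the pairs come back as tuples; wrap them in `list` if needed):

```python
from itertools import permutations, repeat

list1 = [1, 2]
list2 = [3, 4]
merged = [list(zip(r, p)) for (r, p) in zip(repeat(list1), permutations(list2))]
print(merged)  # [[(1, 3), (2, 4)], [(1, 4), (2, 3)]]
```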
**EDIT**: As Peter Otten noted, the inner `zip` and the `repeat` are superfluous.
```
[list(zip(a, p)) for p in permutations(b)]
``` |
Python merging two lists with all possible permutations | 32,438,350 | 9 | 2015-09-07T12:06:57Z | 32,442,873 | 9 | 2015-09-07T16:29:27Z | [
"python",
"list",
"itertools"
] | I'm trying to figure out the best way to merge two lists into all possible combinations. So, if I start with two lists like this:
```
list1 = [1, 2]
list2 = [3, 4]
```
The resulting list will look like this:
```
[[[1,3], [2,4]], [[1,4], [2,3]]]
```
That is, it basically produces a list of lists, with all the potential combinations between the two.
I've been working through itertools, which I'm pretty sure holds the answer, but I can't come up with a way to make it act this way. The closest I came was:
```
list1 = [1, 2, 3, 4]
list2 = [5, 6, 7, 8]
print list(itertools.product(list1, list2))
```
Which produced:
```
[(1, 5), (1, 6), (1, 7), (1, 8), (2, 5), (2, 6), (2, 7), (2, 8), (3, 5), (3, 6), (3, 7), (3, 8), (4, 5), (4, 6), (4, 7), (4, 8)]
```
So it does all the possible combinations of items in each list, but not all the possible resulting lists. How do I get that to happen?
EDIT: The end goal is to be able to individually process each list to determine efficiency (the actual data I'm working with is more complex). So, in the original example above, it would work something like this:
```
list1 = [1, 2]
list2 = [3, 4]
Get first merged list: [[1,3], [2, 4]]
Do stuff with this list
Get second merged list: [[1,4], [2, 3]]
Do stuff with this list
```
If I got the "list of lists of lists" output I described above, then I could put it into a for loop and process on. Other forms of output would work, but it seems the simplest to work with. | The accepted answer can be simplified to
```
a = [1, 2, 3]
b = [4, 5, 6]
[list(zip(a, p)) for p in permutations(b)]
```
(The list() call can be omitted in Python 2) |
TypeError: int() argument must be a string or a number, not 'datetime.datetime' | 32,440,251 | 6 | 2015-09-07T13:47:13Z | 32,456,737 | 11 | 2015-09-08T11:34:13Z | [
"python",
"django",
"django-orm"
] | I have made App12/models.py module as:
```
from django.db import models
class Question(models.Model):
ques_text=models.CharField(max_length=300)
pub_date=models.DateTimeField('Published date')
def __str__(self):
return self.ques_text
class Choice(models.Model):
# question=models.ForeignKey(Question)
choice_text=models.CharField(max_length=300)
votes=models.IntegerField(default=0)
def __str__(self):
return self.choice_text
```
Then I run the commands:
```
python manage.py makemigrations App12
python manage.py migrate
```
and then enter 2 records in the Question model as:
```
Question.objects.create(ques_text="How are you?",pub_date='timezone.now()')
# and (ques_text="What are you doing?",pub_date='timezone.now()')
```
Then I realise that the Question and Choice models should be in a foreign key relation, and uncomment the commented statement in the models code above.
When I run `python manage.py makemigrations App12`, it runs fine, but after that I am getting the
```
"TypeError: int() argument must be a string or a number, not 'datetime.datetime'"
```
error when I am running the "python manage.py migrate" command.
Can anybody help me? How can I add a foreign key relation between the Choice model and the Question model now? | From your migration file it's normal that you get this error: you are trying to store a datetime on a ForeignKey, which needs to be an int.
This happened when the migration asked you which value to set for old Choice rows, because the new ForeignKey is required.
To resolve it, you can edit the migration file and change the datetime.date... default to a valid id from the Question table, as in the code below. Or delete the migration file and re-run ./manage.py makemigrations; when you are asked about the default value, enter a valid Question id, not a datetime.
```
from __future__ import unicode_literals
from django.db import models, migrations
import datetime


class Migration(migrations.Migration):

    dependencies = [
        ('App11', '0003_remove_choice_question'),
    ]

    operations = [
        migrations.AddField(
            model_name='choice',
            name='question',
            field=models.ForeignKey(default=1, to='App11.Question'),
            preserve_default=False,
        ),
    ]
``` |
'UCS-2' codec can't encode characters in position 1050-1050 | 32,442,608 | 3 | 2015-09-07T16:10:03Z | 32,442,684 | 10 | 2015-09-07T16:15:42Z | [
"python",
"unicode"
] | When I run my Python code, I get the following errors:
```
File "E:\python343\crawler.py", line 31, in <module>
print (x1)
File "E:\python343\lib\idlelib\PyShell.py", line 1347, in write
return self.shell.write(s, self.tags)
UnicodeEncodeError: 'UCS-2' codec can't encode characters in position 1050-1050: Non-BMP character not supported in Tk
```
Here is my code:
```
x = g.request('search', {'q' : 'TaylorSwift', 'type' : 'page', 'limit' : 100})['data'][0]['id']
# GET ALL STATUS POST ON PARTICULAR PAGE(X=PAGE ID)
for x1 in g.get_connections(x, 'feed')['data']:
print (x1)
for x2 in x1:
print (x2)
if(x2[1]=='status'):
x2['message']
```
How can I fix this? | Your data contains characters outside of the [*Basic Multilingual Plane*](https://en.wikipedia.org/wiki/Plane_(Unicode)#Basic_Multilingual_Plane). Emoji's for example, are outside the BMP, and the window system used by IDLE, Tk, cannot handle such characters.
You could use a [translation table](https://docs.python.org/3/library/stdtypes.html#str.translate) to map everything outside of the BMP to the [replacement character](https://en.wikipedia.org/wiki/Specials_%28Unicode_block%29#Replacement_character):
```
import sys
non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)
print(x.translate(non_bmp_map))
```
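A quick runnable check of this mapping — BMP text passes through unchanged, while an astral-plane emoji is replaced by U+FFFD:

```python
import sys

non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)
assert 'caf\u00e9'.translate(non_bmp_map) == 'caf\u00e9'    # BMP: untouched
assert 'A\U0001F600'.translate(non_bmp_map) == 'A\ufffd'    # astral: replaced
```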
The `non_bmp_map` maps all codepoints outside the BMP (any codepoint higher than 0xFFFF, all the way up to the [highest Unicode codepoint your Python version can handle](https://docs.python.org/3/library/sys.html#sys.maxunicode)) to [U+FFFD REPLACEMENT CHARACTER](https://codepoints.net/U+FFFD):
```
>>> print('This works! \U0001F44D')
This works! 👍
>>> print('This works! \U0001F44D'.translate(non_bmp_map))
This works! �
``` |
Python strange multiprocessing with variable name | 32,443,135 | 7 | 2015-09-07T16:50:58Z | 32,443,422 | 9 | 2015-09-07T17:14:00Z | [
"python",
"multiprocessing"
] | A simple example:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import multiprocessing
class Klass(object):
def __init__(self):
print "Constructor ... %s" % multiprocessing.current_process().name
def __del__(self):
print "... Destructor %s" % multiprocessing.current_process().name
if __name__ == '__main__':
kls = Klass()
```
It fails with an error when calling `current_process` in `__del__`:
```
Constructor ... MainProcess
Exception AttributeError: "'NoneType' object has no attribute 'current_process'" in <bound method Klass.__del__ of <__main__.Klass object at 0x7f5c34e52090>> ignored
```
If I change the variable name:
```
als = Klass()
```
it get the right result:
```
Constructor ... MainProcess
... Destructor MainProcess
```
I tried many variable names; some are OK, some raise the error.
Why does a different instance name cause the multiprocessing module to be None in `__del__`? | The code raises
```
AttributeError: "'NoneType' object has no attribute 'current_process'"
```
if the global variable `multiprocessing` is deleted before `kls` gets deleted.
In general, the order in which objects are deleted is not predictable. However, [per the docs](https://docs.python.org/3/reference/datamodel.html#object.__del__):
> Starting with version 1.5, Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the `__del__()` method is called.
Therefore, if you name the instance `_kls` (with an underscore), then you can be assured that its `__del__` will be called before `multiprocessing` is deleted:
```
import multiprocessing
class Klass(object):
def __init__(self):
print "Constructor ... %s" % multiprocessing.current_process().name
def __del__(self):
print "... Destructor %s" % multiprocessing.current_process().name
if __name__ == '__main__':
_kls = Klass()
```
yields
```
Constructor ... MainProcess
... Destructor MainProcess
```
---
[Other methods](http://stackoverflow.com/q/14986568/190597) of ensuring a `del` method is called before the module is deleted include
* using `atexit`
* using a context manager
* saving a reference to the module as an attribute of `Klass`. |
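For completeness, a sketch of the first alternative — registering the cleanup with `atexit` instead of relying on `__del__` (this assumes the cleanup only needs to run once, before interpreter teardown):

```python
import atexit
import multiprocessing

class Klass(object):
    def __init__(self):
        print("Constructor ... %s" % multiprocessing.current_process().name)
        # atexit handlers run before module globals are torn down,
        # so `multiprocessing` is still usable inside cleanup().
        atexit.register(self.cleanup)

    def cleanup(self):
        print("... Destructor %s" % multiprocessing.current_process().name)

kls = Klass()
```

Note that registering the bound method keeps the instance alive until exit, so `__del__` itself never fires — acceptable here, since the goal is simply to run the cleanup reliably.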