title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
What is python Sting? | 30,583,416 | 2 | 2015-06-01T21:15:26Z | 30,583,523 | 7 | 2015-06-01T21:22:06Z | [
"python",
"netflix"
] | Sting is referenced in [this blog post by netflix](http://techblog.netflix.com/2013/03/python-at-netflix.html) as a RESTful web service tool for reporting on data from hadoop / hive.
> One such application is Sting, a lightweight RESTful web service that slices, dices, and produces visualizations of large in-memory datasets.
I've spent the last 45 minutes trying to find the project online with no luck. There was also a mention of Sting in a [slideshare presentation](http://www.slideshare.net/BlakeIrvine/netflix-enabling-a-culture-of-analytics). It is noted on slides 12 and 13 as a reporting tool in the same vertical as Tableau.
Does anyone know what this project is, where it can be found, or if this is a Netflix closed source / proprietary tool? | You can find the answer in the comments. There, someone asks:
> lovely wrapup, is Sting a proprietary tool? found no trace in the interwebs whatsoever.
And one of the authors replies:
> It's currently internal-only. I don't know what plans there are to OSS it, but there's been significant interest expressed in Sting coming out of this blog post, and the authors of it are aware of that interest :) |
python strptime format with optional bits | 30,584,364 | 7 | 2015-06-01T22:28:32Z | 30,584,445 | 8 | 2015-06-01T22:35:56Z | [
"python",
"datetime-format"
] | Right now I have:
```
timestamp = datetime.strptime(date_string, '%Y-%m-%d %H:%M:%S.%f')
```
This works great unless I'm converting a string that doesn't have the microseconds. How can I specify that the microseconds are optional (and should be considered 0 if they aren't in the string)? | You could use a `try/except` block:
```
try:
timestamp = datetime.strptime(date_string, '%Y-%m-%d %H:%M:%S.%f')
except ValueError:
timestamp = datetime.strptime(date_string, '%Y-%m-%d %H:%M:%S')
``` |
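If several timestamp variants can show up, the same try/except idiom generalizes to a small helper — a sketch (the function name is mine, not from the standard library):

```
from datetime import datetime

def parse_timestamp(date_string):
    # Try the fractional-seconds format first, then fall back to whole
    # seconds; strptime leaves microsecond at 0 when %f is absent.
    for fmt in ('%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M:%S'):
        try:
            return datetime.strptime(date_string, fmt)
        except ValueError:
            continue
    raise ValueError('unrecognized timestamp: %r' % date_string)

print(parse_timestamp('2015-06-01 22:28:32').microsecond)         # 0
print(parse_timestamp('2015-06-01 22:28:32.250000').microsecond)  # 250000
```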
Disable hash randomization from within python program | 30,585,108 | 11 | 2015-06-01T23:42:03Z | 30,586,046 | 8 | 2015-06-02T01:48:15Z | [
"python",
"python-3.x",
"hash"
] | Starting with Python 3.3, the hashing algorithm is non-deterministically [salted](https://docs.python.org/3.4/reference/datamodel.html#object.__hash__) to avoid a certain kind of attack. This is nice for webservers but it's a pain when trying to debug a program: Every time I run my script, dict contents are iterated in a different order.
Some earlier versions of python had a `-R` flag for **enabling** hash randomization, but now that it's the default behavior, there is no opposite flag to disable it.
Randomization can be disabled by setting the environment variable [`PYTHONHASHSEED`](https://docs.python.org/3.4/using/cmdline.html#envvar-PYTHONHASHSEED):
> **PYTHONHASHSEED**
>
> If this variable is not set or set to random, a random value is used to seed the hashes of str, bytes and datetime objects.
> If PYTHONHASHSEED is set to an integer value, it is used as a fixed seed for generating the hash() of the types covered by the hash randomization.
The catch is that this variable must be set before launching the python process. I've tried to set it with `os.putenv()`, or in `os.environ`, but these seem to have no effect on the hashing method. This is not too surprising: I wouldn't expect python to check the environment before every single set or dictionary lookup! So, the question remains:
Is there a way for a python program to disable its own hash randomization? | I suspect this isn't possible, unfortunately. Looking at [`test_hash.py`](https://hg.python.org/cpython/file/5e8fa1b13516/Lib/test/test_hash.py#l145) the `HashRandomizationTests` class and its descendants were added in the [commit that introduced this behavior](https://hg.python.org/cpython/rev/6b7704fe1be1#l9.26). They test the hashing behavior by modifying the environment and starting a new process with `PYTHONHASHSEED` explicitly set. You could try to copy that pattern, perhaps.
I also just noticed you said "*Every time I run my script, dict contents are iterated in a different order.*" - I assume you're aware of [`collections.OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict), right? That's the normal way to get reliable hash iteration.
---
If you're willing to set the value in your shell environment, you could also just wrap your python call in a bash script, e.g.
```
#! /bin/bash
export PYTHONHASHSEED=0
# call your python program here
```
That avoids needing to manipulate your whole environment, as long as you're ok with a wrapper script.
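The same idea can be exercised from inside Python by spawning a child interpreter with the variable pinned — a rough sketch (the helper name is mine):

```
import os, subprocess, sys

def hash_in_child(value, seed):
    # Launch a fresh interpreter whose PYTHONHASHSEED is fixed, and
    # report hash(value) as computed by that process.
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.check_output(
        [sys.executable, '-c', 'import sys; print(hash(sys.argv[1]))', value],
        env=env)
    return int(out)

# With the seed pinned, two separate processes agree:
print(hash_in_child('spam', 0) == hash_in_child('spam', 0))  # True
```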
Or even just pass the value on the command line:
```
$ PYTHONHASHSEED=0 python YOURSCRIPT.py
``` |
When reading huge HDF5 file with "pandas.read_hdf() ", why do I still get MemoryError even though I read in chunks by specifying chunksize? | 30,587,026 | 7 | 2015-06-02T03:54:52Z | 30,708,056 | 7 | 2015-06-08T11:32:04Z | [
"python",
"pandas",
"hdf5"
] | ## Problem description:
I use python pandas to read a few large CSV files and store them in an HDF5 file; the resulting HDF5 file is about 10 GB.
**The problem happens when reading it back. Even though I tried to read it back in chunks, I still get MemoryError.**
## Here is How I create the HDF5 file:
```
import glob, os
import pandas as pd
hdf = pd.HDFStore('raw_sample_storage2.h5')
os.chdir("C:/RawDataCollection/raw_samples/PLB_Gate")
for filename in glob.glob("RD_*.txt"):
raw_df = pd.read_csv(filename,
sep=' ',
header=None,
names=['time', 'GW_time', 'node_id', 'X', 'Y', 'Z', 'status', 'seq', 'rssi', 'lqi'],
dtype={'GW_time': uint32, 'node_id': uint8, 'X': uint16, 'Y': uint16, 'Z':uint16, 'status': uint8, 'seq': uint8, 'rssi': int8, 'lqi': uint8},
parse_dates=['time'],
date_parser=dateparse,
chunksize=50000,
skip_blank_lines=True)
for chunk in raw_df:
hdf.append('raw_sample_all', chunk, format='table', data_columns = True, index = True, compression='blosc', complevel=9)
```
## Here is How I try to read it back in chunks:
```
for df in pd.read_hdf('raw_sample_storage2.h5','raw_sample_all', chunksize=300000):
print(df.head(1))
```
## Here is the error message I got:
```
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-7-ef278566a16b> in <module>()
----> 1 for df in pd.read_hdf('raw_sample_storage2.h5','raw_sample_all', chunksize=300000):
2 print(df.head(1))
C:\Anaconda\lib\site-packages\pandas\io\pytables.pyc in read_hdf(path_or_buf, key, **kwargs)
321 store = HDFStore(path_or_buf, **kwargs)
322 try:
--> 323 return f(store, True)
324 except:
325
C:\Anaconda\lib\site-packages\pandas\io\pytables.pyc in <lambda>(store, auto_close)
303
304 f = lambda store, auto_close: store.select(
--> 305 key, auto_close=auto_close, **kwargs)
306
307 if isinstance(path_or_buf, string_types):
C:\Anaconda\lib\site-packages\pandas\io\pytables.pyc in select(self, key, where, start, stop, columns, iterator, chunksize, auto_close, **kwargs)
663 auto_close=auto_close)
664
--> 665 return it.get_result()
666
667 def select_as_coordinates(
C:\Anaconda\lib\site-packages\pandas\io\pytables.pyc in get_result(self, coordinates)
1346 "can only use an iterator or chunksize on a table")
1347
-> 1348 self.coordinates = self.s.read_coordinates(where=self.where)
1349
1350 return self
C:\Anaconda\lib\site-packages\pandas\io\pytables.pyc in read_coordinates(self, where, start, stop, **kwargs)
3545 self.selection = Selection(
3546 self, where=where, start=start, stop=stop, **kwargs)
-> 3547 coords = self.selection.select_coords()
3548 if self.selection.filter is not None:
3549 for field, op, filt in self.selection.filter.format():
C:\Anaconda\lib\site-packages\pandas\io\pytables.pyc in select_coords(self)
4507 return self.coordinates
4508
-> 4509 return np.arange(start, stop)
4510
4511 # utilities ###
MemoryError:
```
## My python environment:
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.3.final.0
python-bits: 32
OS: Windows
OS-release: 7
machine: x86
processor: x86 Family 6 Model 42 Stepping 7, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.15.2
nose: 1.3.4
Cython: 0.22
numpy: 1.9.2
scipy: 0.15.1
statsmodels: 0.6.1
IPython: 3.0.0
sphinx: 1.2.3
patsy: 0.3.0
dateutil: 2.4.1
pytz: 2015.2
bottleneck: None
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.4.3
openpyxl: 1.8.5
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.6.7
lxml: 3.4.2
bs4: 4.3.2
html5lib: None
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.9
pymysql: None
psycopg2: None
```
## Edit 1:
**It took about half an hour for the MemoryError to happen after executing read\_hdf(), and in the meanwhile I checked taskmgr, and there's little CPU activity and total memory used never exceeded 2.2G.** It was about 2.1 GB before I execute the code. So whatever pandas read\_hdf() loaded into the RAM is less than 100 MB *(I have 4G RAM, and my 32-bit-Windows system can only use 2.7G, and I used the rest for RAM disk)*
**Here's the hdf file info:**
```
In [2]:
hdf = pd.HDFStore('raw_sample_storage2.h5')
hdf
Out[2]:
<class 'pandas.io.pytables.HDFStore'>
File path: C:/RawDataCollection/raw_samples/PLB_Gate/raw_sample_storage2.h5
/raw_sample_all frame_table (typ->appendable,nrows->308581091,ncols->10,indexers->[index],dc->[time,GW_time,node_id,X,Y,Z,status,seq,rssi,lqi])
```
**Moreover, I can read a portion of the hdf file by indicating 'start' and 'stop' instead of 'chunksize':**
```
%%time
df = pd.read_hdf('raw_sample_storage2.h5','raw_sample_all', start=0,stop=300000)
print df.info()
print(df.head(5))
```
The execution only took 4 seconds, and the output is:
```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 300000 entries, 0 to 49999
Data columns (total 10 columns):
time 300000 non-null datetime64[ns]
GW_time 300000 non-null uint32
node_id 300000 non-null uint8
X 300000 non-null uint16
Y 300000 non-null uint16
Z 300000 non-null uint16
status 300000 non-null uint8
seq 300000 non-null uint8
rssi 300000 non-null int8
lqi 300000 non-null uint8
dtypes: datetime64[ns](1), int8(1), uint16(3), uint32(1), uint8(4)
memory usage: 8.9 MB
None
time GW_time node_id X Y Z status seq \
0 2013-10-22 17:20:58 39821761 3 20010 21716 22668 0 33
1 2013-10-22 17:20:58 39821824 4 19654 19647 19241 0 33
2 2013-10-22 17:20:58 39821888 1 16927 21438 22722 0 34
3 2013-10-22 17:20:58 39821952 2 17420 22882 20440 0 34
4 2013-10-22 17:20:58 39822017 3 20010 21716 22668 0 34
rssi lqi
0 -43 49
1 -72 47
2 -46 48
3 -57 46
4 -42 50
Wall time: 4.26 s
```
Noticing that 300,000 rows took only 8.9 MB of RAM, I tried to use chunksize together with start and stop:
```
for df in pd.read_hdf('raw_sample_storage2.h5','raw_sample_all', start=0,stop=300000,chunksize = 3000):
print df.info()
print(df.head(5))
```
Same MemoryError happens.
I don't understand what's happening here. If the internal mechanism somehow ignores chunksize/start/stop and tries to load the whole thing into RAM, how come there's almost no increase in RAM usage (only 100 MB) when the MemoryError happens? And why does the execution take half an hour just to reach the error at the very beginning of the process, with no noticeable CPU usage? | So the iterator is built mainly to deal with a `where` clause. `PyTables` returns a list of the indices where the clause is True. These are row numbers. In this case, there is no where clause, but we still use the indexer, which in this case is simply `np.arange` on the list of rows.
300 million rows takes 2.2 GB, which is too much for 32-bit Windows (processes generally max out around 1 GB). On 64-bit this would be no problem.
```
In [1]: np.arange(0,300000000).nbytes/(1024*1024*1024.0)
Out[1]: 2.2351741790771484
```
So this should be handled by slicing semantics, which would make this take only a trivial amount of memory. Issue opened [here](https://github.com/pydata/pandas/issues/10310).
So I would suggest this. Here the indexer is computed directly and this provides iterator semantics.
```
In [1]: df = DataFrame(np.random.randn(1000,2),columns=list('AB'))
In [2]: df.to_hdf('test.h5','df',mode='w',format='table',data_columns=True)
In [3]: store = pd.HDFStore('test.h5')
In [4]: nrows = store.get_storer('df').nrows
In [6]: chunksize = 100
In [7]: for i in xrange(nrows//chunksize + 1):
chunk = store.select('df',
start=i*chunksize,
stop=(i+1)*chunksize)
# work on the chunk
In [8]: store.close()
``` |
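For what it's worth, the same chunk-boundary arithmetic can be checked on an in-memory frame (Python 3 spelling, `range` instead of `xrange`); the data here is made up:

```
import pandas as pd

df = pd.DataFrame({'x': range(1000)})
nrows, chunksize = len(df), 300
seen = 0
for i in range(nrows // chunksize + 1):
    # iloc slicing clips at the end, so the final partial chunk is safe
    chunk = df.iloc[i * chunksize:(i + 1) * chunksize]
    seen += len(chunk)
print(seen)  # 1000 -- every row visited exactly once
```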
How to update a document using elasticsearch-py? | 30,598,152 | 9 | 2015-06-02T13:56:00Z | 30,598,673 | 15 | 2015-06-02T14:17:43Z | [
"python",
"elasticsearch",
"elasticsearch-py"
] | Does anyone have an example of how to use update? It's documented [here](http://elasticsearch-py.readthedocs.org/en/master/api.html#elasticsearch.Elasticsearch.update), but the documentation is unclear and doesn't include a working example. I've tried the following:
```
coll = Elasticsearch()
coll.update(index='stories-test',doc_type='news',id=hit.meta.id,
body={"stanford": 1, "parsed_sents": parsed })
```
and I get
```
elasticsearch.exceptions.RequestError:
TransportError(400, u'ActionRequestValidationException[Validation Failed: 1: script or doc is missing;]')
```
I would like to update using a partial doc, but the update method doesn't take any argument named 'doc' or 'document'. | You're almost there, you just need to enclose your body inside a "doc" field. The correct way of doing a partial update with elasticsearch-py goes like this:
```
coll = Elasticsearch()
coll.update(index='stories-test',doc_type='news',id=hit.meta.id,
body={"doc": {"stanford": 1, "parsed_sents": parsed }})
``` |
Move models between Django (1.8) apps with required ForeignKey references | 30,601,107 | 14 | 2015-06-02T16:05:58Z | 30,613,732 | 32 | 2015-06-03T07:38:44Z | [
"python",
"database",
"django-models",
"schema-migration"
] | This is an extension to this question: [How to move a model between two Django apps (Django 1.7)](http://stackoverflow.com/questions/25648393/how-to-move-a-model-between-two-django-apps-django-1-7/26472482#26472482)
I need to move a bunch of models from `old_app` to `new_app`. The best answer seems to be [Ozan's](http://stackoverflow.com/questions/25648393/how-to-move-a-model-between-two-django-apps-django-1-7/26472482#26472482), but with required foreign key references, things are bit trickier. @halfnibble presents a solution in the comments to Ozan's answer, but I'm still having trouble with the precise order of steps (e.g. when do I copy the models over to `new_app`, when do I delete the models from `old_app`, which migrations will sit in `old_app.migrations` vs. `new_app.migrations`, etc.)
Any help is much appreciated! | **Migrating a model between apps.**
The short answer is, *don't do it!!*
But that answer rarely works in the real world of living projects and production databases. Therefore, I have created a [sample GitHub repo](https://github.com/halfnibble/factory) to demonstrate this rather complicated process.
I am using MySQL. *(No, those aren't my real credentials).*
**The Problem**
The example I'm using is a factory project with a **cars** app that initially has a `Car` model and a `Tires` model.
```
factory
|_ cars
|_ Car
|_ Tires
```
The `Car` model has a ForeignKey relationship with `Tires`. (As in, you specify the tires via the car model).
However, we soon realize that `Tires` is going to be a large model with its own views, etc., and therefore we want it in its own app. The desired structure is therefore:
```
factory
|_ cars
|_ Car
|_ tires
|_ Tires
```
And we need to keep the ForeignKey relationship between `Car` and `Tires` because too much depends on preserving the data.
**The Solution**
**Step 1.** Setup initial app with bad design.
Browse through the code of [step 1.](https://github.com/halfnibble/factory/tree/step1)
**Step 2.** Create an admin interface and add a bunch of data containing ForeignKey relationships.
View [step 2.](https://github.com/halfnibble/factory/tree/step2)
**Step 3.** Decide to move the `Tires` model to its own app. Meticulously cut and paste code into the new tires app. Make sure you update the `Car` model to point to the new `tires.Tires` model.
Then run `./manage.py makemigrations` and backup the database somewhere (just in case this fails horribly).
Finally, run `./manage.py migrate` and see the error message of doom,
**django.db.utils.IntegrityError: (1217, 'Cannot delete or update a parent row: a foreign key constraint fails')**
View code and migrations so far in [step 3.](https://github.com/halfnibble/factory/tree/step3)
**Step 4.** The tricky part. The auto-generated migration fails to see that you've merely copied a model to a different app. So, we have to do some things to remedy this.
You can follow along and view the final migrations with comments in [step 4.](https://github.com/halfnibble/factory/tree/step4) I did test this to verify it works.
First, we are going to work on `cars`. You have to make a new, empty migration. This migration actually needs to run before the most recently created migration (the one that failed to execute). Therefore, I renumbered the migration I created and changed the dependencies to run my custom migration first and then the last auto-generated migration for the `cars` app.
You can create an empty migration with:
```
./manage.py makemigrations --empty cars
```
**Step 4.a.** Make custom *old\_app* migration.
In this first custom migration, I'm only going to perform a "database\_operations" migration. Django gives you the option to split "state" and "database" operations. You can see how this is done by viewing the [code here](https://github.com/halfnibble/factory/blob/step4/cars/migrations/0002_auto_20150603_0642.py).
My goal in this first step is to rename the database tables from `oldapp_model` to `newapp_model` without messing with Django's state. You have to figure out what Django would have named your database table based on the app name and model name.
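As a hedged sketch of what such a database-only operation can look like (app, model, and table names below are illustrative, not taken from the repo):

```
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('cars', '0001_initial'),
    ]

    operations = [
        migrations.SeparateDatabaseAndState(
            # Rename cars_tires -> tires_tires in the database only...
            database_operations=[
                migrations.AlterModelTable('tires', 'tires_tires'),
            ],
            # ...while leaving Django's in-memory migration state untouched.
            state_operations=[],
        ),
    ]
```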
Now you are ready to modify the initial `tires` migration.
**Step 4.b.** Modify *new\_app* initial migration
The operations are fine, but we only want to modify the "state" and not the database. Why? Because we are keeping the database tables from the `cars` app. Also, you need to make sure that the previously made custom migration is a dependency of this migration. See the tires [migration file](https://github.com/halfnibble/factory/blob/step4/tires/migrations/0001_initial.py).
So, now we have renamed `cars.Tires` to `tires.Tires` in the database, and changed the Django state to recognize the `tires.Tires` table.
**Step 4.c.** Modify *old\_app* last auto-generated migration.
Going *back* to cars, we need to modify that last auto-generated migration. It should require our first custom cars migration, and the initial tires migration (that we just modified).
Here we should leave the `AlterField` operations because the `Car` model *is pointing* to a different model (even though it has the same data). However, we need to remove the lines of migration concerning `DeleteModel` because the `cars.Tires` model no longer exists. It has fully converted into `tires.Tires`. View [this migration](https://github.com/halfnibble/factory/blob/step4/cars/migrations/0003_auto_20150603_0630.py).
**Step 4.d.** Clean up stale model in *old\_app*.
Last but not least, you need to make a final custom migration in the cars app. Here, we will do a "state" operation only to delete the `cars.Tires` model. It is state-only because the database table for `cars.Tires` has already been renamed. This [last migration](https://github.com/halfnibble/factory/blob/step4/cars/migrations/0004_auto_20150603_0701.py) cleans up the remaining Django state. |
Why is S == S[::-1] faster than looping? | 30,601,354 | 2 | 2015-06-02T16:18:08Z | 30,601,525 | 7 | 2015-06-02T16:26:31Z | [
"python",
"string",
"performance",
"palindrome"
] | Why is the pythonic way of checking if a string, `S`, is a palindrome -- `S == S[::-1]` -- faster than the following implementation?
```
i = 0
j = len(S) - 1
while i < j:
if S[i] != S[j]:
return False
i += 1
j -= 1
return True
``` | Because Python code is only compiled to bytecode, which is then interpreted. That's going to be a lot slower than creating a new reversed string in C code, then comparing the string to another string with more C code.
Note that both algorithms are essentially O(N) complexity; the Python code executes at most 1/2 N iterations, and the string reversal version makes at most 2 N iterations, but asymptotically speaking that doesn't make a difference.
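A rough timing sketch (string length and repeat count are arbitrary) makes the gap visible:

```
import timeit

def loop_palindrome(s):
    # The explicit two-pointer loop from the question.
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True

s = 'ab' * 5000 + 'ba' * 5000   # a 20,000-character palindrome
slice_t = timeit.timeit(lambda: s == s[::-1], number=100)
loop_t = timeit.timeit(lambda: loop_palindrome(s), number=100)
print(slice_t < loop_t)  # True -- the slice comparison wins by a wide margin
```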
Since both algorithms are O(N) linear approaches, what matters then is their constant cost, how much time each iteration takes. That fixed cost is *vastly* lower for `s == s[::-1]`. |
What does ,= mean in python? | 30,601,479 | 35 | 2015-06-02T16:24:26Z | 30,601,537 | 43 | 2015-06-02T16:27:00Z | [
"python",
"operators"
] | I wonder what `,=` or `, =` means in python?
Example from matplotlib:
```
plot1, = ax01.plot(t,yp1,'b-')
``` | It's a form of tuple unpacking. With parentheses:
```
(plot1,) = ax01.plot(t,yp1,'b-')
```
`ax01.plot()` returns a tuple containing one element, and this element is assigned to `plot1`. Without that comma (and possibly the parentheses), `plot1` would have been assigned the whole tuple. Observe the difference between `a` and `b` in the following example:
```
>>> def foo():
... return (1,)
...
>>> (a,) = foo()
>>> b = foo()
>>> a
1
>>> b
(1,)
```
You can omit the parentheses both in `(a,)` and `(1,)`, I left them for the sake of clarity. |
What does ,= mean in python? | 30,601,479 | 35 | 2015-06-02T16:24:26Z | 30,601,610 | 16 | 2015-06-02T16:30:27Z | [
"python",
"operators"
] | I wonder what `,=` or `, =` means in python?
Example from matplotlib:
```
plot1, = ax01.plot(t,yp1,'b-')
``` | Python allows you to put tuples on the left hand side of the assignment.
The code in the question is an example of this. It might look like a special case of an operator, but it's really just tuple assignment going on here. Some examples might help:
```
a, b = (1, 2)
```
which gives you `a = 1` and `b = 2`.
Now there's the concept of the one element tuple as well.
```
x = (3,)
```
gives you `x = (3,)` which is a tuple with one element, the syntax looks a bit strange but Python needs to differentiate from plain parenthesis so it has the trailing comma for this (For example `z=(4)` makes z be the integer value 4, not a tuple). If you wanted to now extract that element then you would want to use something like you have in the question:
```
y, = x
```
now `y` is 3. Note that this is just tuple assignment here; the syntax only appears a bit strange because it is a tuple of length one.
See this script for an example: <http://ideone.com/qroNcx> |
How to import OpenSSL in python | 30,602,843 | 5 | 2015-06-02T17:33:17Z | 30,602,906 | 8 | 2015-06-02T17:37:04Z | [
"python",
"python-2.7",
"openssl"
] | I am trying to run this simple code to retrieve SSL certificate:
```
import ssl, socket
#print ssl.get_server_certificate(('www.google.com', 443))
cert=ssl.get_server_certificate(('www.google.com', 443))
# OpenSSL
x509 = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, cert)
x509.get_subject().get_components()
```
But I get error saying:
```
Traceback (most recent call last):
File "C:\Users\e\Desktop\Python\ssl\test.py", line 6, in <module>
x509 = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, cert)
NameError: name 'OpenSSL' is not defined
```
I am aware that I have to import OpenSSL. But I do not know how? and where to get the OpenSSL from?
I downloaded a module called pyOpenSSL from <https://pypi.python.org/pypi/pyOpenSSL>
Which contains two folders: pyOpenSSL-0.15.1.dist-info and OpenSSL.
When I tried to add import OpenSSL or import pyOpenSSL I get errors.
Can you explain clearly please, how to import these libraries or modules? where they should be placed? if not in the same directory of my code file? how to write the path in the import syntax??
Please, help.
**EDIT:**
when tried to add `from OpenSSL import SSL` in the code, I got:
```
C:\Users\e\Desktop\Python\ssl>test.py
Traceback (most recent call last):
File "C:\Users\e\Desktop\Python\ssl\test.py", line 2, in <module>
from OpenSSL import SSL
File "C:\Users\e\Desktop\Python\ssl\OpenSSL\__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "C:\Users\e\Desktop\Python\ssl\OpenSSL\rand.py", line 9, in <module>
from six import integer_types as _integer_types
ImportError: No module named six
``` | From the [tests](https://github.com/pyca/pyopenssl/blob/master/examples/SecureXMLRPCServer.py):
```
from OpenSSL import SSL
```
Response to the edit: `pip install pyopenssl` should have installed six. If you're trying to install it yourself, I'd not do this, but you can install the dependencies manually using `pip install six cryptography`, and then your import should work fine. If not, leave a comment and I'll do some further investigation.
Response to comment: There are instructions on [installing pip on windows](http://stackoverflow.com/questions/4750806/how-to-install-pip-on-windows). |
Why does `mylist[:] = reversed(mylist)` work? | 30,604,127 | 11 | 2015-06-02T18:45:32Z | 30,604,265 | 12 | 2015-06-02T18:52:21Z | [
"python",
"list",
"reverse",
"assign",
"python-internals"
] | The following reverses a list "in-place" and works in Python 2 and 3:
```
>>> mylist = [1, 2, 3, 4, 5]
>>> mylist[:] = reversed(mylist)
>>> mylist
[5, 4, 3, 2, 1]
```
Why/how? Since `reversed` gives me an iterator and doesn't copy the list beforehand, and since `[:]=` replaces "in-place", I am surprised. And the following, also using `reversed`, breaks as expected:
```
>>> mylist = [1, 2, 3, 4, 5]
>>> for i, item in enumerate(reversed(mylist)):
mylist[i] = item
>>> mylist
[5, 4, 3, 4, 5]
```
Why doesn't the `[:] =` fail like that?
And yes, I do know `mylist.reverse()`. | CPython list slice assigment will convert the iterable to a list first by calling [`PySequence_Fast`](https://docs.python.org/2/c-api/sequence.html#c.PySequence_Fast). Source: <https://hg.python.org/cpython/file/7556df35b913/Objects/listobject.c#l611>
```
v_as_SF = PySequence_Fast(v, "can only assign an iterable");
```
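That conversion is observable from Python — a small sketch that wraps the iterator and logs each item as it is pulled:

```
class LoggingIter:
    """Wrap an iterator and record every item that gets pulled from it."""
    def __init__(self, it):
        self.it = iter(it)
        self.pulled = []
    def __iter__(self):
        return self
    def __next__(self):
        value = next(self.it)
        self.pulled.append(value)
        return value

mylist = [1, 2, 3, 4, 5]
src = LoggingIter(reversed(mylist))
mylist[:] = src
print(mylist)      # [5, 4, 3, 2, 1]
print(src.pulled)  # [5, 4, 3, 2, 1] -- drained into a temporary before any write
```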
---
Even PyPy does something [similar](https://bitbucket.org/pypy/pypy/src/5e2a8b1e5782/pypy/objspace/std/listobject.py#cl-1148):
```
def setslice__List_ANY_ANY_ANY(space, w_list, w_start, w_stop, w_iterable):
length = w_list.length()
start, stop = normalize_simple_slice(space, length, w_start, w_stop)
sequence_w = space.listview(w_iterable)
w_other = W_ListObject(space, sequence_w)
w_list.setslice(start, 1, stop-start, w_other)
```
Here [`space.listview`](https://bitbucket.org/pypy/pypy/src/0c9994fe9a365bb42daf8179a485799ecbb1b19b/pypy/objspace/std/objspace.py?at=default#cl-429) will call [`ObjSpace.unpackiterable`](https://bitbucket.org/pypy/pypy/src/0c9994fe9a365bb42daf8179a485799ecbb1b19b/pypy/interpreter/baseobjspace.py?at=default#cl-838) to unpack the iterable which in turn returns a list. |
Why does upsert a record using update_one raise ValueError? | 30,605,638 | 9 | 2015-06-02T20:08:52Z | 30,605,914 | 12 | 2015-06-02T20:25:37Z | [
"python",
"mongodb",
"mongodb-query",
"pymongo"
] | I want to add a record to the collection if the key doesn't already exist. I understand MongoDB offers `upsert` for this, so I did:
```
db.collection.update({"_id":"key1"},{"_id":"key1"},True)
```
This seems to work.
However in the [Pymongo documentation](http://api.mongodb.org/python/current/api/pymongo/collection.html?_ga=1.96647199.1473856847.1458939091#pymongo.collection.Collection.update_one) it says that update is deprecated and use to `update_one()`.
But:
```
db.collection.update_one({"_id":"key1"},{"_id":"key1"},True)
```
Gives:
```
raise ValueError('update only works with $ operators')
ValueError: update only works with $ operators
```
I don't really understand why `update_one` is different and why I need to use a `$` operator. Can anyone help? | This is because you didn't specify any [update operator](http://docs.mongodb.org/manual/reference/operator/update/).
For example to [`$set`](http://docs.mongodb.org/manual/reference/operator/update/set/#up._S_set) the `id` value use:
```
db.collection.update_one({"_id":"key1"}, {"$set": {"id":"key1"}}, upsert=True)
``` |
Pandas to_csv call is prepending a comma | 30,605,909 | 4 | 2015-06-02T20:25:31Z | 30,605,928 | 7 | 2015-06-02T20:26:10Z | [
"python",
"csv",
"pandas"
] | I have a data file, apples.csv, that has headers like:
```
"id","str1","str2","str3","num1","num2"
```
I read it into a dataframe with pandas:
```
apples = pd.read_csv('apples.csv',delimiter=",",sep=r"\s+")
```
Then I do some stuff to it, but ignore that (I have it all commented out, and my overall issue still occurs, so said stuff is irrelevant here).
I then save it out:
```
apples.to_csv('bananas.csv',columns=["id","str1","str2","str3","num1","num2"])
```
Now, looking at bananas.csv, its headers are:
```
,id,str1,str2,str3,num1,num2
```
No more quotes (which I don't really care about, as it doesn't impact anything in the file), and then that leading comma.
The ensuing rows are now with an additional column in there, so it saves out 7 columns. But if I do:
```
print(len(apples.columns))
```
Immediately prior to saving, it shows 6 columns...
I am normally in Java/Perl/R, and less experienced with Python and particularly Pandas, so I am not sure if this is "yeah, it just does that" or what the issue is - but I have spent amusingly long trying to figure this out and cannot find it via searching.
How can I get it to not do that prepending of a comma, and maybe as important - why is it doing it? | Set `index=False` (the default is `True` hence why you see this output) so that it doesn't save the index values to your csv, see the [docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html#pandas.DataFrame.to_csv)
So this:
```
df = pd.DataFrame({'a':np.arange(5), 'b':np.arange(5)})
df.to_csv(r'c:\data\t.csv')
```
results in
```
,a,b
0,0,0
1,1,1
2,2,2
3,3,3
4,4,4
```
Whilst this:
```
df.to_csv(r'c:\data\t.csv', index=False)
```
results in this:
```
a,b
0,0
1,1
2,2
3,3
4,4
```
It's for the situation where you may have some index values you want to save |
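A quick way to convince yourself without touching disk is to write to an in-memory buffer — a small sketch:

```
import io
import pandas as pd

df = pd.DataFrame({'a': range(3), 'b': range(3)})

buf = io.StringIO()
df.to_csv(buf, index=False)
print(buf.getvalue().splitlines()[0])  # a,b -- no leading comma
```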
Is there a pandas function to display the first/last n columns, as in .head() & .tail()? | 30,608,310 | 5 | 2015-06-02T23:32:13Z | 30,608,484 | 7 | 2015-06-02T23:51:24Z | [
"python",
"pandas"
] | I love using the `.head()` and `.tail()` functions in pandas to circumstantially display a certain amount of rows (sometimes I want less, sometimes I want more!). But is there a way to do this with the columns of a DataFrame?
Yes, I know that I can change the display options, as in:
`pd.set_option('display.max_columns', 20)`
But that is too clunky to keep having to change on-the-fly, and anyway, it would only replace the `.head()` functionality, but not the `.tail()` functionality.
I also know that this could be done using an accessor:
`yourDF.iloc[:,:20]` to emulate .head(20) and `yourDF.iloc[:,-20:]` to emulate .tail(20).
It may look like a short amount of code, but honestly it's not as intuitive nor swift as when I use .head().
Does such a command exist? I couldn't find one! | No, such methods are not supplied by Pandas, but it is easy to make these methods yourself:
```
import numpy as np
import pandas as pd
def front(self, n):
return self.iloc[:, :n]
def back(self, n):
return self.iloc[:, -n:]
pd.DataFrame.front = front
pd.DataFrame.back = back
df = pd.DataFrame(np.random.randint(10, size=(4,10)))
```
So that now *all* DataFrame would possess these methods:
```
In [272]: df.front(4)
Out[272]:
0 1 2 3
0 2 5 2 8
1 9 9 1 3
2 7 0 7 4
3 8 3 9 2
In [273]: df.back(3)
Out[273]:
7 8 9
0 3 2 7
1 9 9 4
2 5 7 1
3 3 2 5
In [274]: df.front(4).back(2)
Out[274]:
2 3
0 2 8
1 1 3
2 7 4
3 9 2
```
---
If you put the code in a utility module, say, `utils_pandas.py`, then you can activate it with an import statement:
```
import utils_pandas
``` |
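If you'd rather not monkey-patch `pd.DataFrame`, one lightweight alternative (just a trick, not a pandas built-in) is to transpose, use the ordinary row-wise `head()`/`tail()`, and transpose back:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(10, size=(4, 10)))

first_four = df.T.head(4).T   # first 4 columns
last_three = df.T.tail(3).T   # last 3 columns
```

Note that transposing copies the data, so for very wide frames the `iloc` slices remain the cheaper option.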
Detect when TCP is congested python twisted socket server | 30,612,841 | 2 | 2015-06-03T06:52:13Z | 30,613,647 | 9 | 2015-06-03T07:34:44Z | [
"python",
"tcp",
"udp",
"server",
"twisted"
] | I'm working on a realtime MMO game, and have a working TCP server (along with a game client), but now I'm **considering using UDP** for constantly updating other players' positions (to greatly reduce random game stutter from TCP congestion control!)
I'd love some help from people smarter than me in this stuff (I'm new to python/twisted, and couldn't find this info elsewhere ;) )
**Currently, my server accepts connections with a simple Twisted Protocol. Eg.**
```
''' TCP reciever '''
class TCPProtocol(Protocol):
def connectionMade(self):
#add to list of connected clients
factory.clients.append(self)
def dataReceived(self, data):
pass
#setup factory and TCP protocol class
factory = Factory()
factory.protocol = TCPProtocol
factory.clients = []
reactor.listenTCP(1959, factory)
```
**@@@@@@@@@@@@@@ UPDATE @@@@@@@@@@@@@ :
How can I implement congestion checking for each Protocol instance separately? Please start me off with something inside the code sample below (where it says 'HELP HERE PLEASE'!)**
Am I thinking of this wrongly? Any guidance would be awesome, thanks!
```
from twisted.internet.protocol import DatagramProtocol
from twisted.internet import reactor
KServerIP='127.0.0.1'
KServerPortTCP=8000
#KClientPortUDP=7055
''' TCP reciever '''
class TCPProtocol(Protocol):
def connectionMade(self):
#add to list of connected clients
factory.clients.append(self)
#set client variables (simplified)
self.pos_x=100
self.pos_y=100
#add to game room (to receive updates)
mainGameRoom.clientsInRoom.append(self)
def dataReceived(self, data):
pass
#send custom byte message to client (I have special class to read it)
def sendMessageToClient(self, message, isUpdate):
''' @@@@@@@@@@@@@@@@@ HELP HERE PLEASE! @@@@@@@@@@@
if isUpdate and (CHECK IF CLIENT IS CONGESTED??? )
return (and send next update when not congested)'''
'''@@@@@@@@@@@@@@@@@ HELP HERE PLEASE! @@@@@@@@@@@ '''
#if not congested, write update!
msgLen = pack('!I', len(message.data))
self.transport.write(msgLen) #add length before message
self.transport.write(message.data)
#simplified version of my game room
#this room runs the game, and clients receive pos updates for
#everyone in this room (up to 50 people)
dt_gameloop=1.0/60 #loop time difference
dt_sendClientUpdate=0.1 #update interval
class GameRoom(object):
#room constants
room_w=1000
room_h=1000
def __init__(self):
super(GameRoom, self).__init__()
#schedule main game loop
l=task.LoopingCall(self.gameLoop)
l.start(dt_gameloop) # call every X seconds
#schedule users update loop
l=task.LoopingCall(self.sendAllClientsPosUpdate)
l.start(dt_sendClientUpdate) # call every X seconds
def gameLoop(self):
#game logic runs here (60 times a second), EG.
for anUpdateClient in self.clientsInRoom:
anUpdateClient.pos_x+=10
anUpdateClient.pos_y+=10
#send position update every 0.1 seconds,
#send all player positions to all clients
def sendAllClientsPosUpdate(self):
message = MessageWriter()
message.writeByte(MessageGameLoopUpdate) #message type
#create one byte message containing info for all players
message.writeInt(len(self.clientsInRoom)) #num items to read
for aClient in self.clientsInRoom:
message.writeInt(aClient.ID)
message.writeFloat( aClient.pos_x )#pos
message.writeFloat( aClient.pos_y )
#send message to all clients
for aClient in self.clientsInRoom:
aClient.sendMessageToClient(message, True)
#setup factory and TCP protocol class
factory = Factory()
factory.protocol = TCPProtocol
factory.clients = []
reactor.listenTCP(KServerPortTCP, factory)
#simple example, one game room
mainGameRoom=GameRoom()
print "Server started..."
reactor.run()
``` | [You probably don't need UDP (yet)](https://thoughtstreams.io/glyph/your-game-doesnt-need-udp-yet/).
The first thing that you say is that you want to "reduce network congestion ... from TCP". That is not what UDP does. UDP allows you to work around *congestion control*, which actually *increases* network congestion. Until you're aware of how to implement your own congestion control algorithms, UDP traffic on high-latency connections is just going to cause packet storms which overwhelm your server and flood your users' network connections, making them unusable.
The important thing about sending movement packets in a real-time game is that you always want to ensure that you don't waste time "catching up" with old movement packets when a new position is already available. In Twisted, you can use the [producer and consumer APIs](http://twistedmatrix.com/documents/15.2.1/core/howto/producers.html) to do that on a TCP connection just fine, like so:
```
from zope.interface import implementer
from twisted.internet.protocol import Protocol
from twisted.internet.interfaces import IPullProducer
def serializePosition(position):
"... take a 'position', return some bytes ..."
@implementer(IPullProducer)
class MovementUpdater(Protocol, object):
currentPosition = None  # give the attribute a starting value so the first comparison works
def updatePosition(self, newPosition):
if newPosition != self.currentPosition:
self.currentPosition = newPosition
self.needToSendPosition()
waitingToSend = False
def needToSendPosition(self):
if not self.waitingToSend:
self.waitingToSend = True
self.transport.registerProducer(self, False)
def resumeProducing(self):
self.transport.write(serializePosition(self.currentPosition))
self.transport.unregisterProducer()
self.waitingToSend = False
def stopProducing(self):
"nothing to do here"
```
Every time the game needs to send a new position, it can call `updatePosition` to update the player's current position. `updatePosition` first updates the current position, then calls `needToSendPosition` which marks the connection as needing to send a position update. This registers the protocol as a producer for its transport, which will cause `resumeProducing` to be called each time write-buffer space is available. As soon as `resumeProducing` is called, we send whatever the *latest* position is - if `updatePosition` is called 500 times while the network is congested, only one update will be sent as soon as the congestion alleviates.
This is a bit oversimplified, because each `transport` can only have one `producer` at a time, and your game server will probably have lots of different position updates to send to clients, so you will need a multiplexor which aggregates all the position updates from multiple clients, and also some code to order messages so that things other than position updates still get through but that position updates have priority.
This might seem like extra work if you're going to do UDP anyway, but if you're going to do UDP correctly and actually get any benefit from it, you will need to implement something very much like this *anyway*, so this won't be wasted. |
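The "send only the latest position" idea at the heart of this answer can be sketched independently of Twisted as a simple coalescing buffer (the class and method names below are invented for illustration): writers overwrite stale positions, and the sender drains whatever is current once the transport has room:

```python
class LatestPositionBuffer(object):
    """Coalesce per-player position updates: many writes, one send."""

    def __init__(self):
        self._latest = {}

    def update(self, player_id, position):
        self._latest[player_id] = position   # overwrites any stale position

    def drain(self):
        """Return the current snapshot and reset, as one 'send' would."""
        snapshot, self._latest = self._latest, {}
        return snapshot


buf = LatestPositionBuffer()
buf.update(1, (0, 0))
buf.update(1, (10, 10))   # supersedes the earlier update while "congested"
buf.update(2, (5, 5))
snapshot = buf.drain()    # only the latest position per player goes out
```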
How not to miss the next element after itertools.takewhile() | 30,615,659 | 13 | 2015-06-03T09:11:53Z | 30,615,837 | 10 | 2015-06-03T09:19:30Z | [
"python",
"itertools"
] | Say we wish to process an iterator and want to handle it by chunks.
The logic per chunk depends on previously-calculated chunks, so `groupby()` does not help.
Our friend in this case is itertools.takewhile():
```
while True:
chunk = itertools.takewhile(getNewChunkLogic(), myIterator)
process(chunk)
```
The problem is that `takewhile()` needs to go past the last element that meets the new chunk logic, thus 'eating' the first element for the next chunk.
There are various solutions to that, including wrapping, something à la C's `ungetc()`, etc.
My question is: is there an **elegant** solution? | `takewhile()` indeed needs to look at the next element to determine when to toggle behaviour.
You could use a wrapper that tracks the last seen element, and that can be 'reset' to back up one element:
```
_sentinel = object()
class OneStepBuffered(object):
def __init__(self, it):
self._it = iter(it)
self._last = _sentinel
self._next = _sentinel
def __iter__(self):
return self
def __next__(self):
if self._next is not _sentinel:
next_val, self._next = self._next, _sentinel
return next_val
try:
self._last = next(self._it)
return self._last
except StopIteration:
self._last = self._next = _sentinel
raise
next = __next__ # Python 2 compatibility
def step_back(self):
if self._last is _sentinel:
raise ValueError("Can't back up a step")
self._next, self._last = self._last, _sentinel
```
Wrap your iterator in this one before using it with `takewhile()`:
```
myIterator = OneStepBuffered(myIterator)
while True:
chunk = itertools.takewhile(getNewChunkLogic(), myIterator)
process(chunk)
myIterator.step_back()
```
Demo:
```
>>> from itertools import takewhile
>>> test_list = range(10)
>>> iterator = OneStepBuffered(test_list)
>>> list(takewhile(lambda i: i < 5, iterator))
[0, 1, 2, 3, 4]
>>> iterator.step_back()
>>> list(iterator)
[5, 6, 7, 8, 9]
``` |
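An alternative sketch that avoids a custom iterator class: let the predicate itself capture the boundary element that `takewhile()` swallows, and push it back with `itertools.chain()` before the next chunk (the helper name here is made up):

```python
from itertools import chain, takewhile

def chunks_with_pushback(iterable, predicates):
    """Split `iterable` into chunks, one predicate per chunk, without
    losing the boundary element that takewhile() consumes."""
    it = iter(iterable)
    for pred in predicates:
        boundary = []

        def guarded(x, pred=pred, boundary=boundary):
            if pred(x):
                return True
            boundary.append(x)       # remember the swallowed element
            return False

        yield list(takewhile(guarded, it))
        it = chain(boundary, it)     # boundary element starts the next chunk

chunks = list(chunks_with_pushback(range(10),
                                   [lambda x: x < 3,
                                    lambda x: x < 7,
                                    lambda x: True]))
# chunks == [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9]]
```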
Stuck in a while loop while using if statement | 30,619,535 | 6 | 2015-06-03T12:00:42Z | 30,619,565 | 7 | 2015-06-03T12:02:00Z | [
"python",
"python-2.7"
] | I'm new to Python and I'm trying to make a simple guess-the-number game, but I'm stuck in an if statement in a while loop. Here is the code.
I'm hitting this at the 'Your guess is too high' and 'too low' branches. I tried **break**ing out of it, but that simply stops the whole thing.
```
def guess_the_number():
number = random.randrange(20)
guessesMade = 0
print('Take a guess')
guess = input()
guess = int(guess)
while guessesMade < 6:
if guess < number:
print('Your guess is too low.')
if guess > number:
print('Your guess is too high.')
if guess == number:
break
if guess == number:
print'You got it in', guessesMade, 'guess(es)! Congratulations!'
else:
print'I\'m sorry, the number was', number
``` | You never increment `guessesMade` so `guessesMade < 6` will always be `True`. You need to modify this value within your loop. You also need to move your prompt for user input into the loop
```
while guessesMade < 6:
guess = int(input('Take a guess'))
if guess < number:
print('Your guess is too low.')
guessesMade += 1
elif guess > number:
print('Your guess is too high.')
guessesMade += 1
else:
break
``` |
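Putting the fix together, here is a complete sketch of the corrected game loop. Guesses are supplied as a list instead of being read from `input()` so the logic is easy to test; the function signature is illustrative, not from the original:

```python
import random

def guess_the_number(guesses, number=None, max_guesses=6):
    """Return the number of guesses used, or None if the player ran out."""
    if number is None:
        number = random.randrange(20)
    guesses_made = 0
    supply = iter(guesses)
    while guesses_made < max_guesses:
        guess = int(next(supply))
        guesses_made += 1              # the increment missing from the question
        if guess < number:
            print('Your guess is too low.')
        elif guess > number:
            print('Your guess is too high.')
        else:
            return guesses_made        # correct guess ends the loop
    return None

guess_the_number([5, 15, 12], number=12)   # -> 3
```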
Is filter thread-safe | 30,619,828 | 18 | 2015-06-03T12:15:28Z | 30,709,536 | 7 | 2015-06-08T12:44:26Z | [
"python"
] | I have a thread which is updating a list called `l`. Am I right in saying that it is thread-safe to do the following from another thread?
```
filter(lambda x: x[0] == "in", l)
```
If it's not thread-safe, is this then the correct approach:
```
import threading
import time
import Queue
class Logger(threading.Thread):
def __init__(self, log):
super(Logger, self).__init__()
self.log = log
self.data = []
self.finished = False
self.data_lock = threading.Lock()
def run(self):
while not self.finished:
try:
with self.data_lock:
self.data.append(self.log.get(block=True, timeout=0.1))
except Queue.Empty:
pass
def get_data(self, cond):
with self.data_lock:
d = filter(cond, self.data)
return d
def stop(self):
self.finished = True
self.join()
print("Logger stopped")
```
where the `get_data(self, cond)` method is used to retrieve a small subset of the data in `self.data` in a thread-safe manner. | First, to answer the question in your title: `filter` is just a function, so its thread-safety depends on the data structure you use it with.
As pointed out in the comments already, list operations themselves are thread-safe in CPython and protected by the GIL, but that is arguably only an implementation detail of CPython that you shouldn't really rely on. Even if you could rely on it, the atomicity of individual list operations is probably not the kind of thread safety you actually need:
The problem is that iterating over a sequence with `filter` is in general not an atomic operation. The sequence could be changed during iteration. Depending on the data-structure underlying your iterator this might cause more or less weird effects. One way to overcome this problem is by iterating over a copy of the sequence that is created with one atomic action. Easiest way to do this for standard sequences like `tuple`, `list`, `string` is with the slice operator like this:
```
filter(lambda x: x[0] == "in", l[:])
```
Apart from this not necessarily being thread-safe for other data-types, there's one problem with this though: it's only a shallow copy. As your list's elements seem to be list-like as well, another thread could in parallel do `del l[1000][:]` to empty one of the inner lists (which are pointed to in your shallow copy as well). This would make your filter expression fail with an `IndexError`.
All that said, it's not a shame to use a lock to protect access to your list and I'd definitely recommend it. Depending on how your data changes and how you work with the returned data, it might even be wise to deep-copy the elements while holding the lock and to return those copies. That way you can guarantee that once returned the filter condition won't suddenly change for the returned elements.
Wrt. your `Logger` code: I'm not 100 % sure how you plan to use this and if it's critical for you to run several threads on one queue and `join` them. What looks weird to me is that you never use [`Queue.task_done()`](https://docs.python.org/2/library/queue.html#Queue.Queue.task_done) (assuming that its `self.log` is a `Queue`). Also your polling of the queue is potentially wasteful. If you don't need the `join` of the thread, I'd suggest to at least turn the lock acquisition around:
```
class Logger(threading.Thread):
def __init__(self, log):
super(Logger, self).__init__()
self.daemon = True
self.log = log
self.data = []
self.data_lock = threading.Lock()
def run(self):
while True:
l = self.log.get() # thread will sleep here indefinitely
with self.data_lock:
self.data.append(l)
self.log.task_done()
def get_data(self, cond):
with self.data_lock:
d = filter(cond, self.data)
# maybe deepcopy d here
return d
```
Externally you could still do `log.join()` to make sure that all of the elements of the `log` queue are processed. |
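The "maybe deepcopy" comment above matters as soon as the elements themselves are mutable. A small sketch of why the snapshot should deep-copy matching rows before the lock is released:

```python
import copy
import threading

data = [[1, 'in'], [2, 'out']]        # list-like rows, as in the question
data_lock = threading.Lock()

def get_data(cond):
    with data_lock:
        # deep-copy the matching rows so later writer mutations
        # cannot change what the caller already received
        return [copy.deepcopy(row) for row in data if cond(row)]

snapshot = get_data(lambda row: row[1] == 'in')
data[0][1] = 'changed'                # a writer mutates the shared row later
# snapshot still holds [[1, 'in']]
```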
Flask and React routing | 30,620,276 | 13 | 2015-06-03T12:35:31Z | 30,620,788 | 14 | 2015-06-03T12:57:32Z | [
"python",
"flask",
"routing",
"reactjs"
] | I'm building a Flask app with React, and I've run into a problem with routing.
The backend serves as an API, hence some routes look like:
```
@app.route('/api/v1/do-something/', methods=["GET"])
def do_something():
return something()
```
and the main route, which leads to the React app:
```
@app.route('/')
def index():
return render_template('index.html')
```
I'm using [react-router](https://github.com/rackt/react-router) in the React app and everything works fine: react-router takes me to `/something` and I get the rendered view. But when I refresh the page on `/something`, the Flask app handles the call itself and I get a `Not Found` error.
What is the best solution? I was thinking about redirecting all calls that don't hit `/api/v1/...` to `/`, but that's not ideal, as I would get back the home page of my app, not the rendered React view. | We used [catch-all URLs](http://flask.pocoo.org/snippets/57/) for this.
```
from flask import Flask
app = Flask(__name__)
@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def catch_all(path):
return 'You want path: %s' % path
if __name__ == '__main__':
app.run()
```
You can also go an extra mile and reuse the Flask routing system to match `path` to the same routes as client so you can embed the data client will need as JSON inside the HTML response. |
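A minimal sketch combining the two pieces (endpoint names are illustrative): Werkzeug matches fully static rules like `/api/v1/...` before broad converter rules like `/<path:path>`, so the API keeps working and everything else falls through to the React entry point:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/v1/do-something/', methods=['GET'])
def do_something():
    return jsonify(ok=True)        # static API rules win over the catch-all

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def catch_all(path):
    # in a real app: return render_template('index.html')
    return 'index'
```

Refreshing on any client-side route now returns the React entry point, and react-router takes over from there.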
Python List Slicing with None as argument | 30,622,809 | 14 | 2015-06-03T14:20:27Z | 30,622,949 | 14 | 2015-06-03T14:26:50Z | [
"python",
"list"
] | Via trial and error I found out that
```
my_list = range(10)
my_list[:None] == my_list[:]
```
I use this for django query sets so I can define a size or take all:
```
some_queryset[:length if length else None]
# @IanAuld
some_queryset[:length or None]
# @Bakuriu
# length works for all numbers and None if you want all elements
# does not work with False of any other False values
some_queryset[:length]
```
* Is it good practice to use `None` while slicing?
* Can problems occur with this method in any case? | Yes, it is fine to use `None`, as its behavior is specified by the [documentation](https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange):
> The slice of s from i to j is defined as the sequence of items with index k such that i <= k < j. If i or j is greater than len(s), use len(s). If i is omitted or None, use 0. **If j is omitted or None, use len(s)**. If i is greater than or equal to j, the slice is empty.
Using `None` for one of the slice parameters is the same as omitting it. |
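That behaviour makes `None` a convenient "no limit" default value, as a quick sketch shows:

```python
my_list = list(range(10))

# None behaves exactly like an omitted bound, in any slice position
assert my_list[:None] == my_list[:]
assert my_list[None:None:None] == my_list

def take(seq, length=None):
    """length=None means "take everything" -- no special-casing needed."""
    return seq[:length]

take(my_list, 3)   # -> [0, 1, 2]
take(my_list)      # -> the whole list, unchanged
```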
No such file or directory "limits.h" when installing Pillow on Alpine Linux | 30,624,829 | 12 | 2015-06-03T15:45:03Z | 30,873,179 | 22 | 2015-06-16T16:27:53Z | [
"python",
"linux",
"python-2.7",
"alpine"
] | I'm running alpine-linux on a Raspberry Pi 2. I'm trying to install Pillow via this command:
```
pip install pillow
```
This is the output from the command:
```
Installing collected packages: pillow
Running setup.py install for pillow
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-gNq0WA/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-nDKwei-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-armv7l-2.7
creating build/lib.linux-armv7l-2.7/PIL
copying PIL/XVThumbImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/XpmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/XbmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/WmfImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/WebPImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/WalImageFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TiffTags.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TiffImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TgaImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TarIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/SunImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/SpiderImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/SgiImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PyAccess.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PSDraw.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PsdImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PpmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PngImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PixarImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PdfImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PcxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PcfFontFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PcdImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PalmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PaletteFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/OleFileIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MspImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MpoImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MpegImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MicImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/McIdasImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/JpegPresets.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/JpegImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/Jpeg2KImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/IptcImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImtImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageWin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageTransform.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageTk.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageStat.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageShow.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageSequence.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageQt.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImagePath.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImagePalette.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageOps.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageMorph.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageMode.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageMath.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageGrab.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFont.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFilter.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFileIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageEnhance.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageDraw2.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageDraw.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageColor.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageCms.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageChops.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/Image.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/IcoImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/IcnsImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/Hdf5StubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GribStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GimpPaletteFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GimpGradientFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GifImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GdImageFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GbrImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FpxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FontFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FliImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FitsStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ExifTags.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/EpsImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/DcxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/CurImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ContainerIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/BufrStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/BmpImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/BdfFontFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/_util.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/_binary.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/__init__.py -> build/lib.linux-armv7l-2.7/PIL
running egg_info
writing Pillow.egg-info/PKG-INFO
writing top-level names to Pillow.egg-info/top_level.txt
writing dependency_links to Pillow.egg-info/dependency_links.txt
warning: manifest_maker: standard file '-c' not found
reading manifest file 'Pillow.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'LICENSE' under directory 'docs'
writing manifest file 'Pillow.egg-info/SOURCES.txt'
copying PIL/OleFileIO-README.md -> build/lib.linux-armv7l-2.7/PIL
running build_ext
building 'PIL._imaging' extension
creating build/temp.linux-armv7l-2.7/libImaging
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c _imaging.c -o build/temp.linux-armv7l-2.7/_imaging.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c outline.c -o build/temp.linux-armv7l-2.7/outline.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Bands.c -o build/temp.linux-armv7l-2.7/libImaging/Bands.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/ConvertYCbCr.c -o build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o
In file included from _imaging.c:76:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from outline.c:20:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/ConvertYCbCr.c:15:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Bands.c:19:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Draw.c -o build/temp.linux-armv7l-2.7/libImaging/Draw.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Filter.c -o build/temp.linux-armv7l-2.7/libImaging/Filter.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/GifEncode.c -o build/temp.linux-armv7l-2.7/libImaging/GifEncode.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/LzwDecode.c -o build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Draw.c:35:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Filter.c:27:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/GifEncode.c:20:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/LzwDecode.c:31:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Offset.c -o build/temp.linux-armv7l-2.7/libImaging/Offset.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Quant.c -o build/temp.linux-armv7l-2.7/libImaging/Quant.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/PcxDecode.c -o build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/RawEncode.c -o build/temp.linux-armv7l-2.7/libImaging/RawEncode.o
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Offset.c:18:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Quant.c:21:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/PcxDecode.c:17:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/RawEncode.c:21:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/UnpackYCC.c -o build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/ZipEncode.c -o build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/BoxBlur.c -o build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/UnpackYCC.c:17:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/ZipEncode.c:18:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/BoxBlur.c:1:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
Building using 4 processes
gcc -shared -Wl,--as-needed build/temp.linux-armv7l-2.7/_imaging.o build/temp.linux-armv7l-2.7/decode.o build/temp.linux-armv7l-2.7/encode.o build/temp.linux-armv7l-2.7/map.o build/temp.linux-armv7l-2.7/display.o build/temp.linux-armv7l-2.7/outline.o build/temp.linux-armv7l-2.7/path.o build/temp.linux-armv7l-2.7/libImaging/Access.o build/temp.linux-armv7l-2.7/libImaging/AlphaComposite.o build/temp.linux-armv7l-2.7/libImaging/Resample.o build/temp.linux-armv7l-2.7/libImaging/Bands.o build/temp.linux-armv7l-2.7/libImaging/BitDecode.o build/temp.linux-armv7l-2.7/libImaging/Blend.o build/temp.linux-armv7l-2.7/libImaging/Chops.o build/temp.linux-armv7l-2.7/libImaging/Convert.o build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o build/temp.linux-armv7l-2.7/libImaging/Copy.o build/temp.linux-armv7l-2.7/libImaging/Crc32.o build/temp.linux-armv7l-2.7/libImaging/Crop.o build/temp.linux-armv7l-2.7/libImaging/Dib.o build/temp.linux-armv7l-2.7/libImaging/Draw.o build/temp.linux-armv7l-2.7/libImaging/Effects.o build/temp.linux-armv7l-2.7/libImaging/EpsEncode.o build/temp.linux-armv7l-2.7/libImaging/File.o build/temp.linux-armv7l-2.7/libImaging/Fill.o build/temp.linux-armv7l-2.7/libImaging/Filter.o build/temp.linux-armv7l-2.7/libImaging/FliDecode.o build/temp.linux-armv7l-2.7/libImaging/Geometry.o build/temp.linux-armv7l-2.7/libImaging/GetBBox.o build/temp.linux-armv7l-2.7/libImaging/GifDecode.o build/temp.linux-armv7l-2.7/libImaging/GifEncode.o build/temp.linux-armv7l-2.7/libImaging/HexDecode.o build/temp.linux-armv7l-2.7/libImaging/Histo.o build/temp.linux-armv7l-2.7/libImaging/JpegDecode.o build/temp.linux-armv7l-2.7/libImaging/JpegEncode.o build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o build/temp.linux-armv7l-2.7/libImaging/Matrix.o build/temp.linux-armv7l-2.7/libImaging/ModeFilter.o build/temp.linux-armv7l-2.7/libImaging/MspDecode.o build/temp.linux-armv7l-2.7/libImaging/Negative.o build/temp.linux-armv7l-2.7/libImaging/Offset.o 
build/temp.linux-armv7l-2.7/libImaging/Pack.o build/temp.linux-armv7l-2.7/libImaging/PackDecode.o build/temp.linux-armv7l-2.7/libImaging/Palette.o build/temp.linux-armv7l-2.7/libImaging/Paste.o build/temp.linux-armv7l-2.7/libImaging/Quant.o build/temp.linux-armv7l-2.7/libImaging/QuantOctree.o build/temp.linux-armv7l-2.7/libImaging/QuantHash.o build/temp.linux-armv7l-2.7/libImaging/QuantHeap.o build/temp.linux-armv7l-2.7/libImaging/PcdDecode.o build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o build/temp.linux-armv7l-2.7/libImaging/PcxEncode.o build/temp.linux-armv7l-2.7/libImaging/Point.o build/temp.linux-armv7l-2.7/libImaging/RankFilter.o build/temp.linux-armv7l-2.7/libImaging/RawDecode.o build/temp.linux-armv7l-2.7/libImaging/RawEncode.o build/temp.linux-armv7l-2.7/libImaging/Storage.o build/temp.linux-armv7l-2.7/libImaging/SunRleDecode.o build/temp.linux-armv7l-2.7/libImaging/TgaRleDecode.o build/temp.linux-armv7l-2.7/libImaging/Unpack.o build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o build/temp.linux-armv7l-2.7/libImaging/UnsharpMask.o build/temp.linux-armv7l-2.7/libImaging/XbmDecode.o build/temp.linux-armv7l-2.7/libImaging/XbmEncode.o build/temp.linux-armv7l-2.7/libImaging/ZipDecode.o build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o build/temp.linux-armv7l-2.7/libImaging/TiffDecode.o build/temp.linux-armv7l-2.7/libImaging/Incremental.o build/temp.linux-armv7l-2.7/libImaging/Jpeg2KDecode.o build/temp.linux-armv7l-2.7/libImaging/Jpeg2KEncode.o build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o -L/usr/lib -L/usr/local/lib -L/usr/lib -ljpeg -lpython2.7 -o build/lib.linux-armv7l-2.7/PIL/_imaging.so
gcc: error: build/temp.linux-armv7l-2.7/_imaging.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/decode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/encode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/map.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/display.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/outline.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/path.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Access.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/AlphaComposite.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Resample.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Bands.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/BitDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Blend.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Chops.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Convert.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Copy.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Crc32.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Crop.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Dib.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Draw.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Effects.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/EpsEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/File.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Fill.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Filter.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/FliDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Geometry.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/GetBBox.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/GifDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/GifEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/HexDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Histo.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/JpegDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/JpegEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Matrix.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ModeFilter.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/MspDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Negative.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Offset.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Pack.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PackDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Palette.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Paste.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Quant.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantOctree.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantHash.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantHeap.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcdDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcxEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Point.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/RankFilter.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/RawDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/RawEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Storage.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/SunRleDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/TgaRleDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Unpack.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/UnsharpMask.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/XbmDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/XbmEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ZipDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/TiffDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Incremental.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Jpeg2KDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Jpeg2KEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o: No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-gNq0WA/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-nDKwei-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-gNq0WA/pillow
```
I think this is probably the relevant section:
```
In file included from libImaging/BoxBlur.c:1:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
```
My research shows it's probably something with the header files. I have installed these:
```
apk add py-configobj libusb py-pip python-dev gcc linux-headers
pip install --upgrade pip
pip install -U setuptools
pip install Cheetah
pip install pyusb
``` | Alpine Linux uses musl libc rather than glibc, so the standard C headers live in a separate package. You probably need to install `musl-dev`. |
No such file or directory "limits.h" when installing Pillow on Alpine Linux | 30,624,829 | 12 | 2015-06-03T15:45:03Z | 35,695,626 | 11 | 2016-02-29T09:12:18Z | [
"python",
"linux",
"python-2.7",
"alpine"
] | I'm running alpine-linux on a Raspberry Pi 2. I'm trying to install Pillow via this command:
```
pip install pillow
```
This is the output from the command:
```
Installing collected packages: pillow
Running setup.py install for pillow
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-gNq0WA/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-nDKwei-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-armv7l-2.7
creating build/lib.linux-armv7l-2.7/PIL
copying PIL/XVThumbImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/XpmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/XbmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/WmfImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/WebPImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/WalImageFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TiffTags.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TiffImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TgaImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/TarIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/SunImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/SpiderImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/SgiImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PyAccess.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PSDraw.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PsdImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PpmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PngImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PixarImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PdfImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PcxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PcfFontFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PcdImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PalmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/PaletteFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/OleFileIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MspImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MpoImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MpegImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/MicImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/McIdasImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/JpegPresets.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/JpegImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/Jpeg2KImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/IptcImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImtImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageWin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageTransform.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageTk.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageStat.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageShow.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageSequence.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageQt.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImagePath.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImagePalette.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageOps.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageMorph.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageMode.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageMath.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageGrab.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFont.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFilter.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFileIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageEnhance.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageDraw2.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageDraw.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageColor.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageCms.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ImageChops.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/Image.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/IcoImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/IcnsImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/Hdf5StubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GribStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GimpPaletteFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GimpGradientFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GifImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GdImageFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/GbrImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FpxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FontFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FliImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/FitsStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ExifTags.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/EpsImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/DcxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/CurImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/ContainerIO.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/BufrStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/BmpImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/BdfFontFile.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/_util.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/_binary.py -> build/lib.linux-armv7l-2.7/PIL
copying PIL/__init__.py -> build/lib.linux-armv7l-2.7/PIL
running egg_info
writing Pillow.egg-info/PKG-INFO
writing top-level names to Pillow.egg-info/top_level.txt
writing dependency_links to Pillow.egg-info/dependency_links.txt
warning: manifest_maker: standard file '-c' not found
reading manifest file 'Pillow.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'LICENSE' under directory 'docs'
writing manifest file 'Pillow.egg-info/SOURCES.txt'
copying PIL/OleFileIO-README.md -> build/lib.linux-armv7l-2.7/PIL
running build_ext
building 'PIL._imaging' extension
creating build/temp.linux-armv7l-2.7/libImaging
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c _imaging.c -o build/temp.linux-armv7l-2.7/_imaging.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c outline.c -o build/temp.linux-armv7l-2.7/outline.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Bands.c -o build/temp.linux-armv7l-2.7/libImaging/Bands.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/ConvertYCbCr.c -o build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o
In file included from _imaging.c:76:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from outline.c:20:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/ConvertYCbCr.c:15:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Bands.c:19:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Draw.c -o build/temp.linux-armv7l-2.7/libImaging/Draw.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Filter.c -o build/temp.linux-armv7l-2.7/libImaging/Filter.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/GifEncode.c -o build/temp.linux-armv7l-2.7/libImaging/GifEncode.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/LzwDecode.c -o build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Draw.c:35:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Filter.c:27:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/GifEncode.c:20:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/LzwDecode.c:31:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Offset.c -o build/temp.linux-armv7l-2.7/libImaging/Offset.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Quant.c -o build/temp.linux-armv7l-2.7/libImaging/Quant.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/PcxDecode.c -o build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/RawEncode.c -o build/temp.linux-armv7l-2.7/libImaging/RawEncode.o
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Offset.c:18:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/Quant.c:21:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/PcxDecode.c:17:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/RawEncode.c:21:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/UnpackYCC.c -o build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/ZipEncode.c -o build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/BoxBlur.c -o build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/UnpackYCC.c:17:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/ImPlatform.h:10:0,
from libImaging/Imaging.h:14,
from libImaging/ZipEncode.c:18:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
In file included from libImaging/BoxBlur.c:1:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
Building using 4 processes
gcc -shared -Wl,--as-needed build/temp.linux-armv7l-2.7/_imaging.o build/temp.linux-armv7l-2.7/decode.o build/temp.linux-armv7l-2.7/encode.o build/temp.linux-armv7l-2.7/map.o build/temp.linux-armv7l-2.7/display.o build/temp.linux-armv7l-2.7/outline.o build/temp.linux-armv7l-2.7/path.o build/temp.linux-armv7l-2.7/libImaging/Access.o build/temp.linux-armv7l-2.7/libImaging/AlphaComposite.o build/temp.linux-armv7l-2.7/libImaging/Resample.o build/temp.linux-armv7l-2.7/libImaging/Bands.o build/temp.linux-armv7l-2.7/libImaging/BitDecode.o build/temp.linux-armv7l-2.7/libImaging/Blend.o build/temp.linux-armv7l-2.7/libImaging/Chops.o build/temp.linux-armv7l-2.7/libImaging/Convert.o build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o build/temp.linux-armv7l-2.7/libImaging/Copy.o build/temp.linux-armv7l-2.7/libImaging/Crc32.o build/temp.linux-armv7l-2.7/libImaging/Crop.o build/temp.linux-armv7l-2.7/libImaging/Dib.o build/temp.linux-armv7l-2.7/libImaging/Draw.o build/temp.linux-armv7l-2.7/libImaging/Effects.o build/temp.linux-armv7l-2.7/libImaging/EpsEncode.o build/temp.linux-armv7l-2.7/libImaging/File.o build/temp.linux-armv7l-2.7/libImaging/Fill.o build/temp.linux-armv7l-2.7/libImaging/Filter.o build/temp.linux-armv7l-2.7/libImaging/FliDecode.o build/temp.linux-armv7l-2.7/libImaging/Geometry.o build/temp.linux-armv7l-2.7/libImaging/GetBBox.o build/temp.linux-armv7l-2.7/libImaging/GifDecode.o build/temp.linux-armv7l-2.7/libImaging/GifEncode.o build/temp.linux-armv7l-2.7/libImaging/HexDecode.o build/temp.linux-armv7l-2.7/libImaging/Histo.o build/temp.linux-armv7l-2.7/libImaging/JpegDecode.o build/temp.linux-armv7l-2.7/libImaging/JpegEncode.o build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o build/temp.linux-armv7l-2.7/libImaging/Matrix.o build/temp.linux-armv7l-2.7/libImaging/ModeFilter.o build/temp.linux-armv7l-2.7/libImaging/MspDecode.o build/temp.linux-armv7l-2.7/libImaging/Negative.o build/temp.linux-armv7l-2.7/libImaging/Offset.o 
build/temp.linux-armv7l-2.7/libImaging/Pack.o build/temp.linux-armv7l-2.7/libImaging/PackDecode.o build/temp.linux-armv7l-2.7/libImaging/Palette.o build/temp.linux-armv7l-2.7/libImaging/Paste.o build/temp.linux-armv7l-2.7/libImaging/Quant.o build/temp.linux-armv7l-2.7/libImaging/QuantOctree.o build/temp.linux-armv7l-2.7/libImaging/QuantHash.o build/temp.linux-armv7l-2.7/libImaging/QuantHeap.o build/temp.linux-armv7l-2.7/libImaging/PcdDecode.o build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o build/temp.linux-armv7l-2.7/libImaging/PcxEncode.o build/temp.linux-armv7l-2.7/libImaging/Point.o build/temp.linux-armv7l-2.7/libImaging/RankFilter.o build/temp.linux-armv7l-2.7/libImaging/RawDecode.o build/temp.linux-armv7l-2.7/libImaging/RawEncode.o build/temp.linux-armv7l-2.7/libImaging/Storage.o build/temp.linux-armv7l-2.7/libImaging/SunRleDecode.o build/temp.linux-armv7l-2.7/libImaging/TgaRleDecode.o build/temp.linux-armv7l-2.7/libImaging/Unpack.o build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o build/temp.linux-armv7l-2.7/libImaging/UnsharpMask.o build/temp.linux-armv7l-2.7/libImaging/XbmDecode.o build/temp.linux-armv7l-2.7/libImaging/XbmEncode.o build/temp.linux-armv7l-2.7/libImaging/ZipDecode.o build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o build/temp.linux-armv7l-2.7/libImaging/TiffDecode.o build/temp.linux-armv7l-2.7/libImaging/Incremental.o build/temp.linux-armv7l-2.7/libImaging/Jpeg2KDecode.o build/temp.linux-armv7l-2.7/libImaging/Jpeg2KEncode.o build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o -L/usr/lib -L/usr/local/lib -L/usr/lib -ljpeg -lpython2.7 -o build/lib.linux-armv7l-2.7/PIL/_imaging.so
gcc: error: build/temp.linux-armv7l-2.7/_imaging.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/decode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/encode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/map.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/display.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/outline.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/path.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Access.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/AlphaComposite.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Resample.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Bands.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/BitDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Blend.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Chops.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Convert.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Copy.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Crc32.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Crop.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Dib.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Draw.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Effects.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/EpsEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/File.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Fill.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Filter.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/FliDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Geometry.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/GetBBox.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/GifDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/GifEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/HexDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Histo.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/JpegDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/JpegEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Matrix.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ModeFilter.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/MspDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Negative.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Offset.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Pack.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PackDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Palette.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Paste.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Quant.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantOctree.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantHash.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantHeap.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcdDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcxEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Point.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/RankFilter.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/RawDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/RawEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Storage.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/SunRleDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/TgaRleDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Unpack.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/UnsharpMask.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/XbmDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/XbmEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ZipDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/TiffDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Incremental.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Jpeg2KDecode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/Jpeg2KEncode.o: No such file or directory
gcc: error: build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o: No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-gNq0WA/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-nDKwei-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-gNq0WA/pillow
```
I think this is probably the relevant section:
```
In file included from libImaging/BoxBlur.c:1:0:
/usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory
#include <limits.h>
^
compilation terminated.
```
My research shows it's probably something with the header files. I have installed these:
```
apk add py-configobj libusb py-pip python-dev gcc linux-headers
pip install --upgrade pip
pip install -U setuptools
pip install Cheetah
pip install pyusb
``` | @zakaria answer is correct, but if you stumble upon
```
fatal error: linux/limits.h: No such file or directory
```
then you need the package `linux-headers` (notice the prefix `linux` before `limits.h`).
```
apk add linux-headers
``` |
Conditional renaming of strings in Python | 30,624,992 | 2 | 2015-06-03T15:53:02Z | 30,625,154 | 9 | 2015-06-03T16:00:07Z | [
"python",
"string"
] | How do I create a conditional renaming of variable strings in Python?
Let's say I have:
```
fruit = "Apfel"
```
If it's "Apfel", I want to rename it to "`Apple`".
Alternatively, the variable may return a different string.
```
fruit = "Erdbeere"
```
If so, I want to rename it to "`Strawberry`". | You need to prepare a dictionary in advance. Either use publicly available lexicons or invest some time building it.
```
fruit = "Apfel"
myDict = {"Apfel":"Apple", "Erdbeere":"Strawberry"}
fruit = myDict[fruit]
print fruit
```
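If a word may be missing from the dictionary, a hedged variant (the dictionary contents and function name here are illustrative, not from the original answer) is to use `dict.get`, which also lets you normalize case in one place:

```python
# Keys stored lower-case so "Apfel" and "apfel" both match.
my_dict = {"apfel": "Apple", "erdbeere": "Strawberry"}

def translate(word):
    # Unknown words are returned unchanged instead of raising KeyError.
    return my_dict.get(word.lower(), word)

translate("Apfel")    # 'Apple'
translate("Banane")   # 'Banane' (no entry, left as-is)
```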
Also take care if the word is or isn't capitalized. |
Traceback: AttributeError:addinfourl instance has no attribute '__exit__' | 30,627,937 | 3 | 2015-06-03T18:31:12Z | 30,628,142 | 29 | 2015-06-03T18:42:54Z | [
"python",
"python-2.7"
] | ```
from urllib import urlopen
with urlopen('https://www.python.org') as story:
story_words = []
for line in story:
line_words = line.split()
for words in line_words:
story_words.append(word)
```
Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: addinfourl instance has no attribute '__exit__'
```
I don't understand what's wrong with the above code or how to resolve it.
System information : python 2.7 in ubuntu oracle virtual box. | That error is caused by this line:
```
with urlopen('https://www.python.org') as story:
```
To fix this, replace that line with the following line:
```
story = urlopen('https://www.python.org')
```
---
# Additional info on the error
**Why is this happening?**
In order for a `with ... as` statement to work on an object, that object must implement the context manager protocol. The `AttributeError` was raised because the object returned by `urlopen` doesn't implement it.
Since the author of `urlopen` hasn't implemented that, there's not much you can do about it, except for:
1. either don't use the `with...as` statement.
2. or, if you must, [see these docs](https://docs.python.org/2/library/contextlib.html#contextlib.closing) (thanks to [@vaultah](http://stackoverflow.com/users/2301450/vaultah) who provided this solution in comments below).
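A hedged sketch of option 2: `contextlib.closing` turns any object that has a `.close()` method into a context manager. The `FakeResponse` class below is only a stand-in for the `addinfourl` object `urlopen()` returns:

```python
from contextlib import closing

class FakeResponse(object):
    """Stand-in for the addinfourl object urlopen() returns."""
    def __init__(self):
        self.closed = False
    def read(self):
        return "first line\nsecond line"
    def close(self):
        self.closed = True

resp = FakeResponse()
with closing(resp) as story:   # works even without __enter__/__exit__
    words = story.read().split()

resp.closed   # True: closing() called .close() when the block exited
```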
**How to implement the context manager?**
You can implement the context manager by defining `__enter__` and `__exit__` methods for an object/class.
[Do read these docs on context managers](https://docs.python.org/2/library/stdtypes.html#context-manager-types)
Example:
```
>>> class Person(object):
def __init__(self, name):
self.name = name
>>> with Person("John Doe") as p:
print p.name
>>> AttributeError: __exit__
```
Above, we got an `AttributeError` because we haven't implemented context manager for `Person`. Below is how context manager is implemented.
```
>>> class Person(object):
def __init__(self, name):
self.name = name
def __enter__(self):
# The value returned by this method is
# assigned to the variable after ``as``
return self
def __exit__(self, exc_type, exc_value, exc_traceback ):
# returns either True or False
# Don't raise any exceptions in this method
return True
>>> with Person("John Doe") as p:
print p.name
>>> "John Doe"
``` |
Switch every pair of characters in a string | 30,628,176 | 5 | 2015-06-03T18:44:33Z | 30,628,231 | 7 | 2015-06-03T18:47:33Z | [
"python",
"string",
"performance",
"python-2.7"
] | For example, having the string:
```
abcdefghijklmnopqrstuvwxyz
```
should result in something like this:
```
badcfehgjilknmporqtsvuxwzy
```
How do I even go about it?
I thought of something not very efficient, such as:
```
s = str(range(ord('a'), ord('z') + 1))
new_s = ''
for i in xrange(len(s)):
if i != 0 and i % 2 == 0:
new_s += '_' + s[i]
else:
new_s += s[i]
# Now it should result in a string such as 'ab_cd_ef_...wx_yz'
l = new_s.split('_')
for i in xrange(len(l)):
l[i] = l[i][::-1]
result = str(l)
```
Is there any better way? Some way that is more efficient, or more general, so I could also handle groups of 3 letters more easily? | You can use the `zip()` function, which would return a list of tuples like `[(b,a), (d,c), ...]`, and then apply the `.join()` method to each tuple and to the resulting list as well.
```
a = "abcdefghijklmnopqrstuvwxyz"
# a[::2] = "acegikmoqsuwy"
# a[1::2] = "bdfhjlnprtvxz"
print "".join("".join(i) for i in zip(a[1::2], a[::2]))
>>> badcfehgjilknmporqtsvuxwzy
```
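To address the question's closing ask about generalizing to groups of 3 (or any size), a hedged slicing sketch (the function name is invented here):

```python
def swap_chunks(s, n=2):
    # Reverse every n-character chunk; a short trailing chunk is
    # reversed in place, so a lone last character stays put.
    return "".join(s[i:i + n][::-1] for i in range(0, len(s), n))

swap_chunks("abcdefghijklmnopqrstuvwxyz")   # 'badcfehgjilknmporqtsvuxwzy'
swap_chunks("abcdef", 3)                    # 'cbafed'
```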
**EDIT:** To handle the case of odd length strings, as suggested by @Ashwini and @TigerhawkT3, you may change the code as:
```
print "".join("".join(i) for i in zip(a[1::2], a[::2])) + (a[-1] if len(a) % 2 else '')
``` |
2 names for a same attribute | 30,630,700 | 8 | 2015-06-03T21:05:02Z | 30,630,885 | 11 | 2015-06-03T21:15:14Z | [
"python"
] | I would like to know if there is a way to "link" two attributes of a class or to give 2 names to a same attribute?
For example, I'm actually working on a script which creates a triangle from data given by users. My triangle is ABC. The sides of this triangle are AB, BC and CA. So the triangle has got these 3 attributes (self.AB, self.BC, self.CA). But AB = BA so I would like to allow users to do `print myInstance.BA` instead of `print myInstance.AB`.
So I thought to create the attribute self.AB and the property BA (which returns self.AB). That works fine when I try to do `print myInstance.BA` instead of `print myInstance.AB` but I'm greedy...
I also would like to allow users to do `myInstance.BA = 5` instead of `myInstance.AB = 5` and when doing this also edit the attribute AB.
Is there a way to do this? | Python properties can have setters. So all you need is
```
class Foo(object):
@property
def BA(self):
return self.AB
@BA.setter
def BA(self, value):
self.AB = value
```
And inspired by @amccormack's answer, if you can rely on the order of attribute names, this works more generically for e.g. edges bc, cd:
```
class Foo(object):
def __init__(self):
self.ab = 100
def __getattr__(self, name):
return getattr(self, "".join(sorted(name)))
def __setattr__(self, name, value):
super(Foo, self).__setattr__("".join(sorted(name)), value)
f = Foo()
print f.ba
f.ba = 200
print f.ba
``` |
Writing to MySQL database with pandas using SQLAlchemy, to_sql | 30,631,325 | 13 | 2015-06-03T21:45:10Z | 30,653,988 | 16 | 2015-06-04T20:56:29Z | [
"python",
"mysql",
"pandas",
"sqlalchemy",
"mysql-connector"
] | trying to write pandas dataframe to MySQL table using to\_sql. Previously been using flavor='mysql', however it will be deprecated in the future and wanted to start the transition to using SQLAlchemy engine.
sample code:
```
import pandas as pd
import mysql.connector
from sqlalchemy import create_engine
engine = create_engine('mysql+mysqlconnector://[user]:[pass]@[host]:[port]/[schema]', echo=False)
cnx = engine.raw_connection()
data = pd.read_sql('SELECT * FROM sample_table', cnx)
data.to_sql(name='sample_table2', con=cnx, if_exists = 'append', index=False)
```
The read works fine but the to\_sql has an error:
**DatabaseError: Execution failed on sql 'SELECT name FROM sqlite\_master WHERE type='table' AND name=?;': Wrong number of arguments during string formatting**
Why does it look like it is trying to use sqlite? What is the correct use of a sqlalchemy connection with mysql and specifically mysql.connector?
I also tried passing the engine in as the connection, and that gave me an error referencing no cursor object.
```
data.to_sql(name='sample_table2', con=engine, if_exists = 'append', index=False)
>>AttributeError: 'Engine' object has no attribute 'cursor'
``` | Using the engine in place of the raw\_connection() worked:
```
import pandas as pd
import mysql.connector
from sqlalchemy import create_engine
engine = create_engine('mysql+mysqlconnector://[user]:[pass]@[host]:[port]/[schema]', echo=False)
data.to_sql(name='sample_table2', con=engine, if_exists = 'append', index=False)
```
Not clear why, when I tried this yesterday, it gave me the earlier error |
How can i use signals in django bulk create | 30,632,743 | 8 | 2015-06-03T23:49:43Z | 30,632,974 | 8 | 2015-06-04T00:20:39Z | [
"python",
"django",
"signals"
] | I have this code
`Task.objects.bulk_create(ces)`
Now this is my signal
```
@receiver(pre_save, sender=Task)
def save_hours(sender, instance, *args, **kwargs):
logger.debug('test')
```
Now this signal is not triggered in bulk create
I am using django 1.8 | As mentioned `bulk_create` does not trigger these signals -
<https://docs.djangoproject.com/en/1.8/ref/models/querysets/#bulk-create>
> This method inserts the provided list of objects into the database in
> an efficient manner (generally only 1 query, no matter how many
> objects there are).
>
> This has a number of caveats though:
>
> * The model's save() method will not be called, and the pre\_save and post\_save signals will not be sent.
> * It does not work with child models in a multi-table inheritance scenario.
> * If the model's primary key is an AutoField it does not retrieve and set the primary key attribute, as save() does.
> * It does not work with many-to-many relationships.
> * The batch\_size parameter controls how many objects are created in single query. The default is to create all objects in one batch,
> except for SQLite where the default is such that at most 999 variables
> per query are used.
So you have to trigger them manually. If you want this for all models you can override the `bulk_create` and send them yourself like this -
```
class CustomManager(models.Manager):
    def bulk_create(self, objs, **kwargs):
        result = super(CustomManager, self).bulk_create(objs, **kwargs)
        for obj in objs:
            [......] # code to send signal
        return result
```
Then use this manager -
```
class Task(models.Model):
objects = CustomManager()
....
``` |
TypeError: Super does not take Key word arguments? | 30,633,889 | 9 | 2015-06-04T02:20:37Z | 30,633,917 | 9 | 2015-06-04T02:24:00Z | [
"python",
"class",
"inheritance",
"super"
] | First, here is my code:
```
class Enemy():
def __init__(self, name, hp, damage):
self.name = name
self.hp = hp
self.damage = damage
def is_alive(self):
"""Checks if alive"""
return self.hp > 0
class WildBoar(Enemy):
def __init__(self):
super(WildBoar, name="Wild Boar", hp=10, damage=2).__init__()
class Marauder(Enemy):
def __init__(self):
super(Marauder, name="Marauder", hp=20, damage=5).__init__()
class Kidnappers(Enemy):
def __init__(self):
super(Kidnappers, name="The Kidnappers", hp=30, damage=7).__init__()
```
When I compile this I get this error:
```
super(WildBoar, name="Wild Boar", hp=10, damage=2).__init__()
TypeError: super does not take keyword arguments
```
I tried looking around for any kind of help but I couldn't find anything. I also have some Kwargs in some other class's supers, but these are the ones raising any kind of issues (as of right now). So what could be causing this? I've also seen someone say that putting a `super` in the base class will fix it, but it didn't work (I passed in the same arguments that are in the Base class's `__init__`). | The arguments to the parent's `__init__` method should be passed to the `__init__` method:
```
super(Kidnappers, self).__init__(name="The Kidnappers", hp=30, damage=7)
# or
super(Kidnappers, self).__init__("The Kidnappers", 30, 7)
```
All you pass to `super()` is the child class (`Kidnappers` in this case) and a reference to the current instance (`self`).
---
Note however that if you are using Python 3.x, all you need to do is:
```
super().__init__("The Kidnappers", 30, 7)
```
and Python will work out the rest.
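Putting it together, a hedged end-to-end version of the corrected classes (Python 3 syntax):

```python
class Enemy:
    def __init__(self, name, hp, damage):
        self.name = name
        self.hp = hp
        self.damage = damage

    def is_alive(self):
        """Checks if alive"""
        return self.hp > 0

class Kidnappers(Enemy):
    def __init__(self):
        # Keyword arguments go to __init__, never to super() itself.
        super().__init__(name="The Kidnappers", hp=30, damage=7)

k = Kidnappers()
(k.name, k.hp, k.is_alive())   # ('The Kidnappers', 30, True)
```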
---
Here are some links to where this is explained in the documentation:
* [Python 2.x `super()`](https://docs.python.org/2/library/functions.html#super)
* [Python 3.x `super()`](https://docs.python.org/3/library/functions.html#super) |
Split a string only by first space in python | 30,636,248 | 4 | 2015-06-04T06:13:25Z | 30,636,260 | 8 | 2015-06-04T06:14:31Z | [
"python",
"string",
"split"
] | I have a string, for example: "238 NEO Sports". I want to split this string only at the first space. The output should be ["238","NEO Sports"]. One way I could think of is split() and then adding the other parts back together. Is there any other way to do so? | Just pass the count as the second parameter inside the split function.
```
>>> s = "238 NEO Sports"
>>> s.split(" ", 1)
['238', 'NEO Sports']
``` |
Split a string only by first space in python | 30,636,248 | 4 | 2015-06-04T06:13:25Z | 30,636,261 | 7 | 2015-06-04T06:14:37Z | [
"python",
"string",
"split"
] | I have a string, for example: "238 NEO Sports". I want to split this string only at the first space. The output should be ["238","NEO Sports"]. One way I could think of is split() and then adding the other parts back together. Is there any other way to do so? | RTFM: [`string.split(s[, sep[, maxsplit]])`](https://docs.python.org/2/library/string.html#string.split)
```
>>> "238 NEO Sports".split(None, 1)
['238', 'NEO Sports']
``` |
Connecting to database using SQLAlchemy | 30,638,003 | 6 | 2015-06-04T07:51:07Z | 30,638,039 | 9 | 2015-06-04T07:53:25Z | [
"python",
"sqlalchemy"
] | I'm trying to connect to a database on my local machine.
```
import sqlalchemy
engine = sqlalchemy.create_engine('mssql+pyodbc://localhost\\SQLEXPRESS/NCM')
```
It fails with the following error:
```
DBAPIError: (pyodbc.Error) ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')
```
And also outputs this warning:
```
C:\Miniconda\envs\bees\lib\site-packages\sqlalchemy\connectors\pyodbc.py:82: SAWarning: No driver name specified; this is expected by PyODBC when using DSN-less connections
"No driver name specified; "
```
Where should I be looking to diagnose the problem? | As shown in [this link](http://docs.sqlalchemy.org/en/latest/dialects/mssql.html#hostname-connections), as of version 1.0.0 you need to specify the driver explicitly for hostname connections.
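For example, a hedged connection URL that names the driver explicitly (the exact driver string below is illustrative and depends on what is installed on your machine):

```python
# Substitute the ODBC driver you actually have installed;
# spaces in the driver name are encoded as '+' in the URL.
url = r"mssql+pyodbc://localhost\SQLEXPRESS/NCM?driver=SQL+Server+Native+Client+11.0"
# engine = sqlalchemy.create_engine(url, echo=False)
```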
```
Changed in version 1.0.0: Hostname-based PyODBC connections now require the
SQL Server driver name specified explicitly. SQLAlchemy cannot choose an
optimal default here as it varies based on platform and installed drivers.
``` |
Does __ne__ use an overridden __eq__? | 30,643,236 | 6 | 2015-06-04T11:57:17Z | 30,643,308 | 12 | 2015-06-04T12:00:49Z | [
"python",
"comparison",
"equality"
] | Suppose I have the following program:
```
class A(object):
def __eq__(self, other):
return True
a0 = A()
a1 = A()
print a0 != a1
```
If you run it with Python the output is `True`. My question is
1. the `__ne__` method is not implemented, does Python fall back on a default one?
2. if Python falls back on a default method to determine whether two objects are equal or not, shouldn't it call `__eq__` and then negate the result? | From [the docs](https://docs.python.org/2/reference/datamodel.html#object.__ne__):
> There are no implied relationships among the comparison operators. The truth of `x==y` does not imply that `x!=y` is false. Accordingly, when defining `__eq__()`, one should also define `__ne__()` so that the operators will behave as expected. |
Write double (triple) sum as inner product? | 30,644,968 | 7 | 2015-06-04T13:21:44Z | 30,645,430 | 8 | 2015-06-04T13:42:17Z | [
"python",
"arrays",
"performance",
"numpy",
"linear-algebra"
] | Since my `np.dot` is accelerated by OpenBlas and Openmpi I am wondering if there was a possibility to write the double sum
```
for i in range(N):
for j in range(N):
B[k,l] += A[i,j,k,l] * X[i,j]
```
as an inner product. Right at the moment I am using
```
B = np.einsum("ijkl,ij->kl",A,X)
```
but unfortunately it is quite slow and only uses one processor.
Any ideas?
Edit:
I benchmarked the answers given until now with a simple example, seems like they are all in the same order of magnitude:
```
A = np.random.random([200,200,100,100])
X = np.random.random([200,200])
def B1():
    return np.einsum("ijkl,ij->kl",A,X)
def B2():
return np.tensordot(A, X, [[0,1], [0, 1]])
def B3():
shp = A.shape
    return np.dot(X.ravel(),A.reshape(shp[0]*shp[1],-1)).reshape(shp[2],shp[3])
%timeit B1()
%timeit B2()
%timeit B3()
1 loops, best of 3: 300 ms per loop
10 loops, best of 3: 149 ms per loop
10 loops, best of 3: 150 ms per loop
```
Concluding from these results I would choose np.einsum, since its syntax is still the most readable and the improvement with the other two is only a factor of 2. I guess the next step would be to externalize the code into C or Fortran. | You can use `np.tensordot()`:
```
np.tensordot(A, X, [[0,1], [0, 1]])
```
which does use multiple cores.
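As a sanity check on the index bookkeeping, here is a hedged pure-Python (no NumPy) version showing that the double sum equals a flat dot product once `A` is viewed as an `(i*j) x (k*l)` matrix — the same trick as the `X.ravel()`/`A.reshape` variant above:

```python
import random

I, J, K, L = 2, 3, 2, 2
A = [[[[random.random() for _ in range(L)] for _ in range(K)]
      for _ in range(J)] for _ in range(I)]
X = [[random.random() for _ in range(J)] for _ in range(I)]

# Direct double sum: B[k][l] = sum_ij A[i][j][k][l] * X[i][j]
B = [[sum(A[i][j][k][l] * X[i][j] for i in range(I) for j in range(J))
      for l in range(L)] for k in range(K)]

# Flattened form, mirroring np.dot(X.ravel(), A.reshape(I*J, -1))
x_flat = [X[i][j] for i in range(I) for j in range(J)]
a_flat = [[A[i][j][k][l] for k in range(K) for l in range(L)]
          for i in range(I) for j in range(J)]
B_flat = [sum(x_flat[r] * a_flat[r][c] for r in range(I * J))
          for c in range(K * L)]

all(abs(B[k][l] - B_flat[k * L + l]) < 1e-12
    for k in range(K) for l in range(L))   # True
```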
---
EDIT: it is interesting to see how `np.einsum` and `np.tensordot` scale when increasing the size of the input arrays:
```
In [18]: for n in range(1, 31):
....: A = np.random.rand(n, n+1, n+2, n+3)
....: X = np.random.rand(n, n+1)
....: print(n)
....: %timeit np.einsum('ijkl,ij->kl', A, X)
....: %timeit np.tensordot(A, X, [[0, 1], [0, 1]])
....:
1
1000000 loops, best of 3: 1.55 µs per loop
100000 loops, best of 3: 8.36 µs per loop
...
11
100000 loops, best of 3: 15.9 µs per loop
100000 loops, best of 3: 17.2 µs per loop
12
10000 loops, best of 3: 23.6 µs per loop
100000 loops, best of 3: 18.9 µs per loop
...
21
10000 loops, best of 3: 153 µs per loop
10000 loops, best of 3: 44.4 µs per loop
```
and it becomes clear the advantage of using `tensordot` for larger arrays. |
Error Installing pymssql on Mac OS X Yosemite | 30,646,171 | 6 | 2015-06-04T14:10:24Z | 30,654,579 | 12 | 2015-06-04T21:34:01Z | [
"python",
"sql-azure",
"osx-yosemite",
"pymssql"
] | I receive the following error when installing pymssql on OS X Yosemite 10.10.3 - has anyone gotten around the following error? I am using FreeTDS (v0.91.112) version 7.1 and Python 2.7.6 - the tsql utility connects to a SQL Database with no issue.
`sudo pip install pymssql`
**Error**:
```
Command "/usr/bin/python -c "import setuptools, tokenize;
__file__='/private/tmp/pip-build-T5Usla/pymssql/setup.py';
exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n',
'\n'), __file__, 'exec'))" install --record /tmp/pip-uZGqK4-record/install-
record.txt --single-version-externally-managed --compile" failed with error
code 1 in /private/tmp/pip-build-T5Usla/pymssql
``` | You should be able to install pymssql on your Mac for Azure SQL DB by following these three steps.
Step 1: Install Homebrew
Go to your terminal and run the following command :
```
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
Step 2 : Install FreeTDS.
From the terminal run the following command :
```
brew install freetds
```
This should install freetds on your system
Step 3: Install pymssql. From the terminal run the following command
```
sudo -H pip install pymssql
```
Now you should be able to use pymssql to connect to Azure SQL DB and SQL Server. |
How do I run a google appengine docker image on a compute engine instance? | 30,646,410 | 2 | 2015-06-04T14:20:38Z | 30,715,496 | 8 | 2015-06-08T17:29:02Z | [
"python",
"google-app-engine",
"google-compute-engine"
] | I have the following docker file:
```
FROM gcr.io/google_appengine/python-compat
MAINTAINER [email protected]
RUN apt-get update
RUN apt-get -y upgrade
ADD ./app/ /app
ADD app.yaml /app/
RUN mkdir -p /var/log/app_engine
```
I create the log directory because otherwise I get the following error
```
sudo docker run gcr.io/MY-PROJECT/ae-image
Traceback (most recent call last):
File "/home/vmagent/python_vm_runtime/vmboot.py", line 133, in <module>
run_file(__file__, globals())
File "/home/vmagent/python_vm_runtime/vmboot.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/home/vmagent/python_vm_runtime/google/appengine/tools/vmboot.py", line 32, in <module>
initialize.InitializeFileLogging()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/initialize.py", line 92, in InitializeFileLogging
APP_LOG_FILE, maxBytes=MAX_LOG_BYTES, backupCount=LOG_BACKUP_COUNT)
File "/usr/lib/python2.7/logging/handlers.py", line 117, in __init__
BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
File "/usr/lib/python2.7/logging/handlers.py", line 64, in __init__
logging.FileHandler.__init__(self, filename, mode, encoding, delay)
File "/usr/lib/python2.7/logging/__init__.py", line 901, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib/python2.7/logging/__init__.py", line 924, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/var/log/app_engine/app.log.json'
```
But now I get the following error:
```
sudo docker run -ti gcr.io/MY-PROJECT/ae-image /app/app.yaml
LOG 1 1433422040537094 Using module_yaml_path from argv: /app/app.yaml
Traceback (most recent call last):
File "/home/vmagent/python_vm_runtime/vmboot.py", line 133, in <module>
run_file(__file__, globals())
File "/home/vmagent/python_vm_runtime/vmboot.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/home/vmagent/python_vm_runtime/google/appengine/tools/vmboot.py", line 65, in <module>
main()
File "/home/vmagent/python_vm_runtime/google/appengine/tools/vmboot.py", line 61, in main
vmservice.CreateAndRunService(module_yaml_path)
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmservice.py", line 154, in CreateAndRunService
service.CreateServer()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmservice.py", line 126, in CreateServer
appengine_config = vmconfig.BuildVmAppengineEnvConfig()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmconfig.py", line 60, in BuildVmAppengineEnvConfig
_MetadataGetter('gae_backend_instance'))
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmconfig.py", line 37, in _MetadataGetter
return urllib2.urlopen(req).read()
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
```
What kind of configuration or actions can I do so that I can run this docker image inside of docker like:
```
sudo docker run gcr.io/MY-PROJECT/ae-image
```
Another option for me would be to run AE in python but I don't know how I can tell it to use the production datastore (GCD). I suppose that that is the problem I'm having with the docker implementation above. | Running the application container standalone is possible, but there are a few nuances.
**Volume Bindings**
Firstly, let's look at the log directory part, where you create /var/log/appengine. When `gcloud preview app run` acts on the container, it actually runs it with volume bindings, that map /var/log/appengine within the container to /var/log/appengine/(~additional paths here~) on the host machine. If you're using boot2docker, you can run boot2docker ssh and see those server logs there.
**Environment Variables**
Secondly, let's piece apart why your application is still not running. When the application container starts, it goes through a few steps to start the app server. The entrypoint for this whole process is `/home/vmagent/python_vm_runtime/vmboot.py`, a script in the container that is run upon startup.
The problem is that this script pulls information about your application from several environment variables. If you do `gcloud preview app run` and start the container that way, you can start a shell in it like this:
```
$ gcloud preview app run <your_application.yaml>
$ boot2docker ssh
$ docker ps
<output omitted - use it to find your container id>
$ docker exec -ti <your_container_id> /bin/sh
$ env
API_PORT=10000
HOSTNAME=b758e92cb8d6
HOME=/root
OLDPWD=/home/vmagent/python_vm_runtime/google/appengine/ext
GAE_MODULE_INSTANCE=0
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBIAN_FRONTEND=noninteractive
MODULE_YAML_PATH=app.yaml
GAE_PARTITION=dev
API_HOST=192.168.42.1
PWD=/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime
GAE_LONG_APP_ID=temp
```
It turns out that you're specifically missing the environment variable `GAE_MODULE_INSTANCE=0`. There's a line in vmconfig.py that says:
```
instance = (os.environ.get('GAE_MODULE_INSTANCE') or
_MetadataGetter('gae_backend_instance'))
```
In development mode, the script needs to work off the environment variable, not the \_MetadataGetter. Ultimately, the application container will need all of these environment variables, or it will continue to break in other places.
So, you can get your app running by setting all of those environment variables when you do docker run. You can do this with -e flag. You'll need to set quite a few environment variables, so I suggest you write a shell script to do it :-) |
Remove the new line "\n" from base64 encoded strings in Python3? | 30,647,219 | 7 | 2015-06-04T14:52:53Z | 32,243,566 | 17 | 2015-08-27T07:49:36Z | [
"python",
"python-3.x",
"base64"
] | I'm trying to make a HTTPS connection in Python3 and when I try to encode my username and password the `base64` `encodebytes` method returns the encoded value with a new line character at the end "\n" and because of this I'm getting an error when I try to connect.
Is there a way to tell the `base64` library not to append a new line character when encoding or what is the best way to remove this new line character? I tried using the `replace` method but I get the following error:
```
Traceback (most recent call last):
File "data_consumer.py", line 33, in <module>
auth_base64 = auth_base64.replace('\n', '')
TypeError: expected bytes, bytearray or buffer compatible object
```
My code:
```
auth = b'[email protected]:passWORD'
auth_base64 = base64.encodebytes(auth)
auth_base64 = auth_base64.replace('\n', '')
```
Any ideas? Thanks | Instead of `encodestring`, consider using `b64encode`. The latter does not add `\n` characters. e.g.
```
In [11]: auth = b'[email protected]:passWORD'
In [12]: base64.encodestring(auth)
Out[12]: b'dXNlcm5hbWVAZG9tYWluLmNvbTpwYXNzV09SRA==\n'
In [13]: base64.b64encode(auth)
Out[13]: b'dXNlcm5hbWVAZG9tYWluLmNvbTpwYXNzV09SRA=='
```
It produces an identical encoded string, just without the trailing `\n`. |
Print a variable selected by a random number | 30,651,487 | 5 | 2015-06-04T18:31:15Z | 30,651,522 | 14 | 2015-06-04T18:33:38Z | [
"python",
"variables",
"python-3.x",
"random",
"introspection"
] | I have a list of names, and I would like my program to randomly select one of those names. I tried using the following:
```
import random
def main():
Arkansas = 1
Manchuria = 2
Bengal = "3"
Baja_California = 4
Tibet = 5
Indonesia = 6
Cascade_Range = 7
Hudson_Bay = 8
High_Plains = 9
map = random.randrange(1, 10)
print(map)
main()
```
I also tried making each of the numbers a string, and using the `eval()` function for `randrange()`, but none of this worked. | Don't assign numbers OR strings. Use a list.
```
choices = ['Arkansas', 'Manchuria', 'Bengal', 'Baja California'] # etc.
```
Then take a `random.choice`
```
random_choice = random.choice(choices)
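# A hedged variant (names illustrative, not from the original answer):
# keep (name, value) pairs in a dict if each region still needs its
# number, then pick a pair at random.
import random
regions = {'Arkansas': 1, 'Manchuria': 2, 'Bengal': 3}
name, value = random.choice(list(regions.items()))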
``` |
Is there a better way to check for vowels in the first position of a word? | 30,652,692 | 7 | 2015-06-04T19:38:20Z | 30,652,736 | 15 | 2015-06-04T19:40:51Z | [
"python",
"string",
"python-2.7"
] | I'm trying to check for a vowel as the first character of a word. For my code I currently have this:
```
if first == 'a' or first == 'e' or first == 'i' or first == 'o' or first == 'u':
```
I was wondering, is there a much better way to do this check, or is this the best and most efficient way? | You can try it like this, using the `in` operator:
```
if first.lower() in 'aeiou':
```
or, better, like this:
```
if first.lower() in ('a', 'e', 'i', 'o', 'u'):
``` |
Is there a better way to check for vowels in the first position of a word? | 30,652,692 | 7 | 2015-06-04T19:38:20Z | 30,652,800 | 7 | 2015-06-04T19:44:38Z | [
"python",
"string",
"python-2.7"
] | I'm trying to check for a vowel as the first character of a word. For my code I currently have this:
```
if first == 'a' or first == 'e' or first == 'i' or first == 'o' or first == 'u':
```
I was wondering is there a much better way to do this check or is this the best and most efficient way? | Better create a set of vowels, like this
```
>>> vowels = set('aeiouAEIOU')
>>> vowels
set(['a', 'A', 'e', 'i', 'o', 'I', 'u', 'O', 'E', 'U'])
```
and then check if `first` is one of them like this
```
>>> if first in vowels:
...
```
---
**Note:** The problem with
```
if first in 'aeiouAEIOU':
```
approach is, if your input is wrong, for example, if `first` is `'ae'`, then the test will fail.
```
>>> first = 'ae'
>>> first in 'aeiouAEIOU'
True
```
But `ae` is clearly not a vowel.
---
**Improvement:**
If it is just a one-time job, where you don't care to create a set beforehand, then you can use `if first in 'aeiouAEIOU':` itself, but check the length of `first` first, like this
```
>>> first = 'ae'
>>> len(first) == 1 and first in 'aeiouAEIOU'
False
``` |
What does it mean when you assign int to a variable in Python? | 30,658,964 | 3 | 2015-06-05T05:30:41Z | 30,658,991 | 7 | 2015-06-05T05:32:24Z | [
"python",
"variables",
"int",
"type-conversion"
] | i.e. `x = int`
I understand that this will make `x` an integer if it is not already one, but I'd like to understand the process behind this. In particular, I'd like to know what `int` is (as opposed to `int()`). I know that `int()` is a function, but I'm not sure what `int` is. Links to documentation about `int` would be helpful since I couldn't find any. | `x = int` will not make `x` into an integer. `int` is the integer type. Doing `x = int` will set `x` to the value of the `int` type. Loosely speaking, `x` will become an "alias" for the integer type.
If you *call* the int type on something, like `int('2')`, it will convert what you give into an integer, if it can. If you assign the result of that call to a variable, it will set that variable to the integer value you got from calling `int`. So setting `x = int('2')` will set `x` to 2.
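For instance, a quick sketch to illustrate the difference:

```python
x = int           # x is now just another name for the built-in int type
print(x is int)   # True

y = x('2')        # calling the alias converts, exactly like int('2')
print(y)          # 2
```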
You should read [the Python tutorial](https://docs.python.org/tutorial) to understand how types, variables, and calling work in Python. |
How to get the vertices from an edge using igraph in python? | 30,660,808 | 4 | 2015-06-05T07:30:55Z | 30,674,221 | 9 | 2015-06-05T19:20:28Z | [
"python",
"igraph",
"vertex",
"edges"
] | I am looping through the edges of a graph with:
```
for es in graph.es:
....
# v = []
# v = es.vertices()?
...
```
What method can I use to get the source and the target vertices for each edge? | These are the very basic functionalities of igraph, described [here](http://igraph.org/python/doc/igraph.Graph-class.html) thoroughly.
If you iterate the `<EdgeSeq>` object (`graph.es`), you will go through all `<Edge>` objects (here `edge`). `<Edge>` has properties `source` and `target`. These are vertex ids, simply integers. You can get the corresponding `<Vertex>` object by `graph.vs[]`:
```
for edge in graph.es:
source_vertex_id = edge.source
target_vertex_id = edge.target
source_vertex = graph.vs[source_vertex_id]
target_vertex = graph.vs[target_vertex_id]
# using get_eid() you can do the opposite:
same_edge_id = graph.get_eid(source_vertex_id, target_vertex_id)
same_edge = graph.es[same_edge_id]
# by .index you get the id from the Vertex or Edge object:
source_vertex.index == source_vertex_id
# True
edge.index == same_edge_id
# True
```
Be aware if you have directed graph, otherwise source and target are simply two equivalent endpoints. With directed graphs you may use `error = False` with `get_eid()`, which then returns `-1` in case there is no edge in the given direction between the vertices. |
Extract list of Persons and Organizations using Stanford NER Tagger in NLTK | 30,664,677 | 10 | 2015-06-05T10:49:58Z | 30,665,014 | 11 | 2015-06-05T11:07:51Z | [
"python",
"nltk",
"stanford-nlp",
"named-entity-recognition"
] | I am trying to extract list of persons and organizations using Stanford Named Entity Recognizer (NER) in Python NLTK.
When I run:
```
from nltk.tag.stanford import NERTagger
st = NERTagger('/usr/share/stanford-ner/classifiers/all.3class.distsim.crf.ser.gz',
'/usr/share/stanford-ner/stanford-ner.jar')
r=st.tag('Rami Eid is studying at Stony Brook University in NY'.split())
print(r)
```
the output is:
```
[('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'),
('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'),
('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'LOCATION')]
```
what I want is to extract from this list all persons and organizations in this form:
```
Rami Eid
Stony Brook University
```
I tried to loop through the list of tuples:
```
for x,y in i:
if y == 'ORGANIZATION':
print(x)
```
But this code only prints every entity one per line:
```
Stony
Brook
University
```
With real data there can be more than one organizations, persons in one sentence, how can I put the limits between different entities? | Thanks to the [link](http://stackoverflow.com/questions/13765349/multi-term-named-entities-in-stanford-named-entity-recognizer) discovered by @Vaulstein, it is clear that the trained Stanford tagger, as distributed (at least in 2012) **does not chunk named entities**. From [the accepted answer](http://stackoverflow.com/a/13781588/699305):
> Many NER systems use more complex labels such as IOB labels, where codes like B-PERS indicates where a person entity starts. The CRFClassifier class and feature factories support such labels, **but they're not used in the models we currently distribute (as of 2012)**
You have the following options:
1. Collect runs of identically tagged words; e.g., all adjacent words tagged `PERSON` should be taken together as one named entity. That's very easy, but of course it will sometimes combine different named entities. (E.g. `New York, Boston [and] Baltimore` is about three cities, not one.) **Edit:** This is what Alvas's code does in the accepted answer. See below for a simpler implementation.
2. Use `nltk.ne_recognize()`. It doesn't use the Stanford recognizer but it does chunk entities. (It's a wrapper around an IOB named entity tagger).
3. Figure out a way to do your own chunking on top of the results that the Stanford tagger returns.
4. Train your own IOB named entity chunker (using the Stanford tools, or the NLTK's framework) for the domain you are interested in. If you have the time and resources to do this right, it will probably give you the best results.
**Edit:** If all you want is to pull out runs of continuous named entities (option 1 above), you should use `itertools.groupby`:
```
from itertools import groupby
for tag, chunk in groupby(netagged_words, lambda x:x[1]):
if tag != "O":
print("%-12s"%tag, " ".join(w for w, t in chunk))
```
If `netagged_words` is the list of `(word, type)` tuples in your question, this produces:
```
PERSON Rami Eid
ORGANIZATION Stony Brook University
LOCATION NY
```
Note again that if two named entities of the same type occur right next to each other, this approach will combine them. E.g. `New York, Boston [and] Baltimore` is about three cities, not one. |
Extract list of Persons and Organizations using Stanford NER Tagger in NLTK | 30,664,677 | 10 | 2015-06-05T10:49:58Z | 30,666,949 | 14 | 2015-06-05T12:47:16Z | [
"python",
"nltk",
"stanford-nlp",
"named-entity-recognition"
] | I am trying to extract list of persons and organizations using Stanford Named Entity Recognizer (NER) in Python NLTK.
When I run:
```
from nltk.tag.stanford import NERTagger
st = NERTagger('/usr/share/stanford-ner/classifiers/all.3class.distsim.crf.ser.gz',
'/usr/share/stanford-ner/stanford-ner.jar')
r=st.tag('Rami Eid is studying at Stony Brook University in NY'.split())
print(r)
```
the output is:
```
[('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'),
('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'),
('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'LOCATION')]
```
what I want is to extract from this list all persons and organizations in this form:
```
Rami Eid
Stony Brook University
```
I tried to loop through the list of tuples:
```
for x,y in i:
if y == 'ORGANIZATION':
print(x)
```
But this code only prints every entity one per line:
```
Stony
Brook
University
```
With real data there can be more than one organizations, persons in one sentence, how can I put the limits between different entities? | IOB/BIO means **I**nside, **O**utside, **B**eginning (IOB), or sometimes aka **B**eginning, **I**nside, **O**utside (BIO)
The Stanford NE tagger returns IOB/BIO style tags, e.g.
```
[('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'),
('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'),
('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'LOCATION')]
```
The `('Rami', 'PERSON'), ('Eid', 'PERSON')` are tagged as PERSON and "Rami" is the Beginning of a NE chunk and "Eid" is the Inside. And then you see that any non-NE will be tagged with "O".
The idea to extract continuous NE chunk is very similar to [Named Entity Recognition with Regular Expression: NLTK](http://stackoverflow.com/questions/24398536/named-entity-recognition-with-regular-expression-nltk) but because the Stanford NE chunker API doesn't return a nice tree to parse, you have to do this:
```
def get_continuous_chunks(tagged_sent):
continuous_chunk = []
current_chunk = []
for token, tag in tagged_sent:
if tag != "O":
current_chunk.append((token, tag))
else:
if current_chunk: # if the current chunk is not empty
continuous_chunk.append(current_chunk)
current_chunk = []
# Flush the final current_chunk into the continuous_chunk, if any.
if current_chunk:
continuous_chunk.append(current_chunk)
return continuous_chunk
ne_tagged_sent = [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'LOCATION')]
named_entities = get_continuous_chunks(ne_tagged_sent)
named_entities_str = [" ".join([token for token, tag in ne]) for ne in named_entities]
named_entities_str_tag = [(" ".join([token for token, tag in ne]), ne[0][1]) for ne in named_entities]
print named_entities
print
print named_entities_str
print
print named_entities_str_tag
print
```
[out]:
```
[[('Rami', 'PERSON'), ('Eid', 'PERSON')], [('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION')], [('NY', 'LOCATION')]]
['Rami Eid', 'Stony Brook University', 'NY']
[('Rami Eid', 'PERSON'), ('Stony Brook University', 'ORGANIZATION'), ('NY', 'LOCATION')]
```
But please note the limitation that if two NEs are continuous, then it might be wrong; nevertheless, I still can't think of any example where two NEs are continuous without any "O" between them.
---
As @alexis suggested, it's better to convert the stanford NE output into NLTK trees:
```
from nltk import pos_tag
from nltk.chunk import conlltags2tree
from nltk.tree import Tree
def stanfordNE2BIO(tagged_sent):
bio_tagged_sent = []
prev_tag = "O"
for token, tag in tagged_sent:
if tag == "O": #O
bio_tagged_sent.append((token, tag))
prev_tag = tag
continue
if tag != "O" and prev_tag == "O": # Begin NE
bio_tagged_sent.append((token, "B-"+tag))
prev_tag = tag
elif prev_tag != "O" and prev_tag == tag: # Inside NE
bio_tagged_sent.append((token, "I-"+tag))
prev_tag = tag
elif prev_tag != "O" and prev_tag != tag: # Adjacent NE
bio_tagged_sent.append((token, "B-"+tag))
prev_tag = tag
return bio_tagged_sent
def stanfordNE2tree(ne_tagged_sent):
bio_tagged_sent = stanfordNE2BIO(ne_tagged_sent)
sent_tokens, sent_ne_tags = zip(*bio_tagged_sent)
sent_pos_tags = [pos for token, pos in pos_tag(sent_tokens)]
sent_conlltags = [(token, pos, ne) for token, pos, ne in zip(sent_tokens, sent_pos_tags, sent_ne_tags)]
ne_tree = conlltags2tree(sent_conlltags)
return ne_tree
ne_tagged_sent = [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'),
('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'),
('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'),
('in', 'O'), ('NY', 'LOCATION')]
ne_tree = stanfordNE2tree(ne_tagged_sent)
print ne_tree
```
[out]:
```
(S
(PERSON Rami/NNP Eid/NNP)
is/VBZ
studying/VBG
at/IN
(ORGANIZATION Stony/NNP Brook/NNP University/NNP)
in/IN
(LOCATION NY/NNP))
```
Then:
```
ne_in_sent = []
for subtree in ne_tree:
if type(subtree) == Tree: # If subtree is a noun chunk, i.e. NE != "O"
ne_label = subtree.label()
ne_string = " ".join([token for token, pos in subtree.leaves()])
ne_in_sent.append((ne_string, ne_label))
print ne_in_sent
```
[out]:
```
[('Rami Eid', 'PERSON'), ('Stony Brook University', 'ORGANIZATION'), ('NY', 'LOCATION')]
``` |
Send JSON to Flask using requests | 30,673,079 | 2 | 2015-06-05T18:07:31Z | 30,673,376 | 10 | 2015-06-05T18:25:56Z | [
"python",
"flask"
] | I am trying to send some JSON data to a Flask app using the requests library. I expect to get `application/json` back from the server. This works fine when I use Postman, but when I use requests I get `text/html` back instead.
```
import requests
server_ip = 'server_ip:port/events'
headers = {'Content-Type': 'application/json'}
event_data = {'data_1': 75, 'data_2': -1, 'data_3': 47, 'data_4': 'SBY'}
server_return = requests.post(server_ip, headers=headers, data=event_data)
print server_return.headers
{'date': 'Fri, 05 Jun 2015 17:57:43 GMT', 'content-length': '192', 'content-type': 'text/html', 'server': 'Werkzeug/0.10.4 Python/2.7.3'}
```
Why isn't Flask seeing the JSON data and responding correctly? | You are not sending JSON data currently. You need to set the `json` argument, not `data`. It's unnecessary to set `content-type` yourself in this case.
```
r = requests.post(url, json=event_data)
```
The `text/html` header you are seeing is the *response's* content type. Flask seems to be sending some HTML back to you, which seems normal. If you expect `application/json` back, perhaps this is an error page being returned since you weren't sending the JSON data correctly.
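You can see the difference without a server by inspecting the prepared request (a sketch; the host here is a placeholder and no connection is made):

```python
import json
import requests

event_data = {'data_1': 75, 'data_2': -1, 'data_3': 47, 'data_4': 'SBY'}

# data= with a dict sends a form-encoded body, which is what the question's code did
form = requests.Request('POST', 'http://example.invalid/events',
                        data=event_data).prepare()
print(form.headers['Content-Type'])  # application/x-www-form-urlencoded

# json= serializes the dict and sets the JSON content type for you
req = requests.Request('POST', 'http://example.invalid/events',
                       json=event_data).prepare()
print(req.headers['Content-Type'])   # application/json
print(json.loads(req.body) == event_data)  # True
```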
You can read json data in Flask by using `request.json`.
```
from flask import request
@app.route('/events', methods=['POST'])
def events():
event_data = request.json
``` |
spark in yarn-cluster 'sc' not defined | 30,674,467 | 2 | 2015-06-05T19:36:29Z | 30,674,801 | 7 | 2015-06-05T19:59:05Z | [
"python",
"hadoop",
"apache-spark",
"apache-spark-sql"
] | I am using spark 1.3.1.
Do I have to declare sc when spark runs in yarn-cluster mode? I have no problem running the same python program in the spark python shell.
This is how I submit the job :
```
/bin/spark-submit --master yarn-cluster test.py --conf conf/spark-defaults.conf
```
where in spark-defaults I did declare where the `spark.yarn.jar` is, also check permission on where `spark.yarn.jar` is and `/user/admin`, the spark user, to make there is read-write-execute for all.
In my `test.py` program, I have `from pyspark.sql import SQLContext` and the first line is
```
sqlctx=SQLContext(sc)
```
and the error is
```
NameError: name 'sc' is not defined
```
on that line.
Any idea? | `sc` is a helper value created in the `spark-shell`, but is not automatically created with `spark-submit`. You must instantiate your own `SparkContext` and use that
```
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName(appName)
sc = SparkContext(conf=conf)
``` |
Pivot Tables or Group By for Pandas? | 30,679,467 | 3 | 2015-06-06T05:56:25Z | 30,679,543 | 7 | 2015-06-06T06:05:49Z | [
"python",
"pandas",
"count",
"group-by",
"pivot-table"
] | I have a hopefully straightforward question that has been giving me a lot of difficulty for the last 3 hours. It should be easy.
Here's the challenge.
I have a pandas dataframe:
```
+--------------------------+
| Col 'X' Col 'Y' |
+--------------------------+
| class 1 cat 1 |
| class 2 cat 1 |
| class 3 cat 2 |
| class 2 cat 3 |
+--------------------------+
```
What I am looking to transform the dataframe into:
```
+------------------------------------------+
| cat 1 cat 2 cat 3 |
+------------------------------------------+
| class 1 1 0 0 |
| class 2 1 0 1 |
| class 3 0 1 0 |
+------------------------------------------+
```
Where the values are value counts. Anybody have any insight? Thanks! | You could use [`pd.crosstab()`](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.crosstab.html)
```
In [27]: df
Out[27]:
Col X Col Y
0 class 1 cat 1
1 class 2 cat 1
2 class 3 cat 2
3 class 2 cat 3
In [28]: pd.crosstab(df['Col X'], df['Col Y'])
Out[28]:
Col Y cat 1 cat 2 cat 3
Col X
class 1 1 0 0
class 2 1 0 1
class 3 0 1 0
```
Alternatively, you can [`groupby`](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.groupby.html) on `'Col X','Col Y'` and `unstack` over `Col Y`, then fill the `NaN`s with zeros.
```
In [29]: df.groupby(['Col X','Col Y']).size().unstack('Col Y').fillna(0)
Out[29]:
Col Y cat 1 cat 2 cat 3
Col X
class 1 1 0 0
class 2 1 0 1
class 3 0 1 0
```
Or use [`pd.pivot_table()`](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.pivot_table.html) with `index=Col X`, `columns=Col Y`
```
In [30]: pd.pivot_table(df, index=['Col X'], columns=['Col Y'], aggfunc=len, fill_value=0)
Out[30]:
Col Y cat 1 cat 2 cat 3
Col X
class 1 1 0 0
class 2 1 0 1
class 3 0 1 0
``` |
How to reshape a networkx graph in Python? | 30,689,391 | 21 | 2015-06-07T02:09:00Z | 30,889,340 | 12 | 2015-06-17T10:48:13Z | [
"python",
"matplotlib",
"nodes",
"shape",
"networkx"
] | So I created a really naive (probably inefficient) way of generating hasse diagrams.
***Question:***
I have 4 dimensions... **`p`** **`q`** **`r`** **`s`** .
I want to display it uniformly (tesseract) but I have no idea how to reshape it. **How can one reshape a networkx graph in Python?**
I've seen some examples of people using `spring_layout()` and `draw_circular()` but it doesn't shape in the way I'm looking for because they aren't uniform.
**Is there a way to reshape my graph and make it uniform?** (i.e. reshape my hasse diagram into a tesseract shape (preferably using `nx.draw()` )
Here's what mine currently look like:

Here's my code to generate the hasse diagram of N dimensions
```
#!/usr/bin/python
import networkx as nx
import matplotlib.pyplot as plt
import itertools
H = nx.DiGraph()
axis_labels = ['p','q','r','s']
D_len_node = {}
#Iterate through axis labels
for i in xrange(0,len(axis_labels)+1):
#Create edge from empty set
if i == 0:
for ax in axis_labels:
H.add_edge('O',ax)
else:
#Create all non-overlapping combinations
combinations = [c for c in itertools.combinations(axis_labels,i)]
D_len_node[i] = combinations
#Create edge from len(i-1) to len(i) #eg. pq >>> pqr, pq >>> pqs
if i > 1:
for node in D_len_node[i]:
for p_node in D_len_node[i-1]:
#if set.intersection(set(p_node),set(node)): Oops
if all(p in node for p in p_node) == True: #should be this!
H.add_edge(''.join(p_node),''.join(node))
#Show Plot
nx.draw(H,with_labels = True,node_shape = 'o')
plt.show()
```
I want to reshape it like this:

If anyone knows of an easier way to make Hasse Diagrams, please **share some wisdom** but that's not the main aim of this post. | This is a pragmatic, rather than purely mathematical answer.
I think you have two issues - one with layout, the other with your network.
### 1. Network
You have too many edges in your network for it to represent the unit tesseract. ***Caveat*** I'm not an expert on the maths here - just came to this from the plotting angle (matplotlib tag). Please explain if I'm wrong.
Your desired projection and, for instance, the [wolfram mathworld](http://mathworld.wolfram.com/HasseDiagram.html) page for a Hasse diagram for n=4 have only 4 edges connecting each node, whereas you have 6 edges to the 2-bit and 7 edges to the 3-bit nodes. Your graph fully connects each "level", i.e. 4-D vectors with 0 `1` values connect to all vectors with 1 `1` value, which then connect to all vectors with 2 `1` values and so on. This is most obvious in the projection based on the Wikipedia answer (2nd image below).
### 2. Projection
I couldn't find a pre-written algorithm or library to automatically project the 4D tesseract onto a 2D plane, but I did find a couple of examples, [e.g. Wikipedia](https://en.wikipedia.org/w/index.php?title=Tesseract§ion=3#Projections_to_2_dimensions). From this, you can work out a co-ordinate set that would suit you and pass that into the `nx.draw()` call.
Here is an example - I've included two co-ordinate sets, one that looks like the projection you show above, one that matches [this one from wikipedia](https://en.wikipedia.org/wiki/File:Hypercubeorder_binary.svg).
```
import networkx as nx
import matplotlib.pyplot as plt
import itertools
H = nx.DiGraph()
axis_labels = ['p','q','r','s']
D_len_node = {}
#Iterate through axis labels
for i in xrange(0,len(axis_labels)+1):
#Create edge from empty set
if i == 0:
for ax in axis_labels:
H.add_edge('O',ax)
else:
#Create all non-overlapping combinations
combinations = [c for c in itertools.combinations(axis_labels,i)]
D_len_node[i] = combinations
#Create edge from len(i-1) to len(i) #eg. pq >>> pqr, pq >>> pqs
if i > 1:
for node in D_len_node[i]:
for p_node in D_len_node[i-1]:
if set.intersection(set(p_node),set(node)):
H.add_edge(''.join(p_node),''.join(node))
#This is manual two options to project tesseract onto 2D plane
# - many projections are available!!
wikipedia_projection_coords = [(0.5,0),(0.85,0.25),(0.625,0.25),(0.375,0.25),
(0.15,0.25),(1,0.5),(0.8,0.5),(0.6,0.5),
(0.4,0.5),(0.2,0.5),(0,0.5),(0.85,0.75),
(0.625,0.75),(0.375,0.75),(0.15,0.75),(0.5,1)]
#Build the "two cubes" type example projection co-ordinates
half_coords = [(0,0.15),(0,0.6),(0.3,0.15),(0.15,0),
(0.55,0.6),(0.3,0.6),(0.15,0.4),(0.55,1)]
#make the coords symmetric
example_projection_coords = half_coords + [(1-x,1-y) for (x,y) in half_coords][::-1]
print example_projection_coords
def powerset(s):
ch = itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s)+1))
return [''.join(t) for t in ch]
pos={}
for i,label in enumerate(powerset(axis_labels)):
if label == '':
label = 'O'
pos[label]= example_projection_coords[i]
#Show Plot
nx.draw(H,pos,with_labels = True,node_shape = 'o')
plt.show()
```
Note - unless you change what I've mentioned in 1. above, they still have your edge structure, so won't look exactly the same as the examples from the web. Here is what it looks like with your existing network generation code - you can see the extra edges if you compare it to your example (e.g. I don't think `pr` should be connected to `pqs`):
### 'Two cube' projection

### Wikimedia example projection

---
### Note
If you want to get into the maths of doing your own projections (and building up `pos` mathematically), you might look at [this research paper](http://www.researchgate.net/profile/Daniela_Velichova/publication/265003103_MODELLING_OF_ORTHOGONAL_PROJECTIONS/links/54328df00cf225bddcc7b9c4.pdf).
---
### EDIT:
Curiosity got the better of me and I had to search for a mathematical way to do this. I found [this blog](https://andrewharvey4.wordpress.com/2008/10/21/an-introduction-to-hypercubes/) - the main result of which is the projection matrix:

This led me to develop this function for projecting each label, taking the label containing 'p' to mean the point has value 1 on the 'p' axis, i.e. we are dealing with the unit tesseract. Thus:
```
def construct_projection(label):
r1 = r2 = 0.5
theta = math.pi / 6
phi = math.pi / 3
x = int( 'p' in label) + r1 * math.cos(theta) * int('r' in label) - r2 * math.cos(phi) * int('s' in label)
y = int( 'q' in label) + r1 * math.sin(theta) * int('r' in label) + r2 * math.sin(phi) * int('s' in label)
return (x,y)
```
Gives a nice projection into a regular 2D octagon with all points distinct.
This will run in the above program, just replace
```
pos[label] = example_projection_coords[i]
```
with
```
pos[label] = construct_projection(label)
```
This gives the result:

play with `r1`,`r2`,`theta` and `phi` to your heart's content :) |
A + B without arithmetic operators, Python vs C++ | 30,696,484 | 28 | 2015-06-07T17:33:10Z | 30,696,550 | 8 | 2015-06-07T17:40:43Z | [
"python",
"c++",
"algorithm",
"bit-manipulation"
] | I was trying to solve an old question:
> Write a function that add two [integer] numbers A and B. You should not use + or any arithmetic operators.
The best solution is like this, quoted from "[LintCode-A+B Problem](http://www.cnblogs.com/lishiblog/p/4194937.html)":
> For a + b in any base, we can treat the plus as two part: 1. a + b without carry; 2. the carry generated by a +b. The a+b then equals to part 1 plus part 2. If part1+part2 generates more carry, we can then repeat this procedure, until there is no carry.
I can understand this algorithm and everything seems good, so I tested it on [lintcode](http://www.lintcode.com/en/problem/a-b-problem/) with code pasted below.
```
class Solution:
"""
@param a: The first integer
@param b: The second integer
@return: The sum of a and b
"""
def aplusb(self, a, b):
while b != 0:
carry = a & b
a = a ^ b
b = carry << 1
return a
```
But surprisingly, it gave me `Time Limit Exceeded` error in test case `[100, -100]`. So I ran it locally and print a, b for every loop:
```
(-8, 8)
(-16, 16)
(-32, 32)
(-64, 64)
(-128, 128)
(-256, 256)
(-512, 512)
(-1024, 1024)
(-2048, 2048)
(-4096, 4096)
(-8192, 8192)
(-16384, 16384)
(-32768, 32768)
(-65536, 65536)
(-131072, 131072)
...
```
The calculation is correct, so I think this algorithm does not work for such input but when I wrote the same algorithm in C++, it just works:
```
class Solution {
public:
int aplusb(int a, int b) {
while (b!=0){
int carry = a & b;
a = a^b;
b = carry << 1;
}
return a;
}
};
```
I don't know what should be asked exactly, basically the questions are:
1. Why does C++ give the correct output `0` while Python doesn't?
2. If I use Python, how do I modify this algorithm to make it work? | The problem is negative numbers, or rather how they are represented. In Python, integers have arbitrary precision, while C++ ints are 32-bit or 64-bit. So in Python you have to handle negative numbers separately (e.g. by special-casing subtraction), or limit the number of bits by hand. |
A + B without arithmetic operators, Python vs C++ | 30,696,484 | 28 | 2015-06-07T17:33:10Z | 30,696,900 | 24 | 2015-06-07T18:18:07Z | [
"python",
"c++",
"algorithm",
"bit-manipulation"
] | I was trying to solve an old question:
> Write a function that add two [integer] numbers A and B. You should not use + or any arithmetic operators.
The best solution is like this, quoted from "[LintCode-A+B Problem](http://www.cnblogs.com/lishiblog/p/4194937.html)":
> For a + b in any base, we can treat the plus as two part: 1. a + b without carry; 2. the carry generated by a +b. The a+b then equals to part 1 plus part 2. If part1+part2 generates more carry, we can then repeat this procedure, until there is no carry.
I can understand this algorithm and everything seems good, so I tested it on [lintcode](http://www.lintcode.com/en/problem/a-b-problem/) with code pasted below.
```
class Solution:
"""
@param a: The first integer
@param b: The second integer
@return: The sum of a and b
"""
def aplusb(self, a, b):
while b != 0:
carry = a & b
a = a ^ b
b = carry << 1
return a
```
But surprisingly, it gave me `Time Limit Exceeded` error in test case `[100, -100]`. So I ran it locally and print a, b for every loop:
```
(-8, 8)
(-16, 16)
(-32, 32)
(-64, 64)
(-128, 128)
(-256, 256)
(-512, 512)
(-1024, 1024)
(-2048, 2048)
(-4096, 4096)
(-8192, 8192)
(-16384, 16384)
(-32768, 32768)
(-65536, 65536)
(-131072, 131072)
...
```
The calculation is correct, so I think this algorithm does not work for such input but when I wrote the same algorithm in C++, it just works:
```
class Solution {
public:
int aplusb(int a, int b) {
while (b!=0){
int carry = a & b;
a = a^b;
b = carry << 1;
}
return a;
}
};
```
I don't know what should be asked exactly, basically the questions are:
1. Why does C++ give the correct output `0` while Python doesn't?
2. If I use Python, how do I modify this algorithm to make it work? | The binary, 2's complement representation of `-4` is
```
...11100
```
Yes, I really do mean infinitely many `1`'s to the left; this is a binary repeating numeral. Technically, `4` is a repeating numeral too:
```
...00100
```
it's just repeating `0`'s to the left.
Your addition problem is
```
...11100
+ ...00100
--------------------
...00000
```
The operators `^`, `<<`, and `&` have no trouble computing with infinitely many binary digits, but the problem is that there are infinitely many carries, and you are computing them *one digit at a time*. This will never finish.
Thus, you have to recognize when this algorithm will get stuck in this situation and do something else to account for it.
---
You don't run into this problem in C/C++, because, for example, if `int` is 32-bits, then all of the digits except for the rightmost 31 digits get collapsed into a single bit, so it does the remaining carries all at once.
However, technically speaking, the meaning of left-shifting an `int` is defined in terms of its value as an integer, rather than as a bit pattern, so you are invoking *undefined behavior* if the two most significant bits of `carry` are ever different (because then `carry << 1` would produce a signed overflow). |
How can I serialize a numpy array while preserving matrix dimensions? | 30,698,004 | 10 | 2015-06-07T20:13:50Z | 30,699,208 | 7 | 2015-06-07T22:41:17Z | [
"python",
"json",
"numpy",
"apache-storm"
] | `numpy.array.tostring` doesn't seem to preserve information about matrix dimensions (see [this question](http://stackoverflow.com/q/30697769/1156707)), requiring the user to issue a call to `numpy.array.reshape`.
Is there a way to serialize a numpy array to JSON format while preserving this information?
**Note:** The arrays may contain ints, floats or bools. It's reasonable to expect a transposed array.
**Note 2:** this is being done with the intent of passing the numpy array through a Storm topology using streamparse, in case such information ends up being relevant. | [`pickle.dumps`](https://docs.python.org/2/library/pickle.html) or [`numpy.save`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html) encode all the information needed to reconstruct an arbitrary NumPy array, even in the presence of endianness issues, non-contiguous arrays, or weird tuple dtypes. Endianness issues are probably the most important; you don't want `array([1])` to suddenly become `array([16777216])` because you loaded your array on a big-endian machine. `pickle` is probably the more convenient option, though `save` has its own benefits, given in the [`npy` format rationale](https://github.com/numpy/numpy/blob/master/doc/neps/npy-format.rst).
The `pickle` option:
```
import pickle
a = # some NumPy array
serialized = pickle.dumps(a, protocol=0) # protocol 0 is printable ASCII
deserialized_a = pickle.loads(serialized)
```
`numpy.save` uses a binary format, and it needs to write to a file, but you can get around that with `StringIO`:
```
import StringIO
import json
import numpy

a = # any NumPy array
memfile = StringIO.StringIO()
numpy.save(memfile, a)
memfile.seek(0)
serialized = json.dumps(memfile.read().decode('latin-1'))
# latin-1 maps byte n to unicode code point n
```
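On Python 3 (where `StringIO.StringIO` is gone), a sketch of the same round trip uses `io.BytesIO` plus base64 to keep the binary `.npy` payload JSON-safe; the Python 2 deserialization counterpart continues below:

```python
import base64
import io
import json

import numpy

a = numpy.arange(6).reshape(2, 3)

# serialize: write the .npy bytes to an in-memory buffer,
# then base64-encode so the payload survives json.dumps
buf = io.BytesIO()
numpy.save(buf, a)
serialized = json.dumps(base64.b64encode(buf.getvalue()).decode('ascii'))

# deserialize: reverse each step
restored = numpy.load(io.BytesIO(base64.b64decode(json.loads(serialized))))
print(restored.shape)  # (2, 3)
```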
And to deserialize:
```
memfile = StringIO.StringIO()
memfile.write(json.loads(serialized).encode('latin-1'))
memfile.seek(0)
a = numpy.load(memfile)
``` |
How to use jinja2 as a templating engine in Django 1.8 | 30,701,631 | 11 | 2015-06-08T05:13:57Z | 30,715,508 | 13 | 2015-06-08T17:29:58Z | [
"python",
"django",
"jinja2"
] | I have been looking at how to use jinja2 in django 1.8, but there is no complete source for using django with jinja2. I was wondering if you guys knew the process for using jinja2 in django. I have looked through the official documentation and I have looked at the following question: [How to setup django 1.8 to use jinja2?](http://stackoverflow.com/questions/30288351/how-to-setup-django-1-8-to-use-jinja2)
but none of them clearly explain how to use jinja2 in a put-together manner. I just started using django and don't know all the lingo in the docs. I would really appreciate the help. | First you have to install `jinja2`:
```
$ pip install Jinja2
```
Then modify the `TEMPLATES` list in your **settings.py** to contain the `jinja2` `BACKEND`:
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.jinja2.Jinja2',
'DIRS': [os.path.join(BASE_DIR, 'templates/jinja2')],
'APP_DIRS': True,
'OPTIONS': {'environment': 'myproject.jinja2.environment',},
},
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
```
where `templates/jinja2` is the directory with your jinja2 template files.
And in a `myproject/jinja2.py` file, matching the `'environment'` option above:
```
from __future__ import absolute_import # Python 2 only
from jinja2 import Environment
from django.contrib.staticfiles.storage import staticfiles_storage
from django.core.urlresolvers import reverse
def environment(**options):
env = Environment(**options)
env.globals.update({
'static': staticfiles_storage.url,
'url': reverse,
})
return env
```
This makes `static` and `url` available in your Jinja2 templates.
**P.S.** For more details see [this article](http://jonathanchu.is/posts/upgrading-jinja2-templates-django-18-with-admin/). |
Syntax error while using Lambda functions | 30,705,624 | 3 | 2015-06-08T09:33:23Z | 30,705,693 | 8 | 2015-06-08T09:36:43Z | [
"python",
"filter",
"lambda"
] | I have a list with some dummy email address as below:
```
listEmails = ['[email protected]', '[email protected]', '[email protected]']
```
I was trying to use `lambda` and `filter` to get the list of valid email addresses.
Let's assume `[email protected]` is the only invalid email address.
I used the regular expression to filter out the invalid emails using the below code.
```
listValid = list(filter(lambda x: x if re.match(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]{0,3}$",x) ,listEmails))
```
I have been receiving a syntax error at `,` before `listEmails))`.
Generally, the `lambda` function takes the value after the comma (`,`) as the **input value**, so I am not sure if the `lambda` function is assuming `x` from `re.match(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]{0,3}$",x)` as the input value.
Lambda functions with `if` conditions are possible, as the case below shows:
```
from functools import reduce
f = lambda a,b: a if (a > b) else b
reduce(f, [47,11,42,102,13])
```
So, I wanted to know why it isn't working in my case?
Note: Since I got an error at the `lambda` function itself, I haven't evaluated whether the `list(filter(...))` call would return the desired result. | You are missing an *`else`* clause in the conditional expression:
```
x if re.match(...) else None
```
You cannot just use the `if` on its own; all expressions always produce a result, so if the `re.match()` returns `None`, you need to decide what should be returned *instead*.
You don't need a conditional expression here at all, just return the result of the `re.match()` call:
```
listValid = list(filter(lambda x: re.match(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]{0,3}$", x), listEmails))
```
In Python 3, it is often easier to just use a list comprehension instead of `filter()`:
```
listValid = [x for x in listEmails if re.match(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]{0,3}$", x)]
```
I'd store the compiled regular expression in a separate variable to make that a little more readable:
```
email_test = re.compile(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]{0,3}$")
listValid = [x for x in listEmails if email_test.match(x)]
``` |
Which Model Field to use in Django to store longitude and latitude values? | 30,706,799 | 14 | 2015-06-08T10:30:30Z | 30,711,177 | 15 | 2015-06-08T14:00:24Z | [
"python",
"django",
"geodjango"
] | I want to store my users' locations using longitude and latitude. At the moment this comes from Google Maps, but I will be using GeoDjango at some point to work out distances between two points as well.
However, my first point of confusion is which field in Django I should be using to store the longitude and latitude values. The information I'm getting is conflicting.
The official documentation uses a `FloatField`
<https://docs.djangoproject.com/en/dev/ref/contrib/gis/tutorial/#geographic-models>
```
lon = models.FloatField()
lat = models.FloatField()
```
Where almost every answer on stackoverflow shows a `DecimalField`
```
long = models.DecimalField(max_digits=8, decimal_places=3)
lat = models.DecimalField(max_digits=8, decimal_places=3)
```
So what should I be using? | Float is generally an approximation, see [here](https://docs.python.org/2/library/decimal.html#quick-start-tutorial) for some simple examples. You could get very nice results modifying your model to something like `DecimalField(max_digits=9, decimal_places=6)`, since decimals are very important in coordinates but using more than 6 is basically meaningless. |
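To see the drift the linked tutorial is talking about, here is a small standard-library sketch (independent of Django) comparing float arithmetic with `Decimal`:

```
from decimal import Decimal

# Binary floats cannot represent most base-10 fractions exactly,
# so arithmetic accumulates tiny errors:
lat_float = 0.1 + 0.2
print(lat_float)                              # 0.30000000000000004

# Decimal keeps the exact base-10 value you typed in:
lat_decimal = Decimal("0.1") + Decimal("0.2")
print(lat_decimal)                            # 0.3
```

At six decimal places a degree of latitude is resolved to roughly 0.1 m, which is why `max_digits=9, decimal_places=6` is usually plenty.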
Django models: managing transactions when commit_manually is deprecated | 30,707,408 | 3 | 2015-06-08T11:00:04Z | 30,708,253 | 7 | 2015-06-08T11:41:41Z | [
"python",
"django",
"django-models",
"override"
] | I'm running Django 1.4.11. I overrode the `save()` method of a Django model in a way similar to the following code:
```
from django.db import models
from django.db import transaction
class MyModel(models.Model):
# model definition
@transaction.commit_manually
def save(self, *args, **kwargs):
try:
super(self.__class__, self).save(*args, **kwargs)
foo() # do_other_things
except:
transaction.rollback()
raise
else:
transaction.commit()
```
When I run my code, sometimes I read this message in the Apache log:
> RemovedInDjango18Warning: commit\_manually is deprecated in favor of
> set\_autocommit.
How can I implement the same logic with set\_autocommit? | The same logic would look like this:
```
from django.db import models
from django.db import transaction
class MyModel(models.Model):
# model definition
def save(self, *args, **kwargs):
transaction.set_autocommit(False)
try:
super(MyModel, self).save(*args, **kwargs)
foo() # do_other_things
except:
transaction.rollback()
raise
else:
transaction.commit()
finally:
transaction.set_autocommit(True)
```
However, this would be equivalent to using the `atomic()` decorator:
```
from django.db import models
from django.db import transaction
class MyModel(models.Model):
# model definition
@transaction.atomic
def save(self, *args, **kwargs):
super(MyModel, self).save(*args, **kwargs)
foo() # do_other_things
```
This will commit the transaction on a successful `__exit__`, and roll back in the case of an exception. |
Python: What is the difference between math.exp and numpy.exp and why do numpy creators choose to introduce exp again | 30,712,402 | 12 | 2015-06-08T14:51:07Z | 30,712,501 | 15 | 2015-06-08T14:55:49Z | [
"python",
"arrays",
"numpy"
] | `exp` means exponential function
`exp` in `math module`: <https://docs.python.org/2/library/math.html>
`exp` in `numpy module`: <http://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html>
Why did the `numpy` creators introduce this function again? | `math.exp` works only for scalars, as [EdChum](http://stackoverflow.com/users/704848/edchum) mentions, whereas `numpy.exp` will work for arrays.
Example:
```
>>> import math
>>> import numpy as np
>>> x = [1.,2.,3.,4.,5.]
>>> math.exp(x)
Traceback (most recent call last):
File "<pyshell#10>", line 1, in <module>
math.exp(x)
TypeError: a float is required
>>> np.exp(x)
array([ 2.71828183, 7.3890561 , 20.08553692, 54.59815003,
148.4131591 ])
>>>
```
It is the same case for other `math` functions.
```
>>> math.sin(x)
Traceback (most recent call last):
File "<pyshell#12>", line 1, in <module>
math.sin(x)
TypeError: a float is required
>>> np.sin(x)
array([ 0.84147098, 0.90929743, 0.14112001, -0.7568025 , -0.95892427])
>>>
```
Also refer to [THIS ANSWER](http://stackoverflow.com/questions/3650194/are-numpys-math-functions-faster-than-pythons) to check out how `numpy` is faster than `math`. |
How can I draw a scatter plot with contour density lines in polar coordinates using Matplotlib? | 30,713,586 | 4 | 2015-06-08T15:43:23Z | 30,715,310 | 7 | 2015-06-08T17:17:07Z | [
"python",
"matplotlib",
"scatter-plot",
"polar-coordinates"
] | I am trying to make a scatter plot in **polar coordinates** with the contour lines superposed to the cloud of points. I am aware of how to do that in cartesian coordinates using `numpy.histogram2d`:
```
# Simple case: scatter plot with density contours in cartesian coordinates
import matplotlib.pyplot as pl
import numpy as np
np.random.seed(2015)
N = 1000
shift_value = -6.
x1 = np.random.randn(N) + shift_value
y1 = np.random.randn(N) + shift_value
fig, ax = pl.subplots(nrows=1,ncols=1)
ax.scatter(x1,y1,color='hotpink')
H, xedges, yedges = np.histogram2d(x1,y1)
extent = [xedges[0],xedges[-1],yedges[0],yedges[-1]]
cset1 = ax.contour(H,extent=extent)
# Modify xlim and ylim to be a bit more consistent with what's next
ax.set_xlim(xmin=-10.,xmax=+10.)
ax.set_ylim(ymin=-10.,ymax=+10.)
```
Output is here:

However, when I try to transpose my code to polar coordinates I get distorted contour lines. Here is my code and the produced (wrong) output:
```
# Case with polar coordinates; the contour lines are distorted
np.random.seed(2015)
N = 1000
shift_value = -6.
def CartesianToPolar(x,y):
r = np.sqrt(x**2 + y**2)
theta = np.arctan2(y,x)
return theta, r
x2 = np.random.randn(N) + shift_value
y2 = np.random.randn(N) + shift_value
theta2, r2 = CartesianToPolar(x2,y2)
fig2 = pl.figure()
ax2 = pl.subplot(projection="polar")
ax2.scatter(theta2, r2, color='hotpink')
H, xedges, yedges = np.histogram2d(x2,y2)
theta_edges, r_edges = CartesianToPolar(xedges[:-1],yedges[:-1])
ax2.contour(theta_edges, r_edges,H)
```
The *wrong* output is here:

Is there any way to have the contour lines at the proper scale?
EDIT to address suggestions made in comments.
EDIT2: Someone suggested that the question might be a duplicate of [this question](http://stackoverflow.com/questions/6548556/polar-contour-plot-in-matplotlib). Although I recognize that the problems are similar, mine deals specifically with plotting the density contours of points over a scatter plot. The other question is about how to plot the contour levels of any quantity that is specified along with the coordinates of the points. | The problem is that you're only converting the edges of the array. By converting only the x and y coordinates of the edges, you're effectively converting the coordinates of a diagonal line across the 2D array. This line has a very small range of `theta` values, and you're applying that range to the entire grid.
## The quick (but incorrect) fix
In most cases, you could convert the entire grid (i.e. 2D arrays of `x` and `y`, producing 2D arrays of `theta` and `r`) to polar coordinates.
Instead of:
```
H, xedges, yedges = np.histogram2d(x2,y2)
theta_edges, r_edges = CartesianToPolar(xedges[:-1],yedges[:-1])
```
Do something similar to:
```
H, xedges, yedges = np.histogram2d(x2,y2)
xedges, yedges = np.meshgrid(xedges[:-1], yedges[:-1])
theta_edges, r_edges = CartesianToPolar(xedges, yedges)
```
As a complete example:
```
import numpy as np
import matplotlib.pyplot as plt
def main():
x2, y2 = generate_data()
theta2, r2 = cart2polar(x2,y2)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111, projection="polar")
ax2.scatter(theta2, r2, color='hotpink')
H, xedges, yedges = np.histogram2d(x2,y2)
xedges, yedges = np.meshgrid(xedges[:-1], yedges[:-1])
theta_edges, r_edges = cart2polar(xedges, yedges)
ax2.contour(theta_edges, r_edges, H)
plt.show()
def generate_data():
np.random.seed(2015)
N = 1000
shift_value = -6.
x2 = np.random.randn(N) + shift_value
y2 = np.random.randn(N) + shift_value
return x2, y2
def cart2polar(x,y):
r = np.sqrt(x**2 + y**2)
theta = np.arctan2(y,x)
return theta, r
main()
```

However, you may notice that this looks slightly incorrect. That's because `ax.contour` implicitly assumes that the input data is on a regular grid. We've given it a regular grid in cartesian coordinates, but not a regular grid in polar coordinates. It's assuming we've passed it a regular grid in polar coordinates. We could resample the grid, but there's an easier way.
## The correct solution
To correctly plot the 2D histogram, compute the histogram in polar space.
For example, do something similar to:
```
theta2, r2 = cart2polar(x2,y2)
H, theta_edges, r_edges = np.histogram2d(theta2, r2)
ax2.contour(theta_edges[:-1], r_edges[:-1], H)
```
As a complete example:
```
import numpy as np
import matplotlib.pyplot as plt
def main():
x2, y2 = generate_data()
theta2, r2 = cart2polar(x2,y2)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111, projection="polar")
ax2.scatter(theta2, r2, color='hotpink')
H, theta_edges, r_edges = np.histogram2d(theta2, r2)
ax2.contour(theta_edges[:-1], r_edges[:-1], H)
plt.show()
def generate_data():
np.random.seed(2015)
N = 1000
shift_value = -6.
x2 = np.random.randn(N) + shift_value
y2 = np.random.randn(N) + shift_value
return x2, y2
def cart2polar(x,y):
r = np.sqrt(x**2 + y**2)
theta = np.arctan2(y,x)
return theta, r
main()
```

Finally, you might notice a slight shift in the above result. This has to do with cell-oriented grid conventions (`x[0,0], y[0,0]` gives the center of the cell) vs edge-oriented grid conventions (`x[0,0], y[0,0]` gives the lower-left corner of the cell). `ax.contour` is expecting things to be cell-centered, but you're giving it edge-aligned x and y values.
It's only a half-cell shift, but if you'd like to fix it, do something like:
```
def centers(bins):
return np.vstack([bins[:-1], bins[1:]]).mean(axis=0)
H, theta_edges, r_edges = np.histogram2d(theta2, r2)
theta_centers, r_centers = centers(theta_edges), centers(r_edges)
ax2.contour(theta_centers, r_centers, H)
```
 |
Python, Flask: How to set response header for all responses | 30,717,152 | 7 | 2015-06-08T19:08:33Z | 30,717,205 | 12 | 2015-06-08T19:11:13Z | [
"python",
"flask",
"http-headers"
] | I want to set a header on all of my HTTP responses, something like this:
```
response.headers["X-Frame-Options"] = "SAMEORIGIN"
```
I checked [this question](http://stackoverflow.com/questions/25860304/how-do-i-set-response-headers-in-flask), but it only changes the header for one specific controller. I want to change the headers for all responses, maybe in a "before\_request" function similar to the following logic. How can I do that?
```
@app.before_request
def before_request():
# response.headers["X-Frame-Options"] = "SAMEORIGIN"
``` | Set the header in a [`@app.after_request()` hook](http://flask.pocoo.org/docs/0.10/api/#flask.Flask.after_request), at which point you have a response object to set the header on:
```
@app.after_request
def apply_caching(response):
response.headers["X-Frame-Options"] = "SAMEORIGIN"
return response
```
The [`flask.request` context](http://flask.pocoo.org/docs/0.10/api/#flask.request) is still available when this hook runs, so you can still vary the response based on the request at this time. |
Are these two python statements the same? | 30,721,550 | 19 | 2015-06-09T01:18:13Z | 30,721,606 | 22 | 2015-06-09T01:26:37Z | [
"python",
"operators"
] | I have these two statements
```
return self.getData() if self.getData() else ''
```
and
```
return self.getData() or ''
```
I want to know whether they are the same or if there is any difference. | Maybe, but only if `self.getData()` is a pure function and has no *side effects*. More importantly, the object that `self.getData()` returns must also be free of any side effects and consistently evaluate to the same boolean value.
In the simplest case if `f()` is defined as:
```
def f():
return ["Hello World!"]
```
Then the following:
```
x = f() if f() else ""
```
is logically equivalent to:
```
x = f() or ""
```
Since `f()` is treated as a boolean expression in both cases and `f()` will evaluate to a `True`(ish) or `False`(ly) value both expressions will return the same result.
This is called [Logical Equivalence](http://en.wikipedia.org/wiki/Logical_equivalence)
> In logic, statements p and q are logically equivalent if they have the
> same logical content. This is a semantic concept; two statements are
> equivalent if they have the same truth value in every model (Mendelson
> 1979:56). The logical equivalence of p and q is sometimes expressed as
> p \equiv q, Epq, or p \Leftrightarrow q. However, these symbols are
> also used for material equivalence; the proper interpretation depends
> on the context. Logical equivalence is different from material
> equivalence, although the two concepts are closely related. |
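To make the purity condition concrete, here is a small sketch (with a standalone `get_data` function standing in for `self.getData()`) where the two forms diverge because the function has a side effect:

```
calls = []

def get_data():
    calls.append(1)                     # side effect: record every invocation
    return "payload" if len(calls) == 1 else ""

# `a if b else c` evaluates the condition first, then the result,
# so get_data() runs twice and the second call returns "".
result_conditional = get_data() if get_data() else ""

calls.clear()
# `or` evaluates get_data() exactly once.
result_or = get_data() or ""

print(repr(result_conditional))         # ''
print(repr(result_or))                  # 'payload'
```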
Are these two python statements the same? | 30,721,550 | 19 | 2015-06-09T01:18:13Z | 30,721,620 | 32 | 2015-06-09T01:28:49Z | [
"python",
"operators"
] | I have these two statements
```
return self.getData() if self.getData() else ''
```
and
```
return self.getData() or ''
```
I want to know whether they are the same or if there is any difference. | I would say no, because if `self.getData()` changes something during its operation, then the first statement has the possibility of returning a different result, since it will make a second call to it. |
Are these two python statements the same? | 30,721,550 | 19 | 2015-06-09T01:18:13Z | 30,721,645 | 7 | 2015-06-09T01:32:28Z | [
"python",
"operators"
] | I have these two statements
```
return self.getData() if self.getData() else ''
```
and
```
return self.getData() or ''
```
I want to know whether they are the same or if there is any difference. | The only difference I see is that the first one will call `self.getData()` twice: the first call is used to evaluate the boolean condition, and the second may be returned (if the first evaluated to True).
The other option will evaluate the function only once, using it both for the boolean check and as the return value.
This can be crucial if, for example, `self.getData()` deletes or modifies the data after returning it, or if the function takes a long time to compute. |
Are these two python statements the same? | 30,721,550 | 19 | 2015-06-09T01:18:13Z | 30,721,652 | 8 | 2015-06-09T01:33:09Z | [
"python",
"operators"
] | I have these two statements
```
return self.getData() if self.getData() else ''
```
and
```
return self.getData() or ''
```
I want to know whether they are the same or if there is any difference. | They will have the same result, since both treat `self.getData()`'s result in a boolean context, but beware:
1)
`return self.getData() if self.getData() else ''`
will run the function `getData` twice, while
2)
`return self.getData() or ''`
will only run it once. This can be important if `getData()` takes a while to execute, and it means that 1) is *not* the same as 2) if the function `getData()` has any side effects.
Stick with 2). |
Efficient algorithm to find the largest run of zeros in a binary string? | 30,722,732 | 4 | 2015-06-09T03:57:14Z | 30,722,751 | 8 | 2015-06-09T03:59:15Z | [
"python",
"algorithm"
] | I am looking for an efficient algorithm to find the longest run of zeros in a binary string. My implementation is in Python 2.7, but all I require is the idea of the algorithm.
For example, given '0010011010000', the function should return 4. | I don't think there is anything better than a single pass over the string, counting the current sequence length (and updating the maximum) as you go along.
If by "binary string" you mean raw bits, you can read them one byte at a time and extract the eight bits in there (using bit shifting or masking). That does not change the overall algorithm or its complexity. |
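A minimal sketch of that single pass in Python:

```
def longest_zero_run(bits):
    """Return the length of the longest run of '0' in a binary string."""
    best = current = 0
    for ch in bits:
        if ch == "0":
            current += 1
            best = max(best, current)
        else:
            current = 0
    return best

print(longest_zero_run("0010011010000"))  # 4
```

An equivalent one-liner, `max(len(run) for run in bits.split("1"))`, also reads the string once, though it builds the intermediate substrings.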
How to divide list item by list item from another list using Python? | 30,725,446 | 3 | 2015-06-09T07:23:05Z | 30,725,494 | 8 | 2015-06-09T07:25:07Z | [
"python",
"list",
"division"
] | I would like to divide list items inside two lists.
```
a = [[1, 0, 2], [0, 0, 0], [1], [1]]
b = [[5, 6, 4], [6, 6, 6], [3], [3]]
```
How can I divide a by b to obtain this output:
```
c = [[0.2, 0, 0.5], [0, 0, 0], [0.333], [0.333]]
```
Can anyone help me? | Zip the two lists and use a list comprehension:
```
from __future__ import division # in python2 only
result = [[x/y for x,y in zip(xs, ys)] for xs, ys in zip(a, b)]
```
---
Sample run:
```
In [1]: a = [[1, 0, 2], [0, 0, 0], [1], [1]]
...: b = [[5, 6, 4], [6, 6, 6], [3], [3]]
...:
In [2]: result = [[x/y for x,y in zip(xs, ys)] for xs, ys in zip(a, b)]
In [3]: result
Out[3]: [[0.2, 0.0, 0.5], [0.0, 0.0, 0.0], [0.3333333333333333], [0.3333333333333333]]
``` |
Print into console terminal not into cell output of IPython Notebook | 30,729,318 | 10 | 2015-06-09T10:22:25Z | 30,730,441 | 9 | 2015-06-09T11:14:17Z | [
"python",
"windows",
"ipython",
"ipython-notebook",
"python-3.4"
] | I would like to print into the terminal window that runs IPython Notebook and not into the cell output. Printing into cell output consumes more memory and slows down my system when I issue a substantial number of `print` calls. In essence, I would like [this](http://stackoverflow.com/questions/23306893/ipython-notebook-shows-output-in-terminal) behaviour by design.
I have tried the following:
1. I tried a different permutations of `print` and `sys.stdout.write` calls
2. I looked at the IPython Notebook documentation [here](http://ipython.org/ipython-doc/1/interactive/notebook.html), [here](http://nbviewer.ipython.org/github/ipython/ipython/blob/1.x/examples/notebooks/Part%202%20-%20Basic%20Output.ipynb) and [here](http://nbviewer.ipython.org/github/ipython/ipython/blob/1.x/examples/notebooks/Part%205%20-%20Rich%20Display%20System.ipynb) without help
3. I have tried using [this](http://stackoverflow.com/questions/25494182/print-not-showing-in-ipython-notebook-python) as a workaround but it seems to only work on Python 2.7 | You have to redirect your output to the system's standard output device. This depends on your OS. On Mac that would be:
```
import sys
sys.stdout = open('/dev/stdout', 'w')
```
Type the above code in an IPython cell and evaluate it. Afterwards, all output will show up in the terminal. |
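`/dev/stdout` only exists on Unix-like systems, and the question mentions Windows. A more portable sketch is to write to `sys.__stdout__`, which is the stream the interpreter started with (the launching terminal), untouched by the notebook's redirection of `sys.stdout`:

```
import sys

def terminal_print(*args):
    # sys.__stdout__ is the original stdout from interpreter start-up,
    # so this bypasses IPython's capture of sys.stdout
    print(*args, file=sys.__stdout__)

terminal_print("this goes to the launching terminal, not the cell output")
```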
seaborn: legend with background color | 30,729,473 | 8 | 2015-06-09T10:29:00Z | 30,736,466 | 11 | 2015-06-09T15:28:47Z | [
"python",
"plot",
"legend",
"seaborn"
] | The following question explains how to change the background color of a legend:
[matplotlib legend background color](http://stackoverflow.com/questions/19863368/matplotlib-legend-background-color). However, if I use seaborn this does not work. Is there a way to do this?
```
import matplotlib.pyplot as plt
import numpy as np
a = np.random.rand(10,1)
plt.plot(a, label='label')
legend = plt.legend()
frame = legend.get_frame()
frame.set_facecolor('green')
plt.show()
import seaborn as sns
plt.plot(a, label='label')
legend = plt.legend()
frame = legend.get_frame()
frame.set_facecolor('green')
plt.show()
```
  | seaborn turns the legend frame off by default, if you want to customize how the frame looks, I think you'll need to add `frameon=True` when you call `plt.legend`. |
Python default/unnamed method | 30,740,886 | 2 | 2015-06-09T19:15:52Z | 30,740,908 | 7 | 2015-06-09T19:17:01Z | [
"python",
"class",
"object"
] | is it possible to create a python object that has the following property:
```
class Foo:
def __default_method__(x):
return x
f = Foo()
f(10)
> 10
```
That is to say, an object that when instantiated allows for a method to be called without the need for an explicit method name? | Yes. It's called [`__call__()`](https://docs.python.org/3/reference/datamodel.html#object.__call__). |
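A minimal sketch of the `Foo` class from the question, using `__call__`:

```
class Foo:
    def __call__(self, x):
        # runs when an instance is called like a function: f(10)
        return x

f = Foo()
print(f(10))  # 10
```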
Profiling memory usage on App Engine | 30,742,104 | 9 | 2015-06-09T20:22:28Z | 30,742,988 | 7 | 2015-06-09T21:16:42Z | [
"python",
"google-app-engine",
"memory"
] | How can I profile memory (RAM) usage on my App Engine app? I'm trying to address errors related to exceeding the instance memory limit. I've tried these things and, so far, they don't work or don't provide what I need.
* Appstats. This doesn't provide memory usage details.
* [Apptrace](https://code.google.com/p/apptrace/). It hasn't been updated since 2012 and depends on a deprecated version of the SDK. Doesn't work out of the box.
* [Appengine-profiler](https://code.google.com/p/appengine-profiler/). Doesn't provide memory stats.
* [Gae-mini-profiler](https://github.com/Khan/gae_mini_profiler), which uses [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile). Doesn't provide memory stats.
* [guppy](https://pypi.python.org/pypi/guppy/0.1.10). After downloading and installing the library's code in my app's folder, running `guppy.hpy()` fails with `ImportError: No module named heapyc`
* [resource](https://docs.python.org/2/library/resource.html). Not part of the SDK's version of python, so I can't use it.
Am I wrong about any of the above? The top-rated answer (not the accepted one) on [this](http://stackoverflow.com/questions/11202475/memory-profiling-monitoring-python-on-google-appengine) question says that *there is no way* to monitor memory usage on App Engine. That can't be true. Can it?
**EDIT**
I can confirm that GAE mini profiler does the job. After installation, I could change the settings in the UI to "sampling with memory" and then see this readout:

Thanks to all the [contributors](https://github.com/Khan/gae_mini_profiler/graphs/contributors)! | GAE Mini Profiler does provide memory stats if you use the sampling profiler and set `memory_sample_rate` nonzero; at each snapshot it will tell you the memory that was in use. You will want to turn the sample frequency way down as the memory sample takes a few ms to execute.
Edit: the way it gets the memory stats is from the GAE [runtime API](https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.runtime.runtime) which is deprecated, but still worked as of last I knew; I'm not sure if there's a good replacement. |
How to join 2 lists of dicts in python? | 30,746,584 | 5 | 2015-06-10T03:27:40Z | 30,746,653 | 8 | 2015-06-10T03:36:26Z | [
"python",
"list",
"python-2.7"
] | I have 2 lists like this:
```
l1 = [{'a': 1, 'b': 2, 'c': 3, 'd': 4}, {'a': 5, 'b': 6, 'c': 7, 'd': 8}]
l2 = [{'a': 5, 'b': 6, 'e': 100}, {'a': 1, 'b': 2, 'e': 101}]
```
and I want to obtain a list `l3`, which is a join of `l1` and `l2` where values of `'a'` and `'b'` are equal in both `l1` and `l2`
i.e.
```
l3 = [{'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 101}, {'a': 5, 'b': 6, 'c': 7, 'd': 8, 'e': 100}]
```
How can I do this? | You should accumulate the results in a dictionary, using the values of 'a' and 'b' to form the key of this dictionary.
Here, I have used a `defaultdict` to accumulate the entries
```
l1 = [{'a': 1, 'b': 2, 'c': 3, 'd': 4}, {'a': 5, 'b': 6, 'c': 7, 'd': 8}]
l2 = [{'a': 5, 'b': 6, 'e': 100}, {'a': 1, 'b': 2, 'e': 101}]
from collections import defaultdict
D = defaultdict(dict)
for lst in l1, l2:
for item in lst:
key = item['a'], item['b']
D[key].update(item)
l3 = D.values()
print l3
```
output:
```
[{'a': 1, 'c': 3, 'b': 2, 'e': 101, 'd': 4}, {'a': 5, 'c': 7, 'b': 6, 'e': 100, 'd': 8}]
``` |
Create copy of plone installed onto another server with data | 30,749,053 | 3 | 2015-06-10T06:51:06Z | 30,749,377 | 7 | 2015-06-10T07:06:28Z | [
"python",
"linux",
"plone"
] | To create another exact copy of the plone install running along with data, is it sufficient to copy buildout.cfg and Data.fs with same version of Plone on the other install? Does it restore the uploaded pdf and image files that have been done on the first server?
Using plone 4.2.1 standalone install on linux | You are right that you also need to transfer files and images. They are stored as BLOBs on the file system.
I guess that you will find a directory named `blobstorage`, close to the `filestorage` directory where you found `Data.fs`.
You need to transfer this `blobstorage` directory and all its content. |
Python 3 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d | 30,750,843 | 6 | 2015-06-10T08:18:56Z | 30,751,201 | 9 | 2015-06-10T08:36:33Z | [
"python",
"unicode"
] | I want to make a search engine and I am following a tutorial on the web.
I want to test parsing HTML:
```
from bs4 import BeautifulSoup
def parse_html(filename):
"""Extract the Author, Title and Text from a HTML file
which was produced by pdftotext with the option -htmlmeta."""
with open(filename) as infile:
html = BeautifulSoup(infile, "html.parser", from_encoding='utf-8')
d = {'text': html.pre.text}
if html.title is not None:
d['title'] = html.title.text
for meta in html.findAll('meta'):
try:
if meta['name'] in ('Author', 'Title'):
d[meta['name'].lower()] = meta['content']
except KeyError:
continue
return d
parse_html("C:\\pdf\\pydf\\data\\muellner2011.html")
```
and I am getting this error:
```
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 867: character maps to <undefined>
```
I saw some solutions on the Web using `encode()`, but I don't know how to insert the `encode()` call into my code. Can anyone help me? | In Python 3, files are opened in text mode (decoded to Unicode) for you; you don't need to tell BeautifulSoup what codec to decode from.
If decoding of the data fails, that's because you didn't tell the `open()` call what codec to use when reading the file; add the correct codec with an `encoding` argument:
```
with open(filename, encoding='utf8') as infile:
html = BeautifulSoup(infile, "html.parser")
```
otherwise the file will be opened with your system default codec, which is OS dependent. |
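Here is a self-contained sketch reproducing the exact error: the UTF-8 encoding of `”` (U+201D) is `e2 80 9d`, and the byte `0x9d` is unmapped in `cp1252`, a common Windows default codec:

```
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo_utf8.html")
with open(path, "wb") as f:
    f.write("right quote: \u201d".encode("utf8"))   # ends with byte 0x9d

try:
    with open(path, encoding="cp1252") as infile:    # simulate a cp1252 default
        infile.read()
except UnicodeDecodeError as exc:
    print("fails:", exc.reason)                      # character maps to <undefined>

with open(path, encoding="utf8") as infile:          # explicit codec: works
    text = infile.read()
print(text)
```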
How to filter objects for count annotation in Django? | 30,752,268 | 18 | 2015-06-10T09:22:55Z | 30,754,520 | 30 | 2015-06-10T11:07:28Z | [
"python",
"django",
"django-models",
"django-aggregation"
] | Consider simple Django models `Event` and `Participant`:
```
class Event(models.Model):
title = models.CharField(max_length=100)
class Participant(models.Model):
event = models.ForeignKey(Event, db_index=True)
is_paid = models.BooleanField(default=False, db_index=True)
```
It's easy to annotate events query with total number of participants:
```
events = Event.objects.all().annotate(participants=models.Count('participant'))
```
**How to annotate with count of participants filtered by `is_paid=True`?**
I need to query **all events** regardless of number of participants, e.g. I don't need to filter by annotated result. If there are `0` participants, that's ok, I just need `0` in annotated value.
The [example from documentation](https://docs.djangoproject.com/en/1.8/topics/db/aggregation/#order-of-annotate-and-filter-clauses) doesn't work here, because it excludes objects from query instead of annotating them with `0`.
**Update.** Django 1.8 has a new [conditional expressions feature](https://docs.djangoproject.com/en/1.8/ref/models/conditional-expressions/), so now we can do it like this:
```
events = Event.objects.all().annotate(paid_participants=models.Sum(
models.Case(
models.When(participant__is_paid=True, then=1),
default=0,
output_field=models.IntegerField()
)))
``` | I just discovered that Django 1.8 has a new [conditional expressions feature](https://docs.djangoproject.com/en/1.10/ref/models/conditional-expressions/), so now we can do it like this:
```
events = Event.objects.all().annotate(paid_participants=models.Sum(
models.Case(
models.When(participant__is_paid=True, then=1),
default=0, output_field=models.IntegerField()
)))
``` |
How to add any new library like spark-csv in Apache Spark prebuilt version | 30,757,439 | 13 | 2015-06-10T13:13:47Z | 30,765,306 | 15 | 2015-06-10T19:14:33Z | [
"python",
"apache-spark",
"apache-spark-sql"
] | I have built the [Spark-csv](https://github.com/databricks/spark-csv) package and am able to use it from the pyspark shell using the following command
```
bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3
```
but I am getting this error:
```
>>> df_cat.save("k.csv","com.databricks.spark.csv")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/pyspark/sql/dataframe.py", line 209, in save
self._jdf.save(source, jmode, joptions)
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError
```
Where should I place the jar file in my spark pre-built setup so that I will be able to access `spark-csv` from the python editor directly as well? | At the time I used spark-csv, I also had to download the `commons-csv` jar (not sure it is still relevant). Both jars were in the spark distribution folder.
1. I downloaded the jars as follows:
```
wget http://search.maven.org/remotecontent?filepath=org/apache/commons/commons-csv/1.1/commons-csv-1.1.jar -O commons-csv-1.1.jar
wget http://search.maven.org/remotecontent?filepath=com/databricks/spark-csv_2.10/1.0.0/spark-csv_2.10-1.0.0.jar -O spark-csv_2.10-1.0.0.jar
```
2. then started the python spark shell with the arguments:
```
./bin/pyspark --jars "spark-csv_2.10-1.0.0.jar,commons-csv-1.1.jar"
```
3. and read a spark dataframe from a csv file:
```
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="com.databricks.spark.csv", path="/path/to/your/file.csv")
df.show()
``` |
How to add any new library like spark-csv in Apache Spark prebuilt version | 30,757,439 | 13 | 2015-06-10T13:13:47Z | 31,182,969 | 10 | 2015-07-02T11:11:37Z | [
"python",
"apache-spark",
"apache-spark-sql"
] | I have built the [Spark-csv](https://github.com/databricks/spark-csv) package and am able to use it from the pyspark shell using the following command
```
bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3
```
Error I'm getting:
```
>>> df_cat.save("k.csv","com.databricks.spark.csv")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/pyspark/sql/dataframe.py", line 209, in save
self._jdf.save(source, jmode, joptions)
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError
```
Where should I place the jar file in my spark pre-built setup so that I will be able to access `spark-csv` from python editor directly as well. | Instead of placing the jars in any specific folder a simple fix would be to start the pyspark shell with the following arguments:
```
bin/pyspark --packages com.databricks:spark-csv_2.10:1.0.3
```
This will automatically load the required spark-csv jars.
Then do the following to read the csv file:
```
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('file.csv')
df.show()
``` |
How to add any new library like spark-csv in Apache Spark prebuilt version | 30,757,439 | 13 | 2015-06-10T13:13:47Z | 35,476,136 | 10 | 2016-02-18T08:18:59Z | [
"python",
"apache-spark",
"apache-spark-sql"
] | I have build the [Spark-csv](https://github.com/databricks/spark-csv) and able to use the same from pyspark shell using the following command
```
bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3
```
Error I'm getting:
```
>>> df_cat.save("k.csv","com.databricks.spark.csv")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/pyspark/sql/dataframe.py", line 209, in save
self._jdf.save(source, jmode, joptions)
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError
```
Where should I place the jar file in my spark pre-built setup so that I will be able to access `spark-csv` from python editor directly as well. | Another option is to add the following to your spark-defaults.conf:
```
spark.jars.packages com.databricks:spark-csv_2.11:1.2.0
``` |
How to slice middle element from list | 30,757,538 | 2 | 2015-06-10T13:18:08Z | 30,757,595 | 8 | 2015-06-10T13:20:06Z | [
"python",
"slice"
] | Rather simple question. Say I have a list like:
```
a = [3, 4, 54, 8, 96, 2]
```
Can I use slicing to leave out an element around the middle of the list to produce something like this?
```
a[some_slicing]
[3, 4, 8, 96, 2]
```
where the element `54` was left out. I would've guessed this would do the trick:
```
a[:2:]
```
but the result is not what I expected:
```
[3, 4]
``` | You cannot emulate pop with a single slice, since a slice only gives you a single start and end index.
You can, however, use two slices:
```
>>> a = [3, 4, 54, 8, 96, 2]
>>> a[:2] + a[3:]
[3, 4, 8, 96, 2]
```
You could wrap this into a function:
```
>>> def cutout(seq, idx):
"""
Remove element at `idx` from `seq`.
TODO: error checks.
"""
return seq[:idx] + seq[idx + 1:]
>>> cutout([3, 4, 54, 8, 96, 2], 2)
[3, 4, 8, 96, 2]
```
However, [`pop`](https://docs.python.org/3.5/tutorial/datastructures.html#more-on-lists) will be [faster](http://stackoverflow.com/a/30757658/89391). The list pop function is defined in [listobject.c](https://github.com/python/cpython/blob/69276f6f697cc0f6737fb3509b1a769a1d3fbdd0/Objects/listobject.c#L911). |
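If mutation is acceptable (or you copy first), a sketch of the `pop`-based route — `cutout_pop` is an illustrative name, not part of the original answer:

```python
def cutout_pop(seq, idx):
    """Remove element at `idx`, leaving the caller's list untouched."""
    out = list(seq)  # copy, since pop mutates in place
    out.pop(idx)
    return out

print(cutout_pop([3, 4, 54, 8, 96, 2], 2))  # -> [3, 4, 8, 96, 2]
```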
Extracting specific src attributes from script tags | 30,759,776 | 4 | 2015-06-10T14:46:04Z | 30,759,895 | 7 | 2015-06-10T14:51:18Z | [
"python",
"regex"
] | I want to use a RE to get, from the input content, the JS file names which contain `jquery` as a substring.
This is my code:
Step 1: Extract JS file from the content.
```
>>> data = """ <script type="text/javascript" src="js/jquery-1.9.1.min.js"/>
... <script type="text/javascript" src="js/jquery-migrate-1.2.1.min.js"/>
... <script type="text/javascript" src="js/jquery-ui.min.js"/>
... <script type="text/javascript" src="js/abc_bsub.js"/>
... <script type="text/javascript" src="js/abc_core.js"/>
... <script type="text/javascript" src="js/abc_explore.js"/>
... <script type="text/javascript" src="js/abc_qaa.js"/>"""
>>> import re
>>> re.findall('src="js/([^"]+)"', data)
['jquery-1.9.1.min.js', 'jquery-migrate-1.2.1.min.js', 'jquery-ui.min.js', 'abc_bsub.js', 'abc_core.js', 'abc_explore.js', 'abc_qaa.js']
```
Step 2: Get the JS files which have `jquery` as a substring
```
>>> [ii for ii in re.findall('src="js/([^"]+)"', data) if "jquery" in ii]
['jquery-1.9.1.min.js', 'jquery-migrate-1.2.1.min.js', 'jquery-ui.min.js']
```
Can I combine Step 2 into Step 1, i.e. get the same result with a single RE pattern? | Sure you can. One way would be to use
```
re.findall('src="js/([^"]*jquery[^"]*)"', data)
```
This will match everything after `"js/` until the nearest `"` if it contains `jquery` anywhere. If you know more about the position of `jquery` (for example, if it's always at the start) you can adjust the regex accordingly.
If you want to make sure that `jquery` is not directly surrounded by other alphanumeric characters, use [word boundary anchors](http://www.regular-expressions.info/wordboundaries.html):
```
re.findall(r'src="js/([^"]*\bjquery\b[^"]*)"', data)
``` |
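A quick check of the difference on a made-up input that also contains a lookalike filename:

```python
import re

# Hypothetical input: one real jquery file, one file that merely contains "jquery"
data = '<script src="js/myjquery.js"/> <script src="js/jquery-ui.min.js"/>'

print(re.findall(r'src="js/([^"]*jquery[^"]*)"', data))
# -> ['myjquery.js', 'jquery-ui.min.js']

print(re.findall(r'src="js/([^"]*\bjquery\b[^"]*)"', data))
# -> ['jquery-ui.min.js']  (the word boundary rejects "myjquery")
```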
Installing Twisted through pip broken on one server | 30,763,614 | 5 | 2015-06-10T17:43:20Z | 30,766,325 | 10 | 2015-06-10T20:08:56Z | [
"python",
"pip",
"virtualenv",
"twisted"
] | I am setting up a virtualenv on a new server, and when I used pip on our requirements file, it kept dying on Twisted. I commented the Twisted line out, and everything else installed fine. At the command line, this is the output I see when I try to install Twisted (the same error I see when I run the entire requirements file once it gets to the Twisted line):
```
(foo)briggo@qa01:~$ pip install twisted
Collecting twisted
Could not find a version that satisfies the requirement twisted (from versions: )
No matching distribution found for twisted
```
I can install Twisted fine from my dev machine and other servers, and on this server I seem to be able to install other packages fine.
Case and version do not matter. Same result if I use "twisted", "Twisted", "Twisted==15.2.1".
This is an EC2 instance running Ubuntu 14.04.02. | Ok after struggling with this for several hours, I figured out the problem.
Running `pip install --verbose twisted` helped with the diagnosis.
The error message is misleading. The problem is that I built a custom installation of Python 2.7.10 without having previously installed libbz2-dev. So the steps to fix this were:
1. `sudo apt-get install libbz2-dev`
2. `cd /<untarred python source dir>`
3. `./configure --prefix=<my install path> --enable-ipv6`
4. `make`
5. `make install`
With this done, I can now create virtual environments and pip install Twisted. |
Python numpy: create 2d array of values based on coordinates | 30,764,955 | 6 | 2015-06-10T18:55:19Z | 30,765,484 | 10 | 2015-06-10T19:23:28Z | [
"python",
"arrays",
"numpy"
] | I have a file containing 3 columns, where the first two are coordinates (x,y) and the third is a value (z) corresponding to that position. Here's a short example:
```
x y z
0 1 14
0 2 17
1 0 15
1 1 16
2 1 18
2 2 13
```
I want to create a 2D array of values from the third row based on their x,y coordinates in the file. I read in each column as an individual array, and I created grids of x values and y values using numpy.meshgrid, like this:
```
x = [[0 1 2] and y = [[0 0 0]
[0 1 2] [1 1 1]
[0 1 2]] [2 2 2]]
```
but I'm new to Python and don't know how to produce a third grid of z values that looks like this:
```
z = [[Nan 15 Nan]
[14 16 18]
[17 Nan 13]]
```
Replacing `Nan` with `0` would be fine, too; my main problem is creating the 2D array in the first place. Thanks in advance for your help! | Assuming the `x` and `y` values in your file directly correspond to indices (as they do in your example), you can do something similar to this:
```
import numpy as np
x = [0, 0, 1, 1, 2, 2]
y = [1, 2, 0, 1, 1, 2]
z = [14, 17, 15, 16, 18, 13]
z_array = np.nan * np.empty((3,3))
z_array[y, x] = z
print z_array
```
Which yields:
```
[[ nan 15. nan]
[ 14. 16. 18.]
[ 17. nan 13.]]
```
For large arrays, this will be much faster than the explicit loop over the coordinates.
---
## Dealing with non-uniform x & y input
If you have regularly sampled x & y points, then you can convert them to grid indices by subtracting the "corner" of your grid (i.e. `x0` and `y0`), dividing by the cell spacing, and casting as ints. You can then use the method above or in any of the other answers.
As a general example:
```
i = ((y - y0) / dy).astype(int)
j = ((x - x0) / dx).astype(int)
grid[i,j] = z
```
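To make the index conversion concrete, a small runnable example with made-up points sampled on a 0.5-spaced grid:

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0])
y = np.array([0.0, 0.0, 0.5])
z = np.array([10.0, 20.0, 30.0])

x0, y0, dx, dy = 0.0, 0.0, 0.5, 0.5
grid = np.full((2, 3), np.nan)

i = ((y - y0) / dy).astype(int)
j = ((x - x0) / dx).astype(int)
grid[i, j] = z

print(grid)  # row 0: 10, 20, nan; row 1: nan, nan, 30
```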
However, there are a couple of tricks you can use if your data is not regularly spaced.
Let's say that we have the following data:
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1977)
x, y, z = np.random.random((3, 10))
fig, ax = plt.subplots()
scat = ax.scatter(x, y, c=z, s=200)
fig.colorbar(scat)
ax.margins(0.05)
```

That we want to put into a regular 10x10 grid:

We can actually use/abuse `np.histogram2d` for this. Instead of counts, we'll have it add the value of each point that falls into a cell. It's easiest to do this through specifying `weights=z, normed=False`.
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1977)
x, y, z = np.random.random((3, 10))
# Bin the data onto a 10x10 grid
# Have to reverse x & y due to row-first indexing
zi, yi, xi = np.histogram2d(y, x, bins=(10,10), weights=z, normed=False)
zi = np.ma.masked_equal(zi, 0)
fig, ax = plt.subplots()
ax.pcolormesh(xi, yi, zi, edgecolors='black')
scat = ax.scatter(x, y, c=z, s=200)
fig.colorbar(scat)
ax.margins(0.05)
plt.show()
```

However, if we have a large number of points, some bins will have more than one point. The `weights` argument to `np.histogram` simply *adds* the values. That's probably not what you want in this case. Nonetheless, we can get the mean of the points that fall in each cell by dividing by the counts.
So, for example, let's say we have 50 points:
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1977)
x, y, z = np.random.random((3, 50))
# Bin the data onto a 10x10 grid
# Have to reverse x & y due to row-first indexing
zi, yi, xi = np.histogram2d(y, x, bins=(10,10), weights=z, normed=False)
counts, _, _ = np.histogram2d(y, x, bins=(10,10))
zi = zi / counts
zi = np.ma.masked_invalid(zi)
fig, ax = plt.subplots()
ax.pcolormesh(xi, yi, zi, edgecolors='black')
scat = ax.scatter(x, y, c=z, s=200)
fig.colorbar(scat)
ax.margins(0.05)
plt.show()
```

With very large numbers of points, this exact method will become slow (and can be sped up easily), but it's sufficient for anything less than ~1e6 points. |
What's the correct way to clean up after an interrupted event loop? | 30,765,606 | 19 | 2015-06-10T19:30:45Z | 30,766,124 | 17 | 2015-06-10T19:58:05Z | [
"python",
"python-3.4",
"python-asyncio"
] | I have an event loop that runs some co-routines as part of a command line tool. The user may interrupt the tool with the usual `Ctrl` + `C`, at which point I want to clean up properly after the interrupted event loop.
Here's what I tried.
```
import asyncio
@asyncio.coroutine
def shleepy_time(seconds):
print("Shleeping for {s} seconds...".format(s=seconds))
yield from asyncio.sleep(seconds)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
# Side note: Apparently, async() will be deprecated in 3.4.4.
# See: https://docs.python.org/3.4/library/asyncio-task.html#asyncio.async
tasks = [
asyncio.async(shleepy_time(seconds=5)),
asyncio.async(shleepy_time(seconds=10))
]
try:
loop.run_until_complete(asyncio.gather(*tasks))
except KeyboardInterrupt as e:
print("Caught keyboard interrupt. Canceling tasks...")
# This doesn't seem to be the correct solution.
for t in tasks:
t.cancel()
finally:
loop.close()
```
Running this and hitting `Ctrl` + `C` yields:
```
$ python3 asyncio-keyboardinterrupt-example.py
Shleeping for 5 seconds...
Shleeping for 10 seconds...
^CCaught keyboard interrupt. Canceling tasks...
Task was destroyed but it is pending!
task: <Task pending coro=<shleepy_time() running at asyncio-keyboardinterrupt-example.py:7> wait_for=<Future cancelled> cb=[gather.<locals>._done_callback(1)() at /usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/tasks.py:587]>
Task was destroyed but it is pending!
task: <Task pending coro=<shleepy_time() running at asyncio-keyboardinterrupt-example.py:7> wait_for=<Future cancelled> cb=[gather.<locals>._done_callback(0)() at /usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/tasks.py:587]>
```
Clearly, I didn't clean up correctly. I thought perhaps calling `cancel()` on the tasks would be the way to do it.
What's the correct way to clean up after an interrupted event loop? | When you CTRL+C, the event loop gets stopped, so your calls to `t.cancel()` don't actually take effect. For the tasks to be cancelled, you need to start the loop back up again.
Here's how you can handle it:
```
import asyncio
@asyncio.coroutine
def shleepy_time(seconds):
print("Shleeping for {s} seconds...".format(s=seconds))
yield from asyncio.sleep(seconds)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
# Side note: Apparently, async() will be deprecated in 3.4.4.
# See: https://docs.python.org/3.4/library/asyncio-task.html#asyncio.async
tasks = asyncio.gather(
asyncio.async(shleepy_time(seconds=5)),
asyncio.async(shleepy_time(seconds=10))
)
try:
loop.run_until_complete(tasks)
except KeyboardInterrupt as e:
print("Caught keyboard interrupt. Canceling tasks...")
tasks.cancel()
loop.run_forever()
tasks.exception()
finally:
loop.close()
```
Once we catch `KeyboardInterrupt`, we call `tasks.cancel()` and then start the `loop` up again. `run_forever` will actually exit as soon as `tasks` gets cancelled (note that cancelling the `Future` returned by `asyncio.gather` also cancels all the `Futures` inside of it), because the interrupted `loop.run_until_complete` call added a `done_callback` to `tasks` that stops the loop. So, when we cancel `tasks`, that callback fires, and the loop stops. At that point we call `tasks.exception`, just to avoid getting a warning about not fetching the exception from the `_GatheringFuture`. |
Regular Expression to find brackets in a string | 30,766,151 | 4 | 2015-06-10T19:59:36Z | 30,766,212 | 7 | 2015-06-10T20:02:51Z | [
"python",
"regex"
] | I have a string which has multiple brackets. Let says
```
s="(a(vdwvndw){}]"
```
I want to extract all the brackets as a separate string.
I tried this:
```
>>> brackets=re.search(r"[(){}[]]+",s)
>>> brackets.group()
```
But it is only giving me last two brackets.
```
'}]'
```
Why is that? Shouldn't it fetch one or more of any of the brackets in the character set? | You have to escape the first closing square bracket.
```
r'[(){}[\]]+'
```
To combine all of them into a string, you can search for anything that *doesn't* match and remove it.
```
brackets = re.sub( r'[^(){}[\]]', '', s)
``` |
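Putting both pieces together on the question's string:

```python
import re

s = "(a(vdwvndw){}]"

# With the escaped class, findall returns each run of brackets
print(re.findall(r'[(){}[\]]+', s))   # -> ['(', '(', '){}]']

# Removing everything that is NOT a bracket yields one combined string
print(re.sub(r'[^(){}[\]]', '', s))   # -> '((){}]'
```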
Error importing scikit-learn modules | 30,766,274 | 5 | 2015-06-10T20:06:39Z | 34,579,309 | 8 | 2016-01-03T17:17:40Z | [
"python",
"scikit-learn"
] | I'm trying to call a function from the cluster module, like so:
```
import sklearn
db = sklearn.cluster.DBSCAN()
```
and I get the following error:
```
AttributeError: 'module' object has no attribute 'cluster'
```
Tab-completing in IPython, I seem to have access to the base, clone, externals, re, setup\_module, sys, and warning modules. Nothing else, though others (including cluster) are in the sklearn directory.
Following pbu's advice below and using
```
from sklearn import cluster
```
I get:
```
Traceback (most recent call last):
File "test.py", line 2, in <module>
from sklearn import cluster
File "C:\Python34\lib\site-packages\sklearn\cluster\__init__.py", line 6, in <module>
from .spectral import spectral_clustering, SpectralClustering
File "C:\Python34\lib\site-packages\sklearn\cluster\spectral.py", line 13, in <module>
from ..utils import check_random_state, as_float_array
File "C:\Python34\lib\site-packages\sklearn\utils\__init__.py", line 16, in <module>
from .class_weight import compute_class_weight, compute_sample_weight
File "C:\Python34\lib\site-packages\sklearn\utils\class_weight.py", line 7, in <module>
from ..utils.fixes import in1d
File "C:\Python34\lib\site-packages\sklearn\utils\fixes.py", line 318, in <module>
from scipy.sparse.linalg import lsqr as sparse_lsqr
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\__init__.py", line 109, in <module>
from .isolve import *
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module>
from .iterative import *
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 7, in <module>
from . import _iterative
ImportError: DLL load failed: The specified module could not be found.
```
I'm using Python 3.4 on Windows, scikit-learn 0.16.1. | You probably don't use Numpy+MKL, but only Numpy.
I had the same problem and reinstalling Numpy with MKL
`pip install --upgrade --force-reinstall "numpy-1.10.2+mkl-cp35-none-win32.whl"`
fixed it. |
flask restful: passing parameters to GET request | 30,779,584 | 4 | 2015-06-11T11:31:20Z | 30,779,996 | 16 | 2015-06-11T11:51:17Z | [
"python",
"rest",
"flask",
"flask-restful"
] | I want to create a resource that supports GET request in following way:
```
/bar?key1=val1&key2=val2
```
I tried this code, but it is not working
```
app = Flask(__name__)
api = Api(app)
class BarAPI(Resource):
def get(key1, key2):
return jsonify(dict(data=[key1, key2]))
api.add_resource(BarAPI, '/bar', endpoint='bar')
```
Thanks! | **Edit: This is no longer the recommended way to do this with flask-restful!** The `reqparse` object is deprecated see [docs](http://flask-restful-cn.readthedocs.org/en/0.3.5/reqparse.html) for recommended alternative.
---
Use `reqparse`. You can see another example in the flask-restful [docs](http://flask-restful.readthedocs.org/en/latest/quickstart.html#argument-parsing).
It performs validation on the parameters and does not require `jsonify`.
```
from flask import Flask
from flask_restful import Resource, Api, reqparse
app = Flask(__name__)
api = Api(app)
class BarAPI(Resource):
def get(self):
parser = reqparse.RequestParser()
parser.add_argument('key1', type=str)
parser.add_argument('key2', type=str)
return parser.parse_args()
api.add_resource(BarAPI, '/bar', endpoint='bar')
if __name__ == '__main__':
app.run(debug=True)
``` |
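Since `reqparse` is deprecated, a lighter option is to read the query string straight from Flask's `request.args`. The sketch below uses plain Flask routing for brevity; inside a flask-restful `Resource.get` the same `request.args` lookups work:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/bar')
def bar():
    # request.args is the parsed query string: /bar?key1=val1&key2=val2
    return jsonify(key1=request.args.get('key1'),
                   key2=request.args.get('key2'))
```

Note that `request.args.get` returns `None` for missing keys, so you'd add your own validation where `reqparse` used to do it.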
An easier way of referring to the index of a desired dictionary (Python)? | 30,780,027 | 2 | 2015-06-11T11:52:27Z | 30,780,126 | 7 | 2015-06-11T11:57:26Z | [
"python",
"for-loop"
] | Question feels like it's phrased poorly, feel free to adjust it if you agree and know how better to phrase it.
I have the following code:
```
def owned_calendars(cal_items):
"""Returns only the calendars in which the user is marked as "owner"
"""
owner_cals = []
for entry in cal_items:
if entry['accessRole'] == "owner":
owner_cals.append(cal_items[cal_items.index(entry)])
return owner_cals
```
`cal_items` is a `list` of `dictionaries`
In the line where I have written `owner_cals.append(cal_items[cal_items.index(entry)])` I'm trying to append the dictionaries that have the property `accessRole = owner`.
The line just seems super long and clunky, and I'm wondering if there's an easier/more intuitive way to do it? | Try this. You can do this in one line using [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions).
```
owner_cals = [x for x in cal_items if x["accessRole"] == "owner"]
```
You can also use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate) method.
```
owner_cals = [j for i, j in enumerate(cal_items) if j["accessRole"] == "owner"]
```
**Also, remember `.index()` returns the lowest index where item is found**.
`["foo", "bar", "baz", "bar"].index("bar")` will always return 1. |
How to use `--foo 1 --foo 2` style arguments with Python argparse? | 30,780,779 | 6 | 2015-06-11T12:28:11Z | 30,780,951 | 7 | 2015-06-11T12:35:19Z | [
"python",
"arguments",
"argparse"
] | `nargs='+'` doesn't work the way I expected:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument("--name", dest='names', nargs='+')
_StoreAction(option_strings=['--name'], dest='names', nargs='+', const=None, default=None, type=None, choices=None, help=None, metavar=None)
>>> parser.parse_args('--name foo --name bar'.split())
Namespace(names=['bar'])
```
I can "fix" this by using `--name foo bar`, but that's unlike other tools I've used, and I'd rather be more explicit. Does `argparse` support this? | You want to use `action='append'` instead of `nargs='+'`:
```
>>> parser.add_argument("--name", dest='names', action='append')
_AppendAction(option_strings=['--name'], dest='names', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None)
>>> parser.parse_args('--name foo --name bar'.split())
Namespace(names=['foo', 'bar'])
```
`nargs` is used if you just want to take a series of positional arguments, while `action='append'` works if you want to be able to take a flag more than once and accumulate the results in a list. |
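If each flag occurrence should itself accept several values, the two can be combined — with `action='append'` plus `nargs='+'`, each `--name` contributes its own list:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--name', dest='names', action='append', nargs='+')

args = parser.parse_args('--name foo bar --name baz'.split())
print(args.names)  # -> [['foo', 'bar'], ['baz']]
```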
"Too many indexers" with DataFrame.loc | 30,781,037 | 8 | 2015-06-11T12:38:50Z | 30,781,664 | 8 | 2015-06-11T13:06:10Z | [
"python",
"pandas"
] | I've read [the docs about slicers](http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers) a million times, but have never got my head round it, so I'm still trying to figure out how to use `loc` to slice a `DataFrame` with a `MultiIndex`.
I'll start with the `DataFrame` from [this SO answer](http://stackoverflow.com/a/22987532/2071807):
```
value
first second third fourth
A0 B0 C1 D0 2
D1 3
C3 D0 6
D1 7
B1 C1 D0 10
D1 11
C3 D0 14
D1 15
A1 B0 C1 D0 18
D1 19
C3 D0 22
D1 23
B1 C1 D0 26
D1 27
C3 D0 30
D1 31
A2 B0 C1 D0 34
D1 35
C3 D0 38
D1 39
B1 C1 D0 42
D1 43
C3 D0 46
D1 47
A3 B0 C1 D0 50
D1 51
C3 D0 54
D1 55
B1 C1 D0 58
D1 59
C3 D0 62
D1 63
```
To select just `A0` and `C1` values, I can do:
```
In [26]: df.loc['A0', :, 'C1', :]
Out[26]:
value
first second third fourth
A0 B0 C1 D0 2
D1 3
B1 C1 D0 10
D1 11
```
Which also works selecting from three levels, and even with tuples:
```
In [28]: df.loc['A0', :, ('C1', 'C2'), 'D1']
Out[28]:
value
first second third fourth
A0 B0 C1 D1 3
C2 D1 5
B1 C1 D1 11
C2 D1 13
```
So far, intuitive and brilliant.
So why can't I select all values from the first index level?
```
In [30]: df.loc[:, :, 'C1', :]
---------------------------------------------------------------------------
IndexingError Traceback (most recent call last)
<ipython-input-30-57b56108d941> in <module>()
----> 1 df.loc[:, :, 'C1', :]
/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in __getitem__(self, key)
1176 def __getitem__(self, key):
1177 if type(key) is tuple:
-> 1178 return self._getitem_tuple(key)
1179 else:
1180 return self._getitem_axis(key, axis=0)
/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
694
695 # no multi-index, so validate all of the indexers
--> 696 self._has_valid_tuple(tup)
697
698 # ugly hack for GH #836
/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in _has_valid_tuple(self, key)
125 for i, k in enumerate(key):
126 if i >= self.obj.ndim:
--> 127 raise IndexingError('Too many indexers')
128 if not self._has_valid_type(k, i):
129 raise ValueError("Location based indexing can only have [%s] "
IndexingError: Too many indexers
```
Surely this is not intended behaviour?
*Note: I know this is possible with `df.xs('C1', level='third')` but the current `.loc` behaviour seems inconsistent.* | To be safe (in the sense: this will work in all cases), you need to index both row index and columns, for which you can use `pd.IndexSlice` to do this easily:
```
In [26]: idx = pd.IndexSlice
In [27]: df.loc[idx[:, :, 'C1', :],:]
Out[27]:
value
first second third fourth
A0 B0 C1 D0 2
D1 3
B1 C1 D0 10
D1 11
A1 B0 C1 D0 18
D1 19
B1 C1 D0 26
D1 27
A2 B0 C1 D0 34
D1 35
B1 C1 D0 42
D1 43
A3 B0 C1 D0 50
D1 51
B1 C1 D0 58
D1 59
```
Here `idx[:, :, 'C1', :]` is an easier way to write `[slice(None), slice(None),'C1', slice(None)]`. Instead of `pd.IndexSlice`, you can also use `np.s_` which is a bit shorter.
The reason that the other ones work, I am not fully sure of. But see the note in the documentation here: <http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers> (the first red warning box) where it is stated that:
> You should specify all axes in the `.loc` specifier, meaning the indexer for the index and for the columns. There are some ambiguous cases where the passed indexer could be mis-interpreted as indexing *both* axes, rather than into say the MultiIndex for the rows.
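For a self-contained check of the pattern, here is a small frame with the same level names (the values are made up):

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [['A0', 'A1'], ['B0'], ['C1', 'C3'], ['D0']],
    names=['first', 'second', 'third', 'fourth'])
df = pd.DataFrame({'value': [0, 1, 2, 3]}, index=idx)

# Index both axes: the row slicer AND the column selector
sel = df.loc[pd.IndexSlice[:, :, 'C1', :], :]
print(sel['value'].tolist())  # -> [0, 2]
```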
Am I using `all` correctly? | 30,783,333 | 5 | 2015-06-11T14:13:32Z | 30,783,369 | 13 | 2015-06-11T14:14:52Z | [
"python"
] | A user asked ([Keyerror while using pandas in PYTHON 2.7](http://stackoverflow.com/questions/30758495/keyerror-while-using-pandas-in-python-2-7)) why he was having a `KeyError` while looking in a dictionary and how he could avoid this exception.
As an answer, I suggested him to check for the keys in the dictionary before. So, if he needed all the keys `['key_a', 'key_b', 'key_c']` in the `dictionary`, he could test it with:
```
if not all([x in dictionary for x in ['key_a', 'key_b', 'key_c']]):
continue
```
This way he could ignore dictionaries that didn't have the expected keys (the list of dictionaries is created out of JSON-formatted lines loaded from a file). *Refer to the original question for more details, if relevant to this question.*
A user more experienced in Python and SO, whom I would consider an authority on the matter given their career and gold badges, told me I was using `all` incorrectly. I was wondering if this is really the case (from what I can tell, it works as expected) and why, or if there is a better way to check if a couple of keys are all in a dictionary. | Yes, that will work fine, but you don't even need the list comprehension
```
if not all(x in dictionary for x in ['key_a', 'key_b', 'key_c']):
continue
```
If you have the surrounding `[]`, it will evaluate all the elements before calling `all`. If you remove them, the inner expression is a generator, and will [short-circuit](https://en.wikipedia.org/wiki/Short-circuit_evaluation) upon the first `False`. |
Can I trust the order of a dict to remain the same each time it is iterated over? | 30,787,056 | 6 | 2015-06-11T17:02:57Z | 30,787,109 | 9 | 2015-06-11T17:05:37Z | [
"python",
"list",
"dictionary",
"iteration"
] | I have the following three strings (they exist independently but are displayed here together for convenience):
```
from mx2.x.org (mx2.x.org. [198.186.238.144])
by mx.google.com with ESMTPS id g34si6312040qgg.122.2015.04.22.14.49.15
(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
Wed, 22 Apr 2015 14:49:16 -0700 (PDT)
from HQPAMAIL08.x.org (10.64.17.33) by HQPAMAIL13.x.x.org
(10.34.25.11) with Microsoft SMTP Server (TLS) id 14.2.347.0; Wed, 22 Apr
2015 17:49:13 -0400
from HQPAMAIL13.x.org ([fe80::7844:1f34:e8b2:e526]) by
HQPAMAIL08.iadb.org ([fe80::20b5:b1cb:9c01:aa86%18]) with mapi id
14.02.0387.000; Wed, 22 Apr 2015 17:49:12 -0400
```
I'm looking to populate a dict with some values based on the reversed (bottom to top) order of the strings. Specifically, for each string, I'm extracting the IP address as an index of sorts, and then the full string as the value.
Given that order is important, I decided to go with lists, and initially did something like this (pseudocode, with the above bunch of text):
```
IPs =[]
fullStrings =[]
for string in strings:
IPs.append[$theIpAddressFoundInTheString]
fullstrings.append[$theWholeString]
```
resulting in the following two lists (again, just an illustration):
```
IPs ['198.186.238.144', '10.64.17.33', 'fe80::7844:1f34:e8b2:e526']
fullstrings ['from mx2.x.org (mx2.x.org. [198.186.238.144])
by mx.google.com with ESMTPS id g34si6312040qgg.122.2015.04.22.14.49.15
(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
Wed, 22 Apr 2015 14:49:16 -0700 (PDT)', 'from HQPAMAIL08.x.org (10.64.17.33) by HQPAMAIL13.x.x.org
(10.34.25.11) with Microsoft SMTP Server (TLS) id 14.2.347.0; Wed, 22 Apr
2015 17:49:13 -0400', 'from HQPAMAIL13.x.org ([fe80::7844:1f34:e8b2:e526]) by
HQPAMAIL08.x.org ([fe80::20b5:b1cb:9c01:aa86%18]) with mapi id
14.02.0387.000; Wed, 22 Apr 2015 17:49:12 -0400']
```
This has worked fine up until a point, but now as I begin populating a **dict** with values in these lists (at hardcoded indices), comparing against values in other lists (again at hardcoded indices) etc., not only does debugging become a pain, the code becomes unsustainable.
I'm beginning to rewrite using a dict (returning a dict where the IP addresses are the keys and the full strings are the values). Then I will perform operations like:
```
for k,v in myDictOfIpsAndStrings:
anotherDict[$someHardcodedText] = k
anotherDict[$otherHardcodedText] = v
```
Here's my concern: **can I be sure that the dict, any time it is iterated over, will always be done in the order in which the dict was created?** If not, is my only option to revert back to lists (and the tedious and **brittle** length comparisons, assignments inherent in doing so) etc.?
I know that a dict is, by its very nature, unsorted. And I know of the `sorted` function, but I'm not looking to sort the keys by any descending/ascending order etc.; rather it's about maintaining (somehow) the order in which the dict was created. | > can I be sure that the dict, any time it is iterated over, will always be done in the order in which the dict was created?
No, a `dict` is unordered, and will lay out its ordering however the particular implementation decides to.
```
>>> d = {3: 'c', 2: 'b', 1: 'a'}
>>> d
{1: 'a', 2: 'b', 3: 'c'}
```
See, immediately after I created the `dict` the order changed.
If you want to ensure you have a deterministic, controllable order, you can use a [`collections.OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict)
```
>>> from collections import OrderedDict
>>> d = OrderedDict([(3, 'c'), (2, 'b'), (1, 'a')])
>>> d
OrderedDict([(3, 'c'), (2, 'b'), (1, 'a')])
```
You can still access the `OrderedDict` in the conventions you are used to
```
>>> d[3]
'c'
>>> d.get(3)
'c'
```
Note that you do not have to insert all of the elements upon creation. You can insert them one at a time if you want.
```
>>> d = OrderedDict()
>>> d[3] = 'c'
>>> d[2] = 'b'
>>> d[1] = 'a'
>>> d[4] = 'd'
>>> d
OrderedDict([(3, 'c'), (2, 'b'), (1, 'a'), (4, 'd')])
``` |
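Applied to the question's data, a sketch using the IPs from the example in bottom-to-top order (the full header strings are elided here). Worth noting: in later Python versions (3.7+) plain dicts also preserve insertion order, but at the time of this answer `OrderedDict` was the way to guarantee it:

```python
from collections import OrderedDict

ips_and_strings = OrderedDict()
ips_and_strings['fe80::7844:1f34:e8b2:e526'] = 'from HQPAMAIL13.x.org ...'
ips_and_strings['10.64.17.33'] = 'from HQPAMAIL08.x.org ...'
ips_and_strings['198.186.238.144'] = 'from mx2.x.org ...'

# Iteration order is exactly the insertion order, every time
print(list(ips_and_strings))
# -> ['fe80::7844:1f34:e8b2:e526', '10.64.17.33', '198.186.238.144']
```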
mandrill template variables not substituting | 30,787,369 | 4 | 2015-06-11T17:19:13Z | 30,930,699 | 9 | 2015-06-19T05:42:07Z | [
"python",
"email",
"templates",
"handlebars.js",
"mandrill"
] | Having issue with template variables not substituting when sending an email. I have a simple template:
`<div class="entry">
Your name is {{firstName}}
</div>`
And my python code to send an email:
```
client = mandrill.Mandrill(apikey=api_key)
my_merge_vars = [{'content': 'Dexter', 'name': 'firstName'}]
message = {'from_email': '[email protected]',
'to': [{'email': '[email protected]',
'name': 'Deborah',
'type': 'to'}
],
'subject': 'example subject',
'global_merge_vars': my_merge_vars
}
result = client.messages.send_template(template_name='test-template',
template_content=[],
message=message)
```
The email sends, however I get:
`Your name is {{firstName}}` | Make sure you specify that the merge type is handlebars. You can either do it in your account settings (Settings > Sending Defaults > Merge Language) or in your API call via the 'merge\_language' parameter. |
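A sketch of what the fixed payload from the question might look like (addresses are placeholders; `merge_language` is Mandrill's per-message way to request Handlebars — this only builds the dict and does not call the API):

```python
# Hypothetical payload mirroring the question, with the missing
# 'merge_language' key added so {{firstName}} is parsed as Handlebars
# rather than Mailchimp-style *|MERGE|* tags.
message = {
    'from_email': 'sender@example.com',        # placeholder address
    'to': [{'email': 'recipient@example.com',  # placeholder address
            'name': 'Deborah',
            'type': 'to'}],
    'subject': 'example subject',
    'merge_language': 'handlebars',            # the missing piece
    'global_merge_vars': [{'name': 'firstName', 'content': 'Dexter'}],
}

# The send itself would be unchanged from the question:
# result = client.messages.send_template(template_name='test-template',
#                                        template_content=[],
#                                        message=message)
print(message['merge_language'])  # handlebars
```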
takeOrdered descending Pyspark | 30,787,635 | 20 | 2015-06-11T17:34:16Z | 30,788,891 | 44 | 2015-06-11T18:41:07Z | [
"python",
"apache-spark"
] | I would like to sort K/V pairs by values and then take the biggest five values. I managed to do this by reversing K/V with a first map, sorting in descending order with FALSE, then reversing key/value back to the original order (second map) and taking the first 5, which are the biggest; the code is this:
```
RDD.map(lambda x:(x[1],x[0])).sortByKey(False).map(lambda x:(x[1],x[0])).take(5)
```
I know there is a takeOrdered action in PySpark, but I have only managed to sort on values (and not on keys), and I don't know how to get a descending sort:
```
RDD.takeOrdered(5,key = lambda x: x[1])
``` | Sort by keys (ascending):
```
RDD.takeOrdered(5, key = lambda x: x[0])
```
Sort by keys (descending):
```
RDD.takeOrdered(5, key = lambda x: -x[0])
```
Sort by values (ascending):
```
RDD.takeOrdered(5, key = lambda x: x[1])
```
Sort by values (descending):
```
RDD.takeOrdered(5, key = lambda x: -x[1])
``` |
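The `-x[1]` trick only works for values that can be negated; as an aside, the same top-k-by-value idea can be sketched in plain Python with `heapq.nlargest` (made-up data, no Spark needed):

```python
import heapq

pairs = [('a', 7), ('b', 2), ('c', 9), ('d', 5), ('e', 1), ('f', 8)]

# Local-Python equivalent of RDD.takeOrdered(5, key=lambda x: -x[1]):
top5 = heapq.nlargest(5, pairs, key=lambda x: x[1])
print(top5)  # [('c', 9), ('f', 8), ('a', 7), ('d', 5), ('b', 2)]
```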
ValueError: cannot reindex from a duplicate axis using isin with pandas | 30,788,061 | 2 | 2015-06-11T17:55:15Z | 34,018,827 | 7 | 2015-12-01T11:05:45Z | [
"python",
"pandas",
"dataframe",
"gotchas"
] | I am trying to sort zipcodes into various files but I keep getting
> ValueError: cannot reindex from a duplicate axis
I've read through other documentation on Stack Overflow, but I haven't been able to figure out why it is reporting a duplicate axis.
```
import csv
import pandas as pd
from pandas import DataFrame as df
fp = '/Users/User/Development/zipcodes/file.csv'
file1 = open(fp, 'rb').read()
df = pd.read_csv(fp, sep=',')
df = df[['VIN', 'Reg Name', 'Reg Address', 'Reg City', 'Reg ST', 'ZIP',
'ZIP', 'Catagory', 'Phone', 'First Name', 'Last Name', 'Reg NFS',
'MGVW', 'Make', 'Veh Model','E Mfr', 'Engine Model', 'CY2010',
'CY2011', 'CY2012', 'CY2013', 'CY2014', 'CY2015', 'Std Cnt',
]]
#reader.head(1)
df.head(1)
zipBlue = [65355, 65350, 65345, 65326, 65335, 64788, 64780, 64777, 64743,
64742, 64739, 64735, 64723, 64722, 64720]
```
Also contains `zipGreen, zipRed, zipYellow, zipLightBlue`
But did not include in example.
```
def IsInSort():
blue = df[df.ZIP.isin(zipBlue)]
green = df[df.ZIP.isin(zipGreen)]
red = df[df.ZIP.isin(zipRed)]
yellow = df[df.ZIP.isin(zipYellow)]
LightBlue = df[df.ZIP.isin(zipLightBlue)]
def SaveSortedZips():
blue.to_csv('sortedBlue.csv')
green.to_csv('sortedGreen.csv')
red.to_csv('sortedRed.csv')
yellow.to_csv('sortedYellow.csv')
LightBlue.to_csv('SortedLightBlue.csv')
IsInSort()
SaveSortedZips()
```
> 1864 # trying to reindex on an axis with duplicates 1865
> if not self.is\_unique and len(indexer):
> -> 1866 raise ValueError("cannot reindex from a duplicate axis") 1867 1868 def reindex(self, target, method=None,
> level=None, limit=None):
>
> ValueError: cannot reindex from a duplicate axis | I'm pretty sure your problem is related to your mask
```
df = df[['VIN', 'Reg Name', 'Reg Address', 'Reg City', 'Reg ST', 'ZIP',
'ZIP', 'Catagory', 'Phone', 'First Name', 'Last Name', 'Reg NFS',
'MGVW', 'Make', 'Veh Model','E Mfr', 'Engine Model', 'CY2010',
'CY2011', 'CY2012', 'CY2013', 'CY2014', 'CY2015', 'Std Cnt',
]]
```
`'ZIP'` is in there twice. Removing one of them should solve the problem.
The error `ValueError: cannot reindex from a duplicate axis` is one of those **very cryptic pandas errors** which simply does not tell you what the problem is.
The error is often related to two columns being named the same either before or after (internally in) the operation. |
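A quick way to spot the offender before pandas raises the cryptic error (a sketch, assuming pandas is installed; the data is made up):

```python
import pandas as pd

# Toy frame with 'ZIP' selected twice, as in the question
df = pd.DataFrame([[1, 2, 3]], columns=['VIN', 'ZIP', 'ZIP'])

# Index.duplicated flags repeated column names
dupes = df.columns[df.columns.duplicated()].tolist()
print(dupes)  # ['ZIP']
```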
How to filter a pandas dataframe by cells that DO NOT contain a substring? | 30,791,265 | 3 | 2015-06-11T20:53:52Z | 30,791,401 | 7 | 2015-06-11T21:03:34Z | [
"python",
"pandas"
] | I want to filter a dataframe to find rows which do not contain the string 'site'.
I know how to filter for rows which do contain 'site' but have not been able to get the reverse working. Here is what I have so far:
```
def rbs(): #removes blocked sites
frame = fill_rate()
mask = frame[frame['Media'].str.contains('Site')==True]
frame = (frame != mask)
return frame
```
But this returns an error, of course. | Just do `frame[~frame['Media'].str.contains('Site')]`
The `~` negates the boolean condition
So your method becomes:
```
def rbs(): #removes blocked sites
frame = fill_rate()
return frame[~frame['Media'].str.contains('Site')]
```
**EDIT**
it looks like you have `NaN` values judging by your errors, so you have to filter these out first; your method then becomes:
```
def rbs(): #removes blocked sites
frame = fill_rate()
frame = frame[frame['Media'].notnull()]
return frame[~frame['Media'].str.contains('Site')]
```
the `notnull` will filter out the missing values |
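As an alternative sketch: `str.contains` takes an `na=` argument; with `na=True`, missing values count as matches, so the `~` negation drops them along with the 'Site' rows in a single step (made-up data, pandas assumed installed):

```python
import pandas as pd

frame = pd.DataFrame({'Media': ['Blocked Site', 'Banner', None, 'Video']})

# na=True: NaN rows are treated as matches, so ~mask removes them too
result = frame[~frame['Media'].str.contains('Site', na=True)]
print(result['Media'].tolist())  # ['Banner', 'Video']
```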
Python multiple condition IN string | 30,798,167 | 9 | 2015-06-12T08:01:32Z | 30,798,228 | 8 | 2015-06-12T08:04:49Z | [
"python"
] | How can I merge the conditions into a list-based form?
```
lines = [line for line in open('text.txt')if
'|E|' in line and
'GetPastPaymentInfo' in line and
'CheckData' not in line and
'UpdatePrintStatus' not in line
]
```
like
```
lines = [line for line in open('text.txt')if
['|E|','GetPastPaymentInfo'] in line and
['CheckData','UpdatePrintStatus'] not in line]
``` | You can use a [generator expression](https://www.python.org/dev/peps/pep-0289/) within the [`all`](https://docs.python.org/2/library/functions.html#all) function to check membership for all elements:
```
lines = [line for line in open('text.txt') if
all(i in line for i in ['|E|','GetPastPaymentInfo'])and
all(j not in line for j in ['CheckData','UpdatePrintStatus'])]
```
Or, if you want to check for whole words, you can split the lines and use the [`intersection`](https://docs.python.org/2/library/stdtypes.html#set.intersection) method of `set` 1:
```
lines = [line for line in open('text.txt') if
{'|E|','GetPastPaymentInfo'}.intersection(line.split()) and not {'CheckData','UpdatePrintStatus'}.intersection(line.split())]
```
Note that you need to put your words within a `set` instead of list.
---
1) Note that since a `set` object uses a *[hash-table](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB0QFjAAahUKEwi7vraygorGAhXL7RQKHW0_APw&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FHash_table&ei=Cr56VbuRJsvbU-3-gOAP&usg=AFQjCNFWP9iBvqhV_h73-y04O6NM6tIsNQ&sig2=TamEpxV8twqQPR5XIfk_MA&bvm=bv.95515949,d.ZGU)* for storing its elements, membership checking is O(1), which is more efficient than for a `list`, where it is O(N). |
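A self-contained version of the first form against a few made-up log lines (no `text.txt` required):

```python
text = [
    "2015-01-01 |E| GetPastPaymentInfo ok",
    "2015-01-02 |E| GetPastPaymentInfo CheckData",
    "2015-01-03 |W| GetPastPaymentInfo ok",
    "2015-01-04 |E| UpdatePrintStatus GetPastPaymentInfo",
]

lines = [line for line in text
         if all(i in line for i in ['|E|', 'GetPastPaymentInfo'])
         and all(j not in line for j in ['CheckData', 'UpdatePrintStatus'])]

print(lines)  # only the first line passes both checks
```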
How to increase the performance for estimating `Pi` in Python | 30,807,010 | 5 | 2015-06-12T15:37:25Z | 30,807,145 | 8 | 2015-06-12T15:44:48Z | [
"python",
"performance",
"pi"
] | I have written the following code in Python in order to estimate the value of `Pi`. It is called the [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) method. Obviously, by increasing the number of samples the code becomes slower, and I assume that the slowest part of the code is the sampling part.
How can I make it faster?
```
from __future__ import division
import numpy as np
a = 1
n = 1000000
s1 = np.random.uniform(0,a,n)
s2 = np.random.uniform(0,a,n)
ii=0
jj=0
for item in range(n):
if ((s1[item])**2 + (s2[item])**2) < 1:
ii = ii + 1
print float(ii*4/(n))
```
Do you suggest other (presumably faster) codes? | The bottleneck here is actually your `for` loop. Python `for` loops are relatively slow, so if you need to iterate over a million items, you can gain a lot of speed by avoiding them altogether. In this case, it's quite easy. Instead of this:
```
for item in range(n):
if ((s1[item])**2 + (s2[item])**2) < 1:
ii = ii + 1
```
do this:
```
ii = ((s1 ** 2 + s2 ** 2) < 1).sum()
```
This works because `numpy` has built-in support for optimized array operations. The looping occurs in `c` instead of python, so it's much faster. I did a quick test so you can see the difference:
```
>>> def estimate_pi_loop(x, y):
... total = 0
... for i in xrange(len(x)):
... if x[i] ** 2 + y[i] ** 2 < 1:
... total += 1
... return total * 4.0 / len(x)
...
>>> def estimate_pi_numpy(x, y):
... return ((x ** 2 + y ** 2) < 1).sum() * 4.0 / len(x)
...
>>> %timeit estimate_pi_loop(x, y)
1 loops, best of 3: 3.33 s per loop
>>> %timeit estimate_pi_numpy(x, y)
100 loops, best of 3: 10.4 ms per loop
```
Here are a few examples of the kinds of operations that are possible, just so you have a sense of how this works.
Squaring an array:
```
>>> a = numpy.arange(5)
>>> a ** 2
array([ 0, 1, 4, 9, 16])
```
Adding arrays:
```
>>> a + a
array([0, 2, 4, 6, 8])
```
Comparing arrays:
```
>>> a > 2
array([False, False, False, True, True], dtype=bool)
```
Summing boolean values:
```
>>> (a > 2).sum()
2
```
As you probably realize, there are faster ways to estimate Pi, but I will admit that I've always admired the simplicity and effectiveness of this method. |
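Putting the vectorized version together as one reproducible estimate (a sketch; the fixed seed is only there to make runs repeatable, and numpy is assumed to be installed):

```python
import numpy as np

rng = np.random.RandomState(0)  # fixed seed for reproducibility
n = 1000000
x = rng.uniform(0, 1, n)
y = rng.uniform(0, 1, n)

# Fraction of points inside the quarter circle, scaled by 4
pi_est = ((x ** 2 + y ** 2) < 1).sum() * 4.0 / n
print(pi_est)  # close to 3.1416, up to Monte Carlo noise
```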
Python PEP 273 and Amazon BotoCore | 30,808,297 | 10 | 2015-06-12T16:47:56Z | 30,847,419 | 7 | 2015-06-15T14:12:47Z | [
"python",
"python-2.7",
"python-import",
"pep",
"botocore"
] | On a small embedded Linux device with limited space, I am trying to place the large [10 Mb] Amazon (AWS) BotoCore library (<https://github.com/boto/botocore>) in a zip file to compress it and then import it in my Python Scripts using zipimport as described in PEP273 (<https://www.python.org/dev/peps/pep-0273/>).
I modified my script to have the following lines at the beginning:
```
## Use zip imports
import sys
sys.path.insert(0, '/usr/lib/python2.7/site-packages/site-packages.zip')
```
The site-packages zip file only has botocore in it and site-packages directory itself has the other modules I use, but excluding botocore, in it.
Here is a listing of that directory:
```
/usr/lib/python2.7/site-packages >> ls -rlt
total 1940
-rw-rw-r-- 1 root root 32984 Jun 8 12:22 six.pyc
-rw-r--r-- 1 root root 119 Jun 11 07:43 README
drwxrwxr-x 2 root root 4096 Jun 11 07:43 requests-2.4.3-py2.7.egg-info
drwxrwxr-x 2 root root 4096 Jun 11 07:43 six-1.9.0-py2.7.egg-info
drwxrwxr-x 2 root root 4096 Jun 11 07:43 python_dateutil-2.4.2-py2.7.egg-info
drwxrwxr-x 2 root root 4096 Jun 11 07:43 jmespath-0.7.0-py2.7.egg-info
-rw-rw-r-- 1 root root 2051 Jun 11 07:44 pygtk.pyc
-rw-rw-r-- 1 root root 1755 Jun 11 07:44 pygtk.pyo
-rw-rw-r-- 1 root root 8 Jun 11 07:44 pygtk.pth
drwxrwxr-x 2 root root 4096 Jun 11 07:44 futures-2.2.0-py2.7.egg-info
drwxrwxr-x 3 root root 4096 Jun 11 07:44 gtk-2.0
drwxrwxr-x 3 root root 4096 Jun 11 07:44 requests
drwxrwxr-x 3 root root 4096 Jun 11 07:44 dbus
drwxrwxr-x 3 root root 4096 Jun 11 07:44 dateutil
drwxrwxr-x 2 root root 4096 Jun 11 07:44 jmespath
drwxrwxr-x 3 root root 4096 Jun 11 07:44 concurrent
drwxrwxr-x 2 root root 4096 Jun 11 07:44 futures
drwxrwxr-x 2 root root 4096 Jun 12 10:42 gobject
drwxrwxr-x 2 root root 4096 Jun 12 10:42 glib
-rwxr-xr-x 1 root root 5800 Jun 12 10:42 _dbus_glib_bindings.so
-rwxr-xr-x 1 root root 77680 Jun 12 10:42 _dbus_bindings.so
-rwxr-xr-x 1 root root 1788623 Jun 12 11:39 site-packages.zip
```
And here are the contents of that zipfile:

My problem is that I can import boto3 and import botocore just find, but when I try to use some API methods contained therein, I get exceptions like this:
```
>> Unknown component: enpoint_resolver
```
or
```
>> Unable to load data for: aws/_endpoints!
```
If I remove the zip file after uncompressing it in the site-packages directory and reboot - my script works fine.
How can I leverage zipfile imports to compress this huge library? Thanks! | Unfortunately, this just isn't going to work.
PEP 273 requires library authors to follow certain rules, which this package does not. In particular, it [makes use of `__file__`](https://github.com/boto/botocore/search?utf8=%E2%9C%93&q=__file__) rather than [`pkgutil.get_data()`](https://docs.python.org/3/library/pkgutil.html#pkgutil.get_data) or an equivalent API. As a result, the files must actually exist in the filesystem.
You might try using FUSE to mount the .zip file in the filesystem, so it appears to Python as if it's uncompressed, without actually taking up all that disk space. Just looking through Google, I came up with [fuse-zip](https://bitbucket.org/agalanin/fuse-zip), which looks like it could be suitable. You'll want to run some benchmarks to ensure it performs well on your system. |
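For what it's worth, the zipimport machinery itself handles pure-Python modules that don't rely on real file paths; a minimal stdlib-only sketch with a hypothetical module name:

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny pure-Python module inside a zip archive
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, 'site-packages.zip')
with zipfile.ZipFile(zip_path, 'w') as zf:
    zf.writestr('tinymod.py', 'def greet():\n    return "hello from the zip"\n')

# Import works as long as the module never opens files via __file__
sys.path.insert(0, zip_path)
import tinymod

print(tinymod.greet())   # hello from the zip
print(tinymod.__file__)  # a path inside the .zip, not a real file
```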
How to select columns from dataframe by regex | 30,808,430 | 9 | 2015-06-12T16:55:19Z | 30,808,571 | 17 | 2015-06-12T17:04:16Z | [
"python",
"python-2.7",
"pandas"
] | I have a dataframe in python pandas. The structure of the dataframe is as the following:
```
a b c d1 d2 d3
10 14 12 44 45 78
```
I would like to select the columns which begin with d. Is there a simple way to achieve this in python? | You can use [`DataFrame.filter`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html) this way:
```
import pandas as pd
df = pd.DataFrame(np.array([[2,4,4],[4,3,3],[5,9,1]]),columns=['d','t','didi'])
>>
d t didi
0 2 4 4
1 4 3 3
2 5 9 1
df.filter(regex=("d.*"))
>>
d didi
0 2 4
1 4 3
2 5 1
```
The idea is to select columns by `regex` |
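Two hedged refinements: `regex='^d'` anchors the match to the start of the column name (`"d.*"` would also match a `d` appearing later in a name, since `filter` searches labels rather than full-matching them), and `str.startswith` on the columns avoids regex entirely (pandas assumed installed):

```python
import pandas as pd

df = pd.DataFrame([[10, 14, 12, 44, 45, 78]],
                  columns=['a', 'b', 'c', 'd1', 'd2', 'd3'])

anchored = df.filter(regex='^d')                       # anchored regex
by_prefix = df.loc[:, df.columns.str.startswith('d')]  # no regex at all

print(list(anchored.columns))   # ['d1', 'd2', 'd3']
print(list(by_prefix.columns))  # ['d1', 'd2', 'd3']
```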
Getting a strange result when comparing 2 dictionaries in python | 30,808,586 | 2 | 2015-06-12T17:05:30Z | 30,808,622 | 7 | 2015-06-12T17:07:58Z | [
"python",
"python-2.7"
] | So I have a pair of dictionaries in python: (both have exactly the same keys)
```
defaults = {'ToAlpha': 4, 'ToRed': 4, 'ToGreen': 4, 'ToBlue': 4,}
bridged = {'ToAlpha': 3, 'ToRed': 0, 'ToGreen': 1, 'ToBlue': 2,}
```
When I iterate through one of the dictionaries I do a quick check to see if the other dict has the same key, if it does then print it.
```
for key, value in defaults.iteritems():
if bridged.get(key):
print key
```
What I would expect to see is:
```
ToAlpha
ToRed
ToGreen
ToBlue
```
But for some reason, 'ToRed' is not printed. I must be missing something really simple here, but have no idea what might be causing this.
```
bridged.get('ToRed')
```
and
```
defaults.get('ToRed')
```
both work independently, but when iterated through the loop... Nothing!
Any ideas? | `0` is falsy. Use `in` to check for containment.
```
if key in bridged:
``` |
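The gotcha in two lines — `dict.get` returning any falsy value (`0`, `''`, `None`, `[]`) looks the same as a missing key in a truth test, which is exactly why `in` is the right check:

```python
bridged = {'ToAlpha': 3, 'ToRed': 0, 'ToGreen': 1, 'ToBlue': 2}

truthy_keys = [k for k in bridged if bridged.get(k)]  # silently drops 'ToRed'
present_keys = [k for k in bridged if k in bridged]   # keeps all four keys

print(sorted(truthy_keys))   # ['ToAlpha', 'ToBlue', 'ToGreen']
print(sorted(present_keys))  # ['ToAlpha', 'ToBlue', 'ToGreen', 'ToRed']
```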
Error when using classify in caffe | 30,808,735 | 4 | 2015-06-12T17:15:04Z | 30,809,008 | 11 | 2015-06-12T17:31:47Z | [
"python",
"python-2.7",
"caffe"
] | I am using caffe in python to classify. I got the code from [here](http://www.openu.ac.il/home/hassner/projects/cnn_agegender/). From it, I just use simple code such as
```
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
mean_filename='./mean.binaryproto'
proto_data = open(mean_filename, "rb").read()
a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
mean = caffe.io.blobproto_to_array(a)[0]
age_net_pretrained='./age_net.caffemodel'
age_net_model_file='./deploy_age.prototxt'
age_net = caffe.Classifier(age_net_model_file, age_net_pretrained,
mean=mean,
channel_swap=(2,1,0),
raw_scale=255,
image_dims=(256, 256))
```
However, I got error such as
```
Traceback (most recent call last):
File "cnn_age_gender_demo.py", line 25, in
image_dims=(256, 256))
File "/home/john/Downloads/caffe/python/caffe/classifier.py", line 34, in init
self.transformer.set_mean(in_, mean)
File "/home/john/Downloads/caffe/python/caffe/io.py", line 255, in set_mean
raise ValueError('Mean shape incompatible with input shape.')
ValueError: Mean shape incompatible with input shape.
```
Could you help me to resolve it? Thanks. | Go to lines 253-254 in caffe/python/caffe/io.py
Replace
```
if ms != self.inputs[in_][1:]:
raise ValueError('Mean shape incompatible with input shape.')
```
By
```
if ms != self.inputs[in_][1:]:
print(self.inputs[in_])
in_shape = self.inputs[in_][1:]
m_min, m_max = mean.min(), mean.max()
normal_mean = (mean - m_min) / (m_max - m_min)
mean = resize_image(normal_mean.transpose((1,2,0)),in_shape[1:]).transpose((2,0,1)) * (m_max - m_min) + m_min
#raise ValueError('Mean shape incompatible with input shape.')
```
Rebuild. Hope it helps. |
How to use Python 3.4's enums without significant slowdown? | 30,812,793 | 7 | 2015-06-12T22:00:28Z | 30,812,818 | 10 | 2015-06-12T22:03:45Z | [
"python",
"performance",
"enums",
"python-3.4"
] | I was writing a tic-tac-toe game and using an Enum to represent the three outcomes -- `lose`, `draw`, and `win`. I thought it would be better style than using the strings `("lose", "win", "draw")` to indicate these values. But using enums gave me a significant performance hit.
Here's a minimal example, where I simply reference either `Result.lose` or the literal string `lose`.
```
import enum
import timeit
class Result(enum.Enum):
lose = -1
draw = 0
win = 1
>>> timeit.timeit('Result.lose', 'from __main__ import Result')
1.705788521998329
>>> timeit.timeit('"lose"', 'from __main__ import Result')
0.024598151998361573
```
This is much slower than simply referencing a global variable.
```
k = 12
>>> timeit.timeit('k', 'from __main__ import k')
0.02403248500195332
```
My questions are:
* I know that global lookups are much slower than local lookups in Python. But why are enum lookups even worse?
* How can enums be used effectively without sacrificing performance? Enum lookup turned out to be completely dominating the runtime of my tic-tac-toe program. We could save local copies of the enum in every function, or wrap everything in a class, but both of those seem awkward. | You are timing the timing loop. A string literal on its own is *ignored entirely*:
```
>>> import dis
>>> def f(): "lose"
...
>>> dis.dis(f)
1 0 LOAD_CONST 1 (None)
3 RETURN_VALUE
```
That's a function that does nothing *at all*. So the timing loop takes `0.024598151998361573` seconds to run 1 million times.
In this case, the string actually became the docstring of the `f` function:
```
>>> f.__doc__
'lose'
```
but CPython generally will omit string literals in code if not assigned or otherwise part of an expression:
```
>>> def f():
... 1 + 1
... "win"
...
>>> dis.dis(f)
2 0 LOAD_CONST 2 (2)
3 POP_TOP
3 4 LOAD_CONST 0 (None)
7 RETURN_VALUE
```
Here the `1 + 1` as folded into a constant (`2`), and the string literal is once again gone.
As such, you cannot compare this to looking up an attribute on an `enum` object. Yes, looking up an attribute takes cycles. But so does looking up another variable. If you really are worried about performance, you can always *cache* the attribute lookup:
```
>>> import timeit
>>> import enum
>>> class Result(enum.Enum):
... lose = -1
... draw = 0
... win = 1
...
>>> timeit.timeit('outcome = Result.lose', 'from __main__ import Result')
1.2259576459764503
>>> timeit.timeit('outcome = lose', 'from __main__ import Result; lose = Result.lose')
0.024848614004440606
```
In `timeit` tests all variables are locals, so both `Result` and `lose` are local lookups.
`enum` attribute lookups do take a little more time than 'regular' attribute lookups:
```
>>> class Foo: bar = 'baz'
...
>>> timeit.timeit('outcome = Foo.bar', 'from __main__ import Foo')
0.04182224802207202
```
That's because the `enum` metaclass includes a [specialised `__getattr__` hook](https://hg.python.org/cpython/file/c16d3f5af23f/Lib/enum.py#l241) that is called each time you look up an attribute; attributes of an `enum` class are looked up in a specialised dictionary rather than the class `__dict__`. Both executing that hook method and the additional attribute lookup (to access the map) take additional time:
```
>>> timeit.timeit('outcome = Result._member_map_["lose"]', 'from __main__ import Result')
0.25198313599685207
>>> timeit.timeit('outcome = map["lose"]', 'from __main__ import Result; map = Result._member_map_')
0.14024519600206986
```
In a game of Tic-Tac-Toe you don't generally worry about what comes down to insignificant timing differences. Not when the *human player* is orders of magnitude slower than your computer. That human player is not going to notice the difference between 1.2 microseconds or 0.024 microseconds. |
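One hedged pattern for hot loops that keeps the enum's readability while avoiding the repeated metaclass lookup: bind the members once, for example via default arguments (a sketch, not the only way to cache):

```python
import enum

class Result(enum.Enum):
    lose = -1
    draw = 0
    win = 1

def score(outcomes, lose=Result.lose, win=Result.win):
    # Default arguments are evaluated once, so lose/win are fast locals here
    total = 0
    for o in outcomes:
        if o is win:
            total += 1
        elif o is lose:
            total -= 1
    return total

print(score([Result.win, Result.lose, Result.win, Result.draw]))  # 1
```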
How do I set up a virtual environment with Flask using conda? | 30,815,337 | 2 | 2015-06-13T05:11:25Z | 30,815,534 | 7 | 2015-06-13T05:44:08Z | [
"python",
"flask",
"virtualization",
"anaconda",
"conda"
] | I wish to set up a virtual environment that I can use to develop web applications using the Flask framework for Python (3.4.2, Mac OS). I was given the instructions on how to do that [here](http://www.enigmeta.com/2012/08/16/starting-flask/), using the virtualenv. However, trying to follow these instructions I ran into a problem: I have Python installed via [Anaconda](http://continuum.io/downloads#all), and upon trying:
```
sudo easy_install virtualenv
```
I am warned that I should be doing this with the already-installed `conda` package instead. I can't imagine that the conda way of doing things is much harder, but I also don't want to get bogged down with reading the documentation, because then I might not emerge back out of it again... So my question is, what's a quick way of setting up a virtual environment with Flask using Conda? And how can I then add more dependencies into this mix? | Your mileage may vary, but the docs tend to be where the answers are.
```
conda create -n my_flask_env
source activate my_flask_env
conda install condastuff
pip install otherstuff
``` |