title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Loading text file containing both float and string using numpy.loadtxt | 23,546,349 | 12 | 2014-05-08T15:38:46Z | 23,546,432 | 22 | 2014-05-08T15:42:23Z | [
"python",
"python-2.7",
"python-3.x",
"numpy"
] | I have a text file: `data.txt` which contains
```
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
5.8,2.7,4.1,1.0,Iris-versicolor
6.2,2.2,4.5,1.5,Iris-versicolor
6.4,3.1,5.5,1.8,Iris-virginica
6.0,3.0,4.8,1.8,Iris-virginica
```
How do I load this data using `numpy.loadtxt()` so that I get a numpy array after ... | If you use [np.genfromtxt](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html), you could specify `dtype=None`, which will tell `genfromtxt` to intelligently guess the dtype of each column. Most conveniently, it relieves you of the burden of specifying the number of bytes required for the string ... |
Python, counter atomic increment | 23,547,604 | 10 | 2014-05-08T16:37:44Z | 23,547,698 | 9 | 2014-05-08T16:41:24Z | [
"python"
] | How can I translate the following code from Java to Python?
```
AtomicInteger cont = new AtomicInteger(0);
int value = cont.getAndIncrement();
``` | Most likely with a `threading.Lock` around any usage of that value. There's no atomic modification in Python unless you use PyPy (if you do, have a look at `__pypy__.thread.atomic` in the STM version). |
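The lock-based translation the answer describes can be sketched as follows (`AtomicCounter` and its method name are illustrative, not part of the standard library):

```python
import threading

class AtomicCounter:
    """A minimal lock-based stand-in for Java's AtomicInteger."""
    def __init__(self, initial=0):
        self._value = initial
        self._lock = threading.Lock()

    def get_and_increment(self):
        # The lock makes the read-then-write a single indivisible step,
        # even when many threads call this concurrently.
        with self._lock:
            value = self._value
            self._value += 1
            return value

cont = AtomicCounter(0)
value = cont.get_and_increment()  # returns 0; the counter now holds 1
```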
Python, counter atomic increment | 23,547,604 | 10 | 2014-05-08T16:37:44Z | 27,062,830 | 16 | 2014-11-21T13:46:18Z | [
"python"
] | How can I translate the following code from Java to Python?
```
AtomicInteger cont = new AtomicInteger(0);
int value = cont.getAndIncrement();
``` | [`itertools.count`](https://docs.python.org/2/library/itertools.html#itertools.count) returns an iterator which will perform the equivalent to `getAndIncrement()` on each iteration.
Example:
```
import itertools
cont = itertools.count()
value = cont.next()
``` |
How do I compile Python 3.4 with custom OpenSSL? | 23,548,188 | 9 | 2014-05-08T17:04:53Z | 23,550,964 | 20 | 2014-05-08T19:36:04Z | [
"python",
"gcc"
] | I have my own OpenSSL installation in a non-standard location (`/my/path` for the sake of this example) and I want Python 3.4 to build against that when I compile it against source. What I tried is this (directories abbreviated)
```
CPPFLAGS="-I/my/path/include -I/my/path/include/openssl" ./configure --prefix=/my/path... | I managed to figure it out after a lot of hair-pulling. It was a bunch of environment variables... I think I might have done a little overkill, but this basically worked:
```
# OpenSSL 1.0.1g
./config shared --prefix=/my/path --openssldir=/my/path/openssl
make
make install
# Python 3.4
export LDFLAGS="-L/my/path/lib/... |
check if a value exists in pandas dataframe index | 23,549,231 | 23 | 2014-05-08T17:59:49Z | 23,549,599 | 44 | 2014-05-08T18:19:30Z | [
"python",
"pandas",
"ipython"
] | I am sure there is an obvious way to do this but can't think of anything slick right now.
Basically, instead of raising an exception I would like to get TRUE or FALSE to see if a value exists in the pandas `df` index.
```
df = pandas.DataFrame({'test':[1,2,3,4]}, index=['a','b','c','d'])
df.loc['g'] # (should give False)
``... | This should do the trick
```
'g' in df.index
``` |
check if a value exists in pandas dataframe index | 23,549,231 | 23 | 2014-05-08T17:59:49Z | 26,721,808 | 11 | 2014-11-03T19:41:25Z | [
"python",
"pandas",
"ipython"
] | I am sure there is an obvious way to do this but can't think of anything slick right now.
Basically, instead of raising an exception I would like to get TRUE or FALSE to see if a value exists in the pandas `df` index.
```
df = pandas.DataFrame({'test':[1,2,3,4]}, index=['a','b','c','d'])
df.loc['g'] # (should give False)
``... | Just for reference, as it was something I was looking for, you can test for presence within the values or the index by appending the `.values` attribute, e.g.
```
g in df.<your selected field>.values
g in df.index.values
```
I find that adding the ".values" to get a simple list or ndarray out makes exist or "in" checks ... |
FigureCanvasAgg' object has no attribute 'invalidate' ? python plotting | 23,550,056 | 14 | 2014-05-08T18:43:40Z | 25,060,811 | 18 | 2014-07-31T13:58:30Z | [
"python",
"matplotlib",
"pandas",
"financial"
] | I've been following 'python for data analysis'. On pg. 345, you get to this code to plot returns across a variety of stocks. However, the plotting function does not work for me. I get
FigureCanvasAgg' object has no attribute 'invalidate' ?
```
names = ['AAPL','MSFT', 'DELL', 'MS', 'BAC', 'C'] #goog and SF did not work... | I found this error to be due to a combination of:
* using pandas plotting with a series or dataframe member method
* plotting with a date index
* using `%matplotlib inline` magic in ipython
* importing the pylab module **before** the matplotlib magic
So the following will fail on a **newly started kernel** in an ipyt... |
Convert SqlAlchemy orm result to dict | 23,554,119 | 6 | 2014-05-08T23:11:22Z | 23,620,884 | 8 | 2014-05-13T00:10:45Z | [
"python",
"json",
"sqlalchemy"
] | How to convert SQLAlchemy orm object result to JSON format?
Currently I am using sqlalchemy reflection to reflect tables from the DB.
Consider I have a User table and an Address table I am reflecting from the DB.
The user entity has one to one relationship with the address entity.
Below is the code to reflect the table ... | You can use the relationships property of the mapper. The code choices depend on how you want to map your data and how your relationships look. If you have a lot of recursive relationships, you may want to use a max\_depth counter. My example below uses a set of relationships to prevent a recursive loop. You could elim... |
How do I terminate a flask app that's running as a service? | 23,554,644 | 3 | 2014-05-09T00:00:38Z | 23,575,591 | 8 | 2014-05-09T23:44:33Z | [
"python",
"windows",
"service",
"flask"
] | I was able to get my Flask app running as a service thanks to [Is it possible to run a Python script as a service in Windows? If possible, how?](http://stackoverflow.com/questions/32404/is-it-possible-to-run-a-python-script-as-a-service-in-windows-if-possible-how), but when it comes to stopping it I cannot. I have to t... | You can stop the Werkzeug web server gracefully before you stop the Win32 server. Example:
```
from flask import request
def shutdown_server():
func = request.environ.get('werkzeug.server.shutdown')
if func is None:
raise RuntimeError('Not running with the Werkzeug Server')
func()
@app.route('/sh... |
Pandas date offset and conversion | 23,554,660 | 4 | 2014-05-09T00:03:21Z | 23,554,817 | 7 | 2014-05-09T00:23:14Z | [
"python",
"datetime",
"pandas"
] | I am trying to interpret a field as a date, change the date to represent the month the date appears in, offset the date by a month and then represent it as a date without a timestamp. I have ended up with this, which looks and feels too unwieldy:
```
df['DATE'].apply( lambda d: pd.to_datetime(pd.to_datetime(d).to_... | Well, you can avoid the apply and do it vectorized (I think that makes it a bit nicer):
```
print df
date x1
0 2010-01-01 00:00:00 10
1 2010-02-01 00:00:00 10
2 2010-03-01 00:00:00 10
3 2010-04-01 00:00:00 10
4 2010-04-01 00:00:00 5
5 2010-05-01 00:00:00 5
df['date'] = (pd.to_datetim... |
Why does pycharm propose to change method to static | 23,554,872 | 35 | 2014-05-09T00:31:16Z | 23,555,313 | 28 | 2014-05-09T01:35:18Z | [
"python",
"pycharm"
] | The new PyCharm release (3.1.3 Community Edition) proposes to convert methods that don't work with the current object's state to static ones.

What is the practical reason for that? Some kind of micro-performance(-or-memory)-optimization? | PyCharm "thinks" that you might have *wanted* to have a static method, but you forgot to declare it to be static.
PyCharm proposes this because your function does not change the class's state - e.g. you don't *use* `self` in its body - and might be intended to be callable without object creation. |
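As an illustration of the rule PyCharm applies (the class and method names here are invented for the example):

```python
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        # Uses instance state via self -> must stay a regular method.
        return "Hello, " + self.name

    def shout(self, text):
        # 'self' is never used here, which is exactly what triggers
        # PyCharm's "method may be static" suggestion.
        return text.upper() + "!"

    @staticmethod
    def shout_static(text):
        # The declared-static version PyCharm proposes: callable
        # without creating a Greeter instance.
        return text.upper() + "!"

g = Greeter("Ada")
g.greet()                   # "Hello, Ada"
Greeter.shout_static("hi")  # "HI!" without any object creation
```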
Why does pycharm propose to change method to static | 23,554,872 | 35 | 2014-05-09T00:31:16Z | 31,788,601 | 7 | 2015-08-03T13:27:07Z | [
"python",
"pycharm"
] | The new PyCharm release (3.1.3 Community Edition) proposes to convert methods that don't work with the current object's state to static ones.

What is the practical reason for that? Some kind of micro-performance(-or-memory)-optimization? | Agreed with @jolvi, @ArundasR, and others, the warning happens on a member function that doesn't use `self`.
Workaround: if you're sure PyCharm is wrong, that the function should not be a `@staticmethod`, and if you value zero warnings, you can make this one go away like so:
```
def bar(self):
self.is_not_used()
... |
how to explain the decision tree from scikit-learn | 23,557,545 | 10 | 2014-05-09T05:58:01Z | 23,557,610 | 15 | 2014-05-09T06:02:27Z | [
"python",
"numpy",
"scipy",
"scikit-learn",
"decision-tree"
] | I have two problems with understanding the result of a decision tree from scikit-learn. For example, this is one of my decision trees:

My question is: how can I use the tree?
The first question is: if a sample satisfies the condition, then it ... | The `value` line in each box is telling you how many samples at that node fall into each category, in order. That's why, in each box, the numbers in `value` add up to the number shown in `sample`. For instance, in your red box, 91+212+113=416. So this means if you reach this node, there were 91 data points in category ... |
Split a string into a list in Python | 23,559,626 | 3 | 2014-05-09T08:10:08Z | 23,559,780 | 10 | 2014-05-09T08:19:28Z | [
"python",
"list",
"split"
] | I have a text file that I want to put into lists.
The text file looks like this:
```
New Distribution Votes Rank Title
0000000125 1196672 9.2 The Shawshank Redemption (1994)
0000000125 829707 9.2 The Godfather (1972)
0000000124 547511 9.0 The Godfather: Part II (1974)
0000000124 1160... | `split()` takes an optional `maxsplit` argument:
[In Python 3](https://docs.python.org/3/library/stdtypes.html#str.split):
```
>>> s = " 0000000125 1196672 9.2 The Shawshank Redemption (1994)"
>>> s.split()
['0000000125', '1196672', '9.2', 'The', 'Shawshank', 'Redemption', '(1994)']
>>> s.split(maxsplit=3)
['0... |
Websockets with Tornado: Get access from the "outside" to send messages to clients | 23,562,465 | 3 | 2014-05-09T10:35:08Z | 23,567,870 | 8 | 2014-05-09T15:00:11Z | [
"python",
"websocket",
"webserver",
"tornado"
] | I'm starting to get into WebSockets as way to push data from a server to connected clients. Since I use python to program any kind of logic, I looked at Tornado so far. The snippet below shows the most basic example one can find everywhere on the Web:
```
import tornado.httpserver
import tornado.websocket
import torna... | This is building on Hans Then's example. Hopefully it helps you understand how you can have your server initiate communication with your clients without the clients triggering the interaction.
Here's the server:
```
#!/usr/bin/python
import datetime
import tornado.httpserver
import tornado.websocket
import tornado.i... |
Chartit is not a valid tag library:Django | 23,564,529 | 3 | 2014-05-09T12:19:59Z | 25,839,210 | 13 | 2014-09-15T00:03:18Z | [
"python",
"django",
"django-templates"
] | My **views.py** file is as follows:
```
from django.shortcuts import render, render_to_response
from chartit import DataPool, Chart
from chartit.chartdata import DataPool
from weather.models import MonthlyWeatherByCity
import simplejson
from chartit import DataPool, Chart
def weather_chart_view(request):
ds=DataP... | There is a fix for the problem on the github page of the project. Do a `pip install simplejson` and then locate the file `chartit/templatetags/chartit.py` in the chartit module and replace the line of simplejson import as shown below.
```
from django import template
-from django.utils import simplejson
+import simplej... |
PyQt4 crashed on exit | 23,565,702 | 3 | 2014-05-09T13:20:32Z | 23,576,609 | 7 | 2014-05-10T02:50:35Z | [
"python",
"qt4",
"pyqt4"
] | The window will show, but crashed on exit. The coredump shows that some error happened in QtGui4.dll.
My environment is PyQt 4.10.4 (Qt 4.8.5), 32-bit version, on Windows 7 (64-bit). Any clue or suggestion to find out what happened? Thanks very much.
If I remove one or more controls then the crash does not happen. So weird... | `sip.setdestroyonexit(False)` solved my problem. This seems to be the ultimate answer for crash-on-exit problems in PyQt. The [PyQt documentation](http://pyqt.sourceforge.net/Docs/PyQt5/pyqt4_differences.html) says:
> When the Python interpreter exits PyQt4 (by default) calls the C++
> destructor of all wrapped instances that it owns. Th... |
Multiplication of 1d arrays in numpy | 23,566,515 | 4 | 2014-05-09T13:58:31Z | 23,566,751 | 11 | 2014-05-09T14:09:18Z | [
"python",
"numpy",
"vector",
"matrix",
"dot"
] | I have two 1d vectors (they can also be 2d matrices in some circumstances).
I found the dot function for the dot product, but if I want to multiply a.dot(b) with these shapes:
```
a = [1,0.2,...]
a.shape = (10,)
b = [2.3,4,...]
b.shape = (21,)
a.dot(b) and I get ValueError: matrices not aligned.
```
and i want to do
```
c... | Lets start with two arrays:
```
>>> a
array([0, 1, 2, 3, 4])
>>> b
array([5, 6, 7])
```
Transposing either array does not work because it is only 1D- there is nothing to transpose, instead you need to add a new axis:
```
>>> b.T
array([5, 6, 7])
>>> b[:,None]
array([[5],
[6],
[7]])
```
To get the dot ... |
XLRD/Python: Reading Excel file into dict with for-loops | 23,568,409 | 5 | 2014-05-09T15:25:46Z | 23,568,655 | 16 | 2014-05-09T15:37:45Z | [
"python",
"excel",
"dictionary",
"xlrd"
] | I'm looking to read in an Excel workbook with 15 fields and about 2000 rows, and convert each row to a dictionary in Python. I then want to append each dictionary to a list. I'd like each field in the top row of the workbook to be a key within each dictionary, and have the corresponding cell value be the value within t... | The idea is to first read the header into a list. Then, iterate over the sheet rows (starting from the first one after the header), create a new dictionary based on the header keys and the appropriate cell values, and append it to the list of dictionaries:
```
from xlrd import open_workbook
book = open_workbook('forum.xlsx')
she... |
Store datetimes in HDF5 with H5Py | 23,570,632 | 3 | 2014-05-09T17:26:05Z | 23,572,492 | 8 | 2014-05-09T19:28:33Z | [
"python",
"datetime",
"hdf5",
"h5py"
] | How can I store NumPy datetime objects in HDF5 using `h5py`?
```
In [1]: import h5py
In [2]: import numpy as np
In [3]: f = h5py.File('foo.hdfs', 'w')
In [4]: d = f.create_dataset('data', shape=(2, 2), dtype=np.datetime64)
TypeError: No conversion path for dtype: dtype('<M8')
``` | Currently HDF5 does not provide a time type (H5T\_TIME is now unsupported), so there is no obvious mapping for datetime64.
One of the design goals for h5py was to stick within the base HDF5 feature set. This allows people to write data to their files and know that it will round-trip and be retrievable by people using ... |
Pandas DataFrame, How do I remove all columns and rows that sum to 0 | 23,573,052 | 2 | 2014-05-09T20:03:39Z | 23,573,084 | 7 | 2014-05-09T20:05:47Z | [
"python",
"pandas"
] | I have a DataFrame with rows and columns that sum to 0.
```
A B C D
0 1 1 0 1
1 0 0 0 0
2 1 0 0 1
3 0 1 0 0
4 1 1 0 1
```
The end result should be
```
A B D
0 1 1 1
2 1 0 1
3 0 1 0
4 1 1 1
```
Notice the rows and colum... | `df.loc[row_indexer, column_indexer]` allows you to select rows and columns using boolean masks:
```
In [88]: df.loc[(df.sum(axis=1) != 0), (df.sum(axis=0) != 0)]
Out[88]:
A B D
0 1 1 1
2 1 0 1
3 0 1 0
4 1 1 1
[4 rows x 3 columns]
```
`df.sum(axis=1) != 0` is True if and only if the row does not su... |
scrapy from script output in json | 23,574,636 | 5 | 2014-05-09T22:02:11Z | 23,574,703 | 10 | 2014-05-09T22:09:33Z | [
"python",
"json",
"web-scraping",
"scrapy",
"scrapy-spider"
] | I am running `scrapy` in a python script
```
def setup_crawler(domain):
dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = ArgosSpider(domain=domain)
settings = get_project_settings()
crawler = Crawler(settings)
crawler.configure()
crawler.crawl(spider)
crawler.start()
... | You need to set [`FEED_FORMAT`](http://doc.scrapy.org/en/latest/topics/feed-exports.html#std%3asetting-FEED_FORMAT) and [`FEED_URI`](http://doc.scrapy.org/en/latest/topics/feed-exports.html#feed-uri) settings manually:
```
settings.overrides['FEED_FORMAT'] = 'json'
settings.overrides['FEED_URI'] = 'result.json'
```
I... |
What is the difference between a Ruby Hash and a Python dictionary? | 23,575,603 | 6 | 2014-05-09T23:46:28Z | 23,575,714 | 7 | 2014-05-09T23:58:43Z | [
"python",
"ruby",
"dictionary",
"hashmap"
] | In Python, there are dictionaries:
```
residents = {'Puffin' : 104, 'Sloth' : 105, 'Burmese Python' : 106}
```
In Ruby, there are Hashes:
```
residents = {'Puffin' => 104, 'Sloth' => 105, 'Burmese Python' => 106}
```
The only difference is the `:` versus `=>` syntax. (Note that if the example were using variables i... | Both [Ruby's Hash](http://www.ruby-doc.org/core-2.1.0/Hash.html) and [Python's dictionary](https://docs.python.org/3/library/stdtypes.html#typesmapping) represent a [Map Abstract Data Type (ADT)](http://en.wikipedia.org/wiki/Associative_array)
> .. an associative array, map, symbol table, or dictionary is an abstract ... |
How to install Pyside for Python 2.7? | 23,576,028 | 9 | 2014-05-10T00:49:10Z | 23,576,544 | 14 | 2014-05-10T02:38:21Z | [
"python",
"qt",
"qt4",
"qt5",
"pyside"
] | Hey I am pretty new to Pyside and am not sure how to download it or Qt? I have a 64-bit Windows computer and am wondering what I need to download once I have downloaded Qt because there is a checklist for like Qt 5.3, Qt 5.2.1, etc. and I am not sure which to download because under those there are multiple options unde... | Install pip:
1. Download pip installer from [here](https://raw.githubusercontent.com/pypa/pip/master/contrib/get-pip.py) and save it in a directory other than C.
2. If you haven't set your python path: follow [this](http://stackoverflow.com/questions/6318156/adding-python-path-on-windows-7).
3. Open Command Prompt and... |
Any Python Library Produces Publication Style Regression Tables | 23,576,328 | 9 | 2014-05-10T01:49:01Z | 23,576,491 | 14 | 2014-05-10T02:28:34Z | [
"python",
"latex",
"regression",
"stata",
"statsmodels"
] | I've been using Python for regression analysis. After getting the regression results, I need to summarize all the results into one single table and convert them to LaTex (for publication). Is there any package that does this in Python? Something like [estout](http://repec.org/bocode/e/estout/esttab.html#esttab012) in S... | Well, there is `summary_col` in `statsmodels`; it doesn't have all the bells and whistles of `estout`, but it does have the basic functionality you are looking for (including export to LaTeX):
```
import statsmodels.api as sm
from statsmodels.iolib.summary2 import summary_col
p['const'] = 1
reg0 = sm.OLS(p['p0'],p[['... |
How to avoid overlapping of labels & autopct in a matplotlib pie chart? | 23,577,505 | 4 | 2014-05-10T05:37:21Z | 23,578,860 | 9 | 2014-05-10T08:33:23Z | [
"python",
"numpy",
"matplotlib",
"plot",
"pie-chart"
] | My Python code is:
```
values = [234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7]
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul','Aug','Sep','Oct', 'Nov','Dec']
colors = ['yellowgreen', 'red', 'gold', 'lightskyblue',
'white','lightcoral','blue','pink', 'darkgreen',
'yellow','grey','v... | Alternatively you can put the legends beside the pie graph:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.char.array(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct', 'Nov','Dec'])
y = np.array([234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7])
colors = ['yellowgreen','red','gold','lightskyblue','w... |
Type error Unhashable type:set | 23,577,724 | 7 | 2014-05-10T06:08:53Z | 23,577,777 | 8 | 2014-05-10T06:17:02Z | [
"python",
"python-2.7",
"python-3.x",
"set"
] | The code below raises an error in the function `U=set(p.enum())`: a TypeError of unhashable type: 'set'. As you can see, the class method `enum` returns `L`, which is a list of sets, and `U` in the function should be a set. Can you please help me resolve the issue, or how can I convert a list of sets to a set of sets... | The individual items that you put into a set can't be mutable, because if they changed, the effective hash would change and thus the ability to check for inclusion would break down.
Instead, you need to put immutable objects into a set - e.g. `frozenset`s.
If you change the return statement from your `enum` method to... |
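The fix can be demonstrated with plain literals standing in for the truncated `enum` return value:

```python
# A list of sets cannot go into a set directly: sets are mutable,
# hence unhashable.
list_of_sets = [{1, 2}, {2, 3}, {1, 2}]

try:
    set_of_sets = set(list_of_sets)
except TypeError as exc:
    print(exc)  # unhashable type: 'set'

# Freezing each inner set makes it immutable and therefore hashable.
set_of_sets = set(frozenset(s) for s in list_of_sets)
print(set_of_sets)  # 2 elements; the duplicate {1, 2} collapses
```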
How to format date string via multiple formats in python | 23,581,128 | 6 | 2014-05-10T12:35:28Z | 23,581,184 | 25 | 2014-05-10T12:41:56Z | [
"python",
"date",
"datetime",
"format",
"strptime"
] | I have three date formats: 'YYYY-MM-DD', 'DD.MM.YYYY', 'DD/MM/YYYY'
Is it possible to validate and parse strings as '2014-05-18' or '18.5.2014' or '18/05/2019' ?
Thanks | Try each format and see if it works:
```
from datetime import datetime
def try_parsing_date(text):
    for fmt in ('%Y-%m-%d', '%d.%m.%Y', '%d/%m/%Y'):
try:
return datetime.strptime(text, fmt)
except ValueError:
pass
raise ValueError('no valid date format found')
``` |
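A quick self-contained check of the approach (the helper is repeated here with `%d/%m/%Y`, the four-digit-year form matching inputs like '18/05/2019'):

```python
from datetime import datetime

def try_parsing_date(text):
    # Try each known format in turn; the first one that parses wins.
    for fmt in ('%Y-%m-%d', '%d.%m.%Y', '%d/%m/%Y'):
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            pass
    raise ValueError('no valid date format found')

print(try_parsing_date('2014-05-18'))  # 2014-05-18 00:00:00
print(try_parsing_date('18.5.2014'))   # 2014-05-18 00:00:00
print(try_parsing_date('18/05/2019'))  # 2019-05-18 00:00:00
```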
Python: Selenium with PhantomJS empty page source | 23,581,291 | 15 | 2014-05-10T12:54:08Z | 24,385,267 | 8 | 2014-06-24T11:17:32Z | [
"python",
"selenium",
"phantomjs"
] | I'm having trouble with Selenium and PhantomJS on Windows 7 when I want to get the source of the page of a URL.
`browser.page_source` returns only `<html><head></head></html>`. I've put a sleep before `browser.page_source` but it didn't help.
This is my code:
```
from selenium import webdriver
browser = webdriver.Pha... | Using `service_args=['--ignore-ssl-errors=true']` did the trick!
```
browser = webdriver.PhantomJS('phantomjs-1.9.7-windows\phantomjs.exe', service_args=['--ignore-ssl-errors=true'])
``` |
Python: Selenium with PhantomJS empty page source | 23,581,291 | 15 | 2014-05-10T12:54:08Z | 34,257,642 | 17 | 2015-12-13T23:23:19Z | [
"python",
"selenium",
"phantomjs"
] | I'm having trouble with Selenium and PhantomJS on Windows 7 when I want to get the source of the page of a URL.
`browser.page_source` returns only `<html><head></head></html>`. I've put a sleep before `browser.page_source` but it didn't help.
This is my code:
```
from selenium import webdriver
browser = webdriver.Pha... | By default PhantomJS uses SSLv3, but after bugs in SSL many sites migrated to TLS. That's why you get a blank page.
use `service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any']`
```
browser = webdriver.PhantomJS('phantomjs-1.9.7-windows\phantomjs.exe', service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any']... |
Python pickle protocol choice? | 23,582,489 | 25 | 2014-05-10T14:50:54Z | 23,582,505 | 20 | 2014-05-10T14:52:38Z | [
"python",
"python-2.7",
"numpy",
"pickle"
] | I am using Python 2.7 and trying to pickle an object. I am wondering what the real difference is between the pickle protocols.
```
import numpy as np
import pickle
class data(object):
def __init__(self):
self.a = np.zeros((100, 37000, 3), dtype=np.float32)
d = data()
print "data size: ", d.a.nbytes/100000... | From the [`pickle` module data format documentation](https://docs.python.org/2/library/pickle.html#data-stream-format):
> There are currently 3 different protocols which can be used for pickling.
>
> * Protocol version 0 is the original ASCII protocol and is backwards compatible with earlier versions of Python.
> * Pr... |
How do I get interactive plots again in Spyder/IPython/matplotlib? | 23,585,126 | 12 | 2014-05-10T19:04:18Z | 31,488,324 | 8 | 2015-07-18T06:05:54Z | [
"python",
"matplotlib",
"ipython",
"spyder"
] | I upgraded from Python(x,y) 2.7.2.3 to [2.7.6.0](http://code.google.com/p/pythonxy/wiki/Downloads) in Windows 7 (and was happy to see that I can finally type `function_name?` and see the docstring in the Object Inspector again) but now the plotting doesn't work as it used to.
Previously (Spyder 2.1.9, IPython 0.10.2, ... | After selecting the "Automatic" option in the Preferences window, I was able to make interactive plots by closing and reopening Spyder. |
Return multiple columns from apply pandas | 23,586,510 | 5 | 2014-05-10T21:27:50Z | 35,208,597 | 8 | 2016-02-04T18:17:42Z | [
"python",
"pandas"
] | I have a pandas DataFrame, df. It contains a column 'size' which represents size in bytes. I've calculated KB, MB, and GB using the following code:
```
df_test = pd.DataFrame(columns=('dir', 'size'))
rows_list = []
dict1 = {'dir': '/Users/uname1', 'size': 994933}
dict2 = {'dir': '/Users/uname2', 'size': 109338711}
ro... | This is an old question, but for completeness, you can return a Series from the applied function that contains the new data, preventing the need to iterate three times. Passing `axis=1` to the apply function applies the function `sizes` to each row of the dataframe, returning a series to add to a new dataframe. This se... |
PIL open() method not working with BytesIO | 23,587,426 | 8 | 2014-05-10T23:34:19Z | 23,587,474 | 14 | 2014-05-10T23:42:17Z | [
"python",
"python-imaging-library",
"pillow"
] | For some reason, when I try to make an image from a BytesIO stream, it can't identify the image. Here is my code:
```
from PIL import Image, ImageGrab
from io import BytesIO
i = ImageGrab.grab()
i.resize((1280, 720))
output = BytesIO()
i.save(output, format = "JPEG")
output.flush()
print(isinstance(Image.open(output),... | Think of BytesIO as a file object: after you finish writing the image, the file's cursor is at the end of the file, so when `Image.open()` tries to call `output.read()`, it immediately gets an EOF.
You need to add an `output.seek(0)` call before passing `output` to `Image.open()`. |
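The cursor behaviour is easy to see with a bare `BytesIO`, no PIL required:

```python
from io import BytesIO

buf = BytesIO()
buf.write(b"fake image bytes")

# The cursor now sits at the end of the buffer, so a read returns
# nothing -- this is the EOF that Image.open() runs into.
print(buf.read())   # b''

buf.seek(0)         # rewind to the start
print(buf.read())   # b'fake image bytes'
```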
Efficient algorithm for determining values not as frequent in a list | 23,587,568 | 4 | 2014-05-10T23:57:59Z | 23,587,677 | 7 | 2014-05-11T00:21:29Z | [
"python",
"algorithm",
"list",
"frequency"
] | I am building a quiz application which pulls questions randomly from a pool of questions. However, there is a requirement that the pool of questions be limited to questions that the user has not already seen. If, however, the user has seen all the questions, then the algorithm should "reset" and only show questions the... | Rewritten with the database stuff in fill-in pseudocode.
If I understand the problem correctly, I'd treat the questions (or their IDs as proxies) like a physical deck of cards: for each user, shuffle the deck and deal them a question at a time; if they want more than `len(deck)` questions, just start over: shuffle the... |
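The deck-of-cards idea can be sketched without the database layer (`QuestionDeck` is an illustrative name):

```python
import random

class QuestionDeck:
    """Deal question IDs without repeats; reshuffle once exhausted."""
    def __init__(self, question_ids):
        self._all = list(question_ids)
        self._deck = []

    def draw(self):
        if not self._deck:                 # deck exhausted: "reset"
            self._deck = self._all[:]
            random.shuffle(self._deck)
        return self._deck.pop()

deck = QuestionDeck([1, 2, 3, 4, 5])
first_round = [deck.draw() for _ in range(5)]   # each ID exactly once
second_round = [deck.draw() for _ in range(5)]  # reshuffled, all IDs again
```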
Python pandas / matplotlib annotating labels above bar chart columns | 23,591,254 | 5 | 2014-05-11T10:03:32Z | 23,591,415 | 7 | 2014-05-11T10:20:04Z | [
"python",
"matplotlib",
"pandas"
] | How do I add the label for the value to display above the bars in the bargraph here:
```
import pandas as pd
import matplotlib.pyplot as plt
df=pd.DataFrame({'Users': [ 'Bob', 'Jim', 'Ted', 'Jesus', 'James'],
'Score': [10,2,5,6,7],})
df = df.set_index('Users')
df.plot(kind='bar', title='Scores')
p... | Capture the axis where the plot is drawn into, then manipulate it as a usual `matplotlib` object. Putting the value above the bar will be something like this:
```
ax = df.plot(kind='bar', title='Scores')
ax.set_ylim(0, 12)
for i, label in enumerate(list(df.index)):
score = df.ix[label]['Score']
ax.annotate(st... |
Python pandas / matplotlib annotating labels above bar chart columns | 23,591,254 | 5 | 2014-05-11T10:03:32Z | 34,598,688 | 7 | 2016-01-04T19:49:00Z | [
"python",
"matplotlib",
"pandas"
] | How do I add the label for the value to display above the bars in the bargraph here:
```
import pandas as pd
import matplotlib.pyplot as plt
df=pd.DataFrame({'Users': [ 'Bob', 'Jim', 'Ted', 'Jesus', 'James'],
'Score': [10,2,5,6,7],})
df = df.set_index('Users')
df.plot(kind='bar', title='Scores')
p... | A solution without accessing the DataFrame is to use the patches attribute:
```
ax = df.plot.bar(title="Scores")
for p in ax.patches:
ax.annotate(str(p.get_height()), xy=(p.get_x(), p.get_height()))
```
Note you have to play around with the xy kwarg (2nd arg) to get the label position you desire.
### Vertical Ba... |
django 1.5 select_for_update considered fragile design | 23,591,365 | 2 | 2014-05-11T10:15:14Z | 23,745,229 | 8 | 2014-05-19T18:58:53Z | [
"python",
"django",
"django-database"
] | The [Django documentation](https://docs.djangoproject.com/en/dev/topics/db/transactions/#select-for-update) states
> If you were relying on “automatic transactions” to provide locking
> between select\_for\_update() and a subsequent write operation — an
> extremely fragile design, but nonetheless possible — yo... | `select_for_update` isn't fragile.
I wrote that "if you were relying on 'automatic transactions'" then you need to review your code when you upgrade from 1.5 to 1.6.
If you weren't relying on "automatic transaction", and even more if the concept doesn't ring a bell, then you don't need to do anything.
As pointed o... |
Python pip says: pkg_resources.DistributionNotFound: pip==1.4.1 | 23,591,705 | 10 | 2014-05-11T10:53:24Z | 24,065,270 | 16 | 2014-06-05T16:18:16Z | [
"python",
"python-2.7",
"ubuntu",
"pip"
] | I just did a clean install of Ubuntu 14.04 and also installed PyCharm. PyCharm said setuptools and pip weren't installed and offered to install them. I simply clicked "Yes" and it seemed to install them. A bit after that I wanted to install Flask (which is awesome btw) using pip, so I did `sudo pip install flask`. To my s... | Had the same problem under 12.04.
Did `sudo easy_install pip==1.4.1` and it worked. |
Vectorising an equation using numpy | 23,592,323 | 4 | 2014-05-11T11:59:18Z | 23,592,642 | 8 | 2014-05-11T12:29:46Z | [
"python",
"python-2.7",
"python-3.x",
"numpy",
"scipy"
] | 

I am trying to implement the above formula as a vectorised form.
`K=3` here, `X` is a `150x4` numpy array. `mu` is a `3x4` numpy array. `Gamma` is a `150x3` numpy array. `Sigma` is a `kx... | A sum of products? Sounds like a job for [np.einsum](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html):
```
import numpy as np
N = 150
K = 3
M = 4
x = np.random.random((N,M))
mu = np.random.random((K,M))
gamma = np.random.random((N,K))
xbar = x-mu[:,None,:] # shape (3, 150, 4)
sigma = np.einsum('... |
Plotting with a transparent marker but non-transparent edge | 23,596,575 | 12 | 2014-05-11T18:49:31Z | 23,596,637 | 13 | 2014-05-11T18:56:45Z | [
"python",
"numpy",
"matplotlib",
"plot",
"transparency"
] | I'm trying to make a plot in matplotlib with transparent markers which have a fixed color edge. However, I can't seem to achieve a marker with transparent fill.
I have a minimum working example here:
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(10)
y1 = 2*x + 1
y2 = 3*x - 5
plt.plot(x,y1, '... | This is tricky in Matplotlib... you have to use a string `"None"` instead of the value `None`, then you can just do:
```
plt.plot(x,y2, 'o', ms=14, markerfacecolor="None",
markeredgecolor='red', markeredgewidth=5)
``` |
What is the difference between id(obj) and ctypes.addressof(obj) in CPython | 23,600,052 | 9 | 2014-05-12T02:09:01Z | 23,600,061 | 16 | 2014-05-12T02:10:19Z | [
"python",
"memory-management",
"ctypes"
] | Say I define the following variable using [ctypes module](https://docs.python.org/2/library/ctypes.html)
```
i = c_int(4)
```
and afterwards I try to find out the memory address of i using:
```
id(i)
```
or
```
ctypes.addressof(i)
```
which, at the moment, yield different values. Why is that? | What you are suggesting should be the case is an implementation detail of CPython.
The [`id()`](https://docs.python.org/3.3/library/functions.html#id) function:
> Return the "identity" of an object. This is an integer which is guaranteed to be unique and constant for this object during its lifetime.
>
> **CPython... |
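A quick way to see the difference (a sketch; the concrete addresses vary per run, but the relationship holds in CPython):

```python
import ctypes

i = ctypes.c_int(4)

# In CPython, id(i) is the address of the Python wrapper object itself;
# ctypes.addressof(i) is the address of the raw C buffer held inside
# that wrapper -- two different locations in memory.
obj_addr = id(i)
buf_addr = ctypes.addressof(i)
```

Reading the buffer back through its address recovers the stored value, which `id()` alone cannot do.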
Concatenate Pandas columns under new multi-index level | 23,600,582 | 9 | 2014-05-12T03:23:03Z | 23,600,844 | 17 | 2014-05-12T04:00:20Z | [
"python",
"pandas",
"multi-index"
] | Given a dictionary of data frames like:
```
dict = {'ABC': df1, 'XYZ' : df2} # of any length...
```
where each data frame has the same columns and similar index, for example:
```
data Open High Low Close Volume
Date
2002-01-17 0.18077 0.1... | You can do it with `concat` (the `keys` argument will create the hierarchical columns index):
```
dict = {'ABC' : df1, 'XYZ' : df2}
print pd.concat(dict.values(),axis=1,keys=dict.keys())
XYZ ABC \
Open High Low Close Volu... |
How to use flake8 for Python 3 ? | 23,600,614 | 14 | 2014-05-12T03:27:23Z | 23,600,877 | 9 | 2014-05-12T04:04:01Z | [
"python",
"python-3.x",
"flake8"
] | In this code snippet,
```
def add(x:int, y:int) -> int:
return x + y
```
there are function annotations that are only supported after python 3.0
When I execute flake8 for this python code:
```
$ flake8 7.3.py -vv
checking 7.3.py
def add(x: int, y: int) -> int:
return x + y
7.3.py:1:11: E901 SyntaxError: invalid... | See: <https://bugs.launchpad.net/pyflakes/+bug/989203>
**NB:** Whilst this bug report indicates some level of resolution, testing the latest version of pyflakes `0.8.1` this lack of Python 3 Annotations still exists.
I guess you'd have to file a separate new feature request to pyflakes.
[pyflakes Bugs](https://bugs.... |
Changing the grammar of Python | 23,601,571 | 2 | 2014-05-12T05:22:35Z | 23,601,639 | 9 | 2014-05-12T05:29:37Z | [
"python"
] | Under the Grammar folder in Cpython there is a file Grammar which specifies the main grammar of the Python programming language. Does modifying this alter the main grammar of the programming language? | The Grammar file controls the grammar of the language, but just modifying that is only one step of the process of changing the grammar. Many other files work closely with the grammar and need to be changed as well, such as `Parser/Python.asdl`, `Python/ast.c`, and `Python/compile.c`. For the full process of changing th... |
How to restart tweepy script in case of error? | 23,601,634 | 3 | 2014-05-12T05:29:20Z | 23,603,395 | 8 | 2014-05-12T07:34:30Z | [
"python",
"restart",
"tweepy"
] | I have a python script that continuously stores tweets related to tracked keywords to a file. However, the script tends to crash repeatedly due to an error appended below. How do I edit the script so that it automatically restarts? I've seen numerous solutions including this ([Restarting a program after exception](http... | Figured out how to incorporate the while/try loop by writing a new function for the stream:
```
def start_stream():
while True:
try:
sapi = tweepy.streaming.Stream(auth, CustomStreamListener(api))
sapi.filter(track=["Samsung", "s4", "s5", "note" "3", "HTC", "Sony", "Xperia", "Blackb... |
Python: how to batch rename mixed case to lower case with underscores | 23,607,374 | 6 | 2014-05-12T11:08:25Z | 23,655,669 | 7 | 2014-05-14T13:07:29Z | [
"python",
"variables",
"ide",
"refactoring",
"naming-conventions"
] | I've written a fair bit of my first significant Python script. I just finished reading PEP 8, and I learned that lower\_case\_with\_underscores is preferred for instance variable names. I've been using mixedCase for variable names throughout, and I'd like my code to be make more Pythonic by changing those to lower\_cas... | I was able to accomplish what I wanted using GNU sed:
```
sed -i -e :loop -re 's/(^|[^A-Za-z_])([a-z0-9_]+)([A-Z])([A-Za-z0-9_]*)'\
'([^A-Za-z0-9_]|$)/\1\2_\l\3\4\5/' -e 't loop' myFile.py
```
It finds every instance of mixedCase -- but not CapitalWords, so class names are left intact -- and replaces it with lower\_c... |
Python: execute cat subprocess in parallel | 23,611,396 | 5 | 2014-05-12T14:13:14Z | 23,616,229 | 18 | 2014-05-12T18:29:04Z | [
"python",
"shell",
"subprocess",
"python-multithreading"
] | I am running several `cat | zgrep` commands on a remote server and gathering their output individually for further processing:
```
class MainProcessor(mp.Process):
def __init__(self, peaks_array):
super(MainProcessor, self).__init__()
self.peaks_array = peaks_array
def run(self):
for p... | You need neither `multiprocessing` nor `threading` to run subprocesses in parallel, e.g.:
```
#!/usr/bin/env python
from subprocess import Popen
# run commands in parallel
processes = [Popen("echo {i:d}; sleep 2; echo {i:d}".format(i=i), shell=True)
for i in range(5)]
# collect statuses
exitcodes = ... |
Numpy Indexing of 2 Arrays | 23,612,699 | 10 | 2014-05-12T15:11:19Z | 23,612,993 | 7 | 2014-05-12T15:24:37Z | [
"python",
"arrays",
"numpy"
] | Consider two numpy arrays
```
a = np.array(['john', 'bill', 'greg', 'bill', 'bill', 'greg', 'bill'])
b = np.array(['john', 'bill', 'greg'])
```
How would I be able to produce a third array
```
c = np.array([0,1,2,1,1,2,1])
```
The same length as `a` representing the index of each entry of `a` in the array `b`?
I c... | Here is one option:
```
import numpy as np
a = np.array(['john', 'bill', 'greg', 'bill', 'bill', 'greg', 'bill'])
b = np.array(['john', 'bill', 'greg'])
my_dict = dict(zip(b, range(len(b))))
result = np.vectorize(my_dict.get)(a)
```
Result:
```
>>> result
array([0, 1, 2, 1, 1, 2, 1])
``` |
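`np.vectorize` makes one Python-level call per element; assuming every entry of `a` actually occurs in `b`, a pure-NumPy alternative (a sketch, not from the answer) is to sort `b` once and binary-search:

```python
import numpy as np

a = np.array(['john', 'bill', 'greg', 'bill', 'bill', 'greg', 'bill'])
b = np.array(['john', 'bill', 'greg'])

# Sort b once, locate each element of a in the sorted copy, then map the
# sorted positions back to the original positions in b.
order = np.argsort(b)
result = order[np.searchsorted(b[order], a)]
# array([0, 1, 2, 1, 1, 2, 1])
```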
Write dictionary of lists to a CSV file | 23,613,426 | 4 | 2014-05-12T15:47:17Z | 23,613,603 | 12 | 2014-05-12T15:56:12Z | [
"python",
"csv",
"dictionary"
] | I'm struggling with writing a dictionary of lists to a .csv file.
This is how my dictionary looks like:
```
dict[key1]=[1,2,3]
dict[key2]=[4,5,6]
dict[key3]=[7,8,9]
```
I want the .csv file to look like:
```
key1 key2 key3
1 4 7
2 5 8
3 6 9
```
At first I write the header:
```
outputfi... | If you don't care about the order of your columns (since dictionaries are unordered), you can simply use `zip()`:
```
d = {"key1": [1,2,3], "key2": [4,5,6], "key3": [7,8,9]}
with open("test.csv", "wb") as outfile:
writer = csv.writer(outfile)
writer.writerow(d.keys())
writer.writerows(zip(*d.values()))
```
R... |
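The `"wb"` mode in that answer is Python 2; a hedged Python 3 adaptation, written to an in-memory buffer so the output is easy to inspect (on Python 3.7+ dicts keep insertion order, so the columns come out in the order shown):

```python
import csv
import io

d = {"key1": [1, 2, 3], "key2": [4, 5, 6], "key3": [7, 8, 9]}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(d.keys())           # header row: key1,key2,key3
writer.writerows(zip(*d.values()))  # transpose the column lists into rows
output = buf.getvalue()
```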
Python Module Error on Linux | 23,617,898 | 4 | 2014-05-12T20:06:55Z | 23,618,754 | 10 | 2014-05-12T20:58:12Z | [
"python",
"linux",
"pycharm",
"linuxmint",
"spyder"
] | I am using python 2.7 on Linux Mint 16. I am facing an error, if I run my IDE (tried it on Spyder and Pycharm) from a program launcher (eg. from the prompt at Alt F2 or an icon shortcut on my desktop) the modules do not load and I get the following error
```
File "/usr/local/lib/python2.7/dist-packages/gurobipy/__init... | You don't set the path to the gurobipy.
Download then untar to `/opt`.
```
cd to `/opt/gurobi562/linux64` and run `python setup.py install`
```
Add following to `~/.bashrc`.
```
export GUROBI_HOME="/opt/gurobi562/linux64"
export PATH="${PATH}:${GUROBI_HOME}/bin"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:$... |
Delete cell ipython 2.0 | 23,619,989 | 18 | 2014-05-12T22:30:12Z | 23,620,346 | 35 | 2014-05-12T23:05:18Z | [
"python",
"keyboard-shortcuts",
"ipython"
] | There is probably a very easy solution to this but I'm not sure what it is. Delete cell command in ipython 1.2 used to be:
```
ctrl-m d
```
according to the help menu in version 2.0 it says,
```
Shift-d: delete cell(press twice)
```
I've tried holding shift then pressing d twice and various combinations with shift... | In the new IPython cells could have two states: when it has a green selection rectangle around it you can edit what's inside; when it has a grey rectangle around it you edit the cell itself (copy/paste/delete). Enter/Return makes it go green, Esc makes it go grey. When it is gray, 'dd' will delete it. |
Is cube root integer? | 23,621,833 | 4 | 2014-05-13T02:17:55Z | 23,622,115 | 8 | 2014-05-13T02:58:14Z | [
"python",
"python-3.x"
] | This seems to be simple but I cannot find a way to do it. I need to show whether the cube root of an integer is integer or not. I used `is_integer()` float method in Python 3.4 but that wasn't successful. As
```
x = (3**3)**(1/3.0)
is_integer(x)
True
```
but
```
x = (4**3)**(1/3.0)
is_integer(x)
False
```
... | For small numbers (<~1013 or so), you can use the following approach:
```
def is_perfect_cube(n):
c = int(n**(1/3.))
return (c**3 == n) or ((c+1)**3 == n)
```
This truncates the floating-point cuberoot, then tests the two nearest integers.
For larger numbers, one way to do it is to do a binary search for the... |
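The binary search the answer alludes to for larger numbers might look like this (a sketch using only integer arithmetic, so it stays exact at any size):

```python
def is_perfect_cube(n):
    # Exact integer check: binary-search for a c with c**3 == n.
    if n < 0:
        return is_perfect_cube(-n)
    lo, hi = 0, max(n, 1)
    while lo <= hi:
        mid = (lo + hi) // 2
        cube = mid ** 3
        if cube == n:
            return True
        if cube < n:
            lo = mid + 1
        else:
            hi = mid - 1
    return False
```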
Calculating cumulative minimum with numpy arrays | 23,622,157 | 6 | 2014-05-13T03:04:28Z | 23,622,528 | 11 | 2014-05-13T03:56:06Z | [
"python",
"arrays",
"numpy"
] | I'd like to calculate the "cumulative minimum" array--basically, the minimum value of an array up to each index such as:
```
import numpy as np
nums = np.array([5.,3.,4.,2.,1.,1.,2.,0.])
cumulative_min = np.zeros(nums.size, dtype=float)
for i,num in enumerate(nums):
cumulative_min[i] = np.min(nums[0:i+1])
```
Thi... | For any NumPy universal function, its `accumulate` method is the cumulative version of that function. Thus, `numpy.minimum.accumulate` is what you're looking for:
```
>>> numpy.minimum.accumulate([5,4,6,10,3])
array([5, 4, 4, 4, 3])
``` |
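If NumPy is not available, the standard library can do the same thing: `itertools.accumulate` accepts `min` as its binary function.

```python
import itertools

nums = [5., 3., 4., 2., 1., 1., 2., 0.]
# Running minimum: each element is min(previous result, current value).
cumulative_min = list(itertools.accumulate(nums, min))
# [5.0, 3.0, 3.0, 2.0, 1.0, 1.0, 1.0, 0.0]
```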
Is it acceptable practice to unit-test a program in a different language? | 23,622,923 | 23 | 2014-05-13T04:40:07Z | 23,623,088 | 25 | 2014-05-13T04:55:46Z | [
"python",
"c++",
"unit-testing"
] | I have a static library I created from C++, and would like to test this using a Driver code.
I noticed one of my professors like to do his tests using python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I reali... | I'd say that it's best to test the API that your users will be exposed to. Other tests are good to have as well, but that's the most important aspect.
If your users are going to write C/C++ code linking to your library, then it would be good to have tests making use of your library the same way.
If you are going to s... |
Is it acceptable practice to unit-test a program in a different language? | 23,622,923 | 23 | 2014-05-13T04:40:07Z | 23,623,093 | 8 | 2014-05-13T04:56:20Z | [
"python",
"c++",
"unit-testing"
] | I have a static library I created from C++, and would like to test this using a Driver code.
I noticed one of my professors like to do his tests using python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I reali... | A few things to keep in mind:
1. If you are writing tests as you code, then, by all means, use whatever language works best to give you rapid feedback. This enables fast test-code cycles (and is fun as well). **BUT**.
2. Always have well-written tests in the language of the ***consumer***. How is your client/consumer ... |
How to convert a float into hex | 23,624,212 | 3 | 2014-05-13T06:32:32Z | 23,624,284 | 16 | 2014-05-13T06:36:54Z | [
"python",
"python-2.7",
"floating-point-conversion"
] | In Python I need to convert a bunch of floats into hexadecimal. It needs to be zero padded (for instance, 0x00000010 instead of 0x10). Just like <http://gregstoll.dyndns.org/~gregstoll/floattohex/> does. (sadly i can't use external libs on my platform so i can't use the one provided on that website)
What is the most e... | This is a bit tricky in Python, because you aren't looking to convert the floating-point *value* to a (hex) integer. Instead, you're trying to *interpret* the [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point) binary representation of the floating-point value as hex.
We'll use the `pack` and `unpack` functions f... |
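The answer is cut off before its code; a hedged sketch of the `pack`/`unpack` reinterpretation it describes, for single-precision floats:

```python
import struct

def float_to_hex(f):
    # Serialize f as 4 big-endian IEEE 754 bytes, then reinterpret those
    # same bytes as an unsigned 32-bit integer, zero-padded to 8 hex digits.
    (bits,) = struct.unpack('>I', struct.pack('>f', f))
    return '0x{:08x}'.format(bits)

float_to_hex(1.0)  # '0x3f800000'
```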
Generate random numbers replicating arbitrary distribution | 23,626,009 | 13 | 2014-05-13T08:16:26Z | 23,626,238 | 7 | 2014-05-13T08:29:30Z | [
"python",
"numpy",
"random",
"normal-distribution"
] | I have data wherein I have a variable `z` that contains around 4000 values **(from 0.0 to 1.0)** for which the histogram looks like this.

Now I need to generate a random variable, call it `random_z` which should replicate the above distribution.
Wha... | If you want to bootstrap you could use `random.choice()` on your observed series.
Here I'll assume you'd like to smooth a bit more than that and you aren't concerned with generating new extreme values.
Use `pandas.Series.quantile()` and a uniform [0,1] random number generator, as follows.
Training
* Put your random... |
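The plain-bootstrap half of this suggestion is one line with `random.choice`; a minimal sketch (made-up data standing in for the asker's ~4000 `z` values):

```python
import random

observed_z = [0.1, 0.25, 0.25, 0.4, 0.4, 0.4, 0.7, 0.9]
# Resample the observed values with replacement; the empirical
# distribution of random_z then mirrors that of observed_z.
random_z = [random.choice(observed_z) for _ in range(1000)]
```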
How do I pass variables to other methods using Python's click (Command Line Interface Creation Kit) package | 23,626,972 | 3 | 2014-05-13T09:05:44Z | 23,656,775 | 7 | 2014-05-14T13:57:02Z | [
"python",
"python-click"
] | I know it's new, but I like the look of [click](http://click.pocoo.org/) a lot and would love to use it, but I can't work out how to pass variables from the main method to other methods. Am I using it incorrectly, or is this functionality just not available yet? Seems pretty fundamental, so I'm sure it will be in there... | Thanks to @nathj07 for pointing my in the right direction. Here's the answer:
```
import click
class User(object):
def __init__(self, username=None, password=None):
self.username = username
self.password = password
@click.group()
@click.option('--username', default='Naomi McName', help='Usernam... |
cv2.imread: checking if image is being read | 23,628,325 | 4 | 2014-05-13T10:07:15Z | 23,628,409 | 12 | 2014-05-13T10:12:11Z | [
"python",
"opencv",
"numpy"
] | I'm writing an OpenCV program in python, and at some point I have something like
```
import cv2
import numpy as np
...
img = cv2.imread("myImage.jpg")
# do stuff with image here
```
The problem is that I have to detect if the image file is being correctly read before continuing. `cv2.imread` returns `False` if not ... | You can simply use `if not img is None`, or, equivalently, `if img is not None`. You don't need to check the type explicitly.
Note that `None` and `False` are **not** the same value. However, `bool(None)==False`, which is why `if None` fails. |
numerically stable way to multiply log probability matrices in numpy | 23,630,277 | 27 | 2014-05-13T11:40:22Z | 24,063,715 | 13 | 2014-06-05T15:03:56Z | [
"python",
"numpy",
"matrix",
"matrix-multiplication",
"logarithm"
] | I need to take the matrix product of two NumPy matrices (or other 2d arrays) containing log probabilities. The naive way `np.log(np.dot(np.exp(a), np.exp(b)))` is not preferred for obvious reasons.
Using
```
from scipy.misc import logsumexp
res = np.zeros((a.shape[0], b.shape[1]))
for n in range(b.shape[1]):
# br... | `logsumexp` works by evaluating the right-hand side of the equation
```
log(∑ exp[a]) = max(a) + log(∑ exp[a - max(a)])
```
I.e., it pulls out the max before starting to sum, to prevent overflow in `exp`. The same can be applied before doing vector dot products:
```
log(exp[a] ⋅ exp[b])
= log(∑ exp[a] × ex... |
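A global-max version of that identity (coarser than the per-row/per-column shifts a full implementation would use, but enough to see the mechanism) can be sketched as:

```python
import numpy as np

def logdot(a, b):
    # Pull the maxima out before exponentiating, exactly as in the
    # logsumexp identity, so exp never overflows.
    max_a, max_b = np.max(a), np.max(b)
    c = np.dot(np.exp(a - max_a), np.exp(b - max_b))
    return np.log(c) + max_a + max_b
```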
How to create the negative of a sentence in nltk | 23,634,759 | 3 | 2014-05-13T14:56:47Z | 23,634,933 | 7 | 2014-05-13T15:04:07Z | [
"python",
"nlp",
"nltk"
] | I am new to NLTK. I would like to create the negative of a sentence (which will usually be in the present tense). For example, is there a function to allow me to convert:
'I run' to 'I do not run'
or
'She runs' to 'She does not run'.
I suppose I could use POS to detect the verb and its preceding pronoun but I just w... | No there is not. What is more important it is quite a complex problem, which can be a topic of research, and not something that "simple built in function" could solve. Such operation requires semantic analysis of the sentence, think about for example "I think that I could run faster" which of the 3 verbs should be nega... |
pyCharm and django admin | 23,637,197 | 4 | 2014-05-13T16:49:30Z | 23,637,212 | 9 | 2014-05-13T16:50:36Z | [
"python",
"django",
"pycharm"
] | I'm trying to run a Django admin page in PyCharm.
I've created a test project with an empty test app and a simple model.
```
class User(models.Model):
name = models.CharField(max_length=15)
email = models.EmailField()
```
As I open `127.0.0.1:8000/admin/` I can see the admin page. However, not only I don't k... | In a terminal, `cd` into your project root and type `./manage.py syncdb`. This will create the initial tables.
When it gets around to creating the `auth_user` table, it will ask you to create an admin user, and set a password on it. These are the credentials you need to log in to the admin interface.
Edit: For those ... |
django change default runserver port | 23,639,085 | 16 | 2014-05-13T18:39:10Z | 23,639,676 | 29 | 2014-05-13T19:14:35Z | [
"python",
"django",
"django-manage.py",
"manage.py"
] | I would like to make the default port that `manage.py runserver` listens on specifiable in an extraneous `config.ini`. Is there an easier fix than parsing `sys.argv` inside `manage.py` and inserting the configured port?
The goal is to run `./manage.py runserver` without having to specify address and port every time bu... | create a bash script with the following:
```
#!/bin/bash
exec ./manage.py runserver 0.0.0.0:<your_port>
```
save it as runserver in the same dir as manage.py
```
chmod +x runserver
```
and run it as
```
./runserver
``` |
Disable a method in a ViewSet, django-rest-framework | 23,639,113 | 28 | 2014-05-13T18:40:35Z | 23,649,843 | 69 | 2014-05-14T08:50:39Z | [
"python",
"django",
"django-views",
"django-rest-framework"
] | `ViewSets` have automatic methods to list, retrieve, create, update, delete, ...
I would like to disable some of those, and the solution I came up with is probably not a good one, since `OPTIONS` still states those as allowed.
Any idea on how to do this the right way?
```
class SampleViewSet(viewsets.ModelViewSet):
... | The definition of `ModelViewSet` is:
```
class ModelViewSet(mixins.CreateModelMixin,
mixins.RetrieveModelMixin,
mixins.UpdateModelMixin,
mixins.DestroyModelMixin,
mixins.ListModelMixin,
GenericViewSet)
```
So rather than ... |
Disable a method in a ViewSet, django-rest-framework | 23,639,113 | 28 | 2014-05-13T18:40:35Z | 31,450,643 | 19 | 2015-07-16T09:40:39Z | [
"python",
"django",
"django-views",
"django-rest-framework"
] | `ViewSets` have automatic methods to list, retrieve, create, update, delete, ...
I would like to disable some of those, and the solution I came up with is probably not a good one, since `OPTIONS` still states those as allowed.
Any idea on how to do this the right way?
```
class SampleViewSet(viewsets.ModelViewSet):
... | You could keep using `viewsets.ModelViewSet` and define `http_method_names` on your ViewSet.
Example
```
class SampleViewSet(viewsets.ModelViewSet):
queryset = api_models.Sample.objects.all()
serializer_class = api_serializers.SampleSerializer
http_method_names = ['get', 'post', 'head']
```
Once you add ... |
Extremely long wait time when loading REST resource from angularjs | 23,639,355 | 10 | 2014-05-13T18:54:09Z | 25,835,028 | 7 | 2014-09-14T15:44:27Z | [
"python",
"angularjs",
"flask",
"google-chrome-devtools",
"flask-restful"
] | I'm building a front-end in angular that is accessing a flask/python RESTful API. **I'm using AngularJS v1.2.16.**
For some reason, it takes an insane amount of time before the REST resource is loaded, with most of the time just waiting. It's my understanding that 'waiting' is measuring the time to first byte - all my... | I've run into the same issue (Angular frontend running in Chrome and Flask backend). After trying both Angular 1.2.x and 1.3.x, and a billion other permutations, the only "solution" I've found is to run the Flask backend with the Tornado web server (<http://www.tornadoweb.org/en/stable/>). Other WSGI containers may wor... |
Extremely long wait time when loading REST resource from angularjs | 23,639,355 | 10 | 2014-05-13T18:54:09Z | 30,670,626 | 11 | 2015-06-05T15:42:50Z | [
"python",
"angularjs",
"flask",
"google-chrome-devtools",
"flask-restful"
] | I'm building a front-end in angular that is accessing a flask/python RESTful API. **I'm using AngularJS v1.2.16.**
For some reason, it takes an insane amount of time before the REST resource is loaded, with most of the time just waiting. It's my understanding that 'waiting' is measuring the time to first byte - all my... | We had the same problem and after some research the problem is related with the way Chrome uses connections and the default configuration of flask as mono threaded.
Adding `threaded=True` when the Flask app is started solved the problem:
app.run(threaded=True) |
Python: Log in a website using urllib | 23,639,538 | 8 | 2014-05-13T19:05:52Z | 23,639,671 | 7 | 2014-05-13T19:14:14Z | [
"python",
"urllib2",
"urllib"
] | I want to log in to this website: <https://www.fitbit.com/login>
This is my code I use:
```
import urllib2
import urllib
import cookielib
login_url = 'https://www.fitbit.com/login'
acc_pwd = {'login':'Log In','email':'username','password':'pwd'}
cj = cookielib.CookieJar() ## add cookies
opener = urllib2.build_opener(... | You're forgetting the hidden fields of the form:
```
<form id="loginForm" class="validate-enabled failure form" method="post" action="https://www.fitbit.com/login" name="login">
<input type="hidden" value="Log In" name="login">
<input type="hidden" value="" name="includeWorkflow">
<input id="loginRedirect"... |
Python if any() does not work | 23,640,710 | 2 | 2014-05-13T20:11:02Z | 23,640,815 | 10 | 2014-05-13T20:18:12Z | [
"python",
"if-statement",
"numpy",
"ipython"
] | I want to check if any string elements in a list `phrases` contains certain keywords in a set `phd_words`. I want to use `any` but it doesn't work.
```
In[19]:
import pandas as pd
import psycopg2 as pg
def test():
phd_words = set(['doctor', 'phd'])
phrases = ['master of science','mechanical engine... | That may happen if you use IPython's `%pylab` magic:
```
In [1]: %pylab
Using matplotlib backend: Qt4Agg
Populating the interactive namespace from numpy and matplotlib
In [2]: if any('b' in w for w in ['a', 'c']):
...: print('What?')
...:
What?
```
Here's why:
```
In [3]: any('b' in w for w in ['a', 'c'])... |
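The shadowing can be reproduced directly: `numpy.any` wraps a generator in a 0-d object array, and the generator object itself is truthy, so the elements are never inspected.

```python
import numpy as np

builtin_result = any('b' in w for w in ['a', 'c'])          # inspects the elements
numpy_result = bool(np.any('b' in w for w in ['a', 'c']))   # truthiness of the generator object
```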
regular expression - string replacement "as is" | 23,641,623 | 5 | 2014-05-13T21:09:16Z | 23,641,723 | 7 | 2014-05-13T21:16:05Z | [
"python",
"regex"
] | I am trying to do this:
word `test` should be found in some text and be replaced with `<strong>test</strong>`. But the thing is, `Test` should also be caught and be replaced with `<strong>Test</strong>`.
I tried this:
```
word = "someword"
text = "Someword and many words with someword"
pattern = re.compile(word, re... | You need to use a [capturing group](https://docs.python.org/2/howto/regex.html#grouping):
```
>>> import re
>>> word = "someword"
>>> text = "Someword and many words with someword"
>>> pattern = re.compile('(%s)' % word, re.IGNORECASE)
>>> pattern.sub(r'<strong>\1</strong>',text)
'<strong>Someword</strong> and many wo... |
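A capture group is one way; `\g<0>` (the whole match) works too and spares the explicit group, and `re.escape` guards against metacharacters in the search word. A sketch:

```python
import re

word = "someword"
text = "Someword and many words with someword"
pattern = re.compile(re.escape(word), re.IGNORECASE)
# \g<0> substitutes whatever the pattern matched, preserving its original case.
result = pattern.sub(r'<strong>\g<0></strong>', text)
```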
Python set datetime hour to be a specific time | 23,642,676 | 7 | 2014-05-13T22:29:24Z | 23,642,695 | 11 | 2014-05-13T22:31:46Z | [
"python",
"datetime"
] | I am trying to get the date to be yesterday at 11.30 PM.
Here is my code:
```
import datetime
yesterday = datetime.date.today () - datetime.timedelta (days=1)
PERIOD=yesterday.strftime ('%Y-%m-%d')
new_period=PERIOD.replace(hour=23, minute=30)
print new_period
```
however i am getting this erro... | First, change `datetime.date.today()` to `datetime.datetime.today()` so that you can manipulate the time of the day.
Then call `replace` before turning the time into a string.
So instead of:
```
PERIOD=yesterday.strftime ('%Y-%m-%d')
new_period=PERIOD.replace(hour=23, minute=30)
```
Do this:
```
new_period=yester... |
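Put together, the corrected version of the asker's snippet (call `replace` on the `datetime` first, format afterwards):

```python
import datetime

yesterday = datetime.datetime.today() - datetime.timedelta(days=1)
# Set the time fields on the datetime, then turn the result into a string.
new_period = yesterday.replace(hour=23, minute=30, second=0, microsecond=0)
formatted = new_period.strftime('%Y-%m-%d %H:%M')
```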
RuntimeError: cannot access configuration outside request | 23,650,544 | 10 | 2014-05-14T09:23:19Z | 23,651,042 | 14 | 2014-05-14T09:44:43Z | [
"python",
"flask"
] | I got following error:
```
RuntimeError: cannot access configuration outside request
```
from executing following code:
```
# -*- coding: utf-8 -*-
from flask import Flask, request, render_template, redirect, url_for
from flaskext.uploads import UploadSet, configure_uploads, patch_request_class
app = Flask(__name_... | You haven't [configured the Flask-Uploads extension](http://pythonhosted.org/Flask-Uploads/#app-configuration). Use the [`configure_uploads()` function](http://pythonhosted.org/Flask-Uploads/#flaskext.uploads.configure_uploads) to attach your upload sets to your app:
```
from flaskext.uploads import UploadSet, configu... |
CodeAcademy - Python - Student Becomes Teacher 8/9 - Part of the Whole | 23,652,629 | 3 | 2014-05-14T10:55:28Z | 23,652,673 | 8 | 2014-05-14T10:57:25Z | [
"python"
] | I assume that many of you are familiar with the CodeAcademy Python class. As the title says i am at the point where i have to get the average score of the class. Here it is what i did :
```
def get_class_average(students):
results = []
for student in students:
results.append(get_average(student))
... | The indentation of your `return` statement is wrong. Currently, it is returning after the first iteration of the loop. Here is the proper indentation:
```
def get_class_average(students):
results = []
for student in students:
results.append(get_average(student))
return average(results)
```
You can... |
How to find all possible regex matches in python? | 23,654,329 | 12 | 2014-05-14T12:08:41Z | 23,654,544 | 30 | 2014-05-14T12:18:01Z | [
"python",
"regex",
"parentheses"
] | I am trying to find all possible word/tag pairs or other nested combinations with python and its regular expressions.
```
sent = '(NP (NNP Hoi) (NN Hallo) (NN Hey) (NNP (NN Ciao) (NN Adios)))'
def checkBinary(sentence):
n = re.findall("\([A-Za-z-0-9\s\)\(]*\)", sentence)
print(n)
checkBinary(sent)
Output:
[... | it's actually not possible to do this by using regular expressions, because regular expressions express a language defined by a ***regular*** grammar that can be solved by a non finite deterministic automaton, where matching is represented by states ; then to match nested parenthesis, you'd need to be able to match an ... |
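Since a single regex cannot track nesting depth, a short stack-based scan (a sketch, not from the answer) recovers every balanced group, including the nested ones:

```python
def paren_spans(sentence):
    # Push the index of each '('; each ')' closes the most recent open one.
    stack, spans = [], []
    for i, ch in enumerate(sentence):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            spans.append((stack.pop(), i + 1))
    return spans

sent = '(a (b) (c (d)))'
groups = [sent[i:j] for i, j in paren_spans(sent)]
# ['(b)', '(d)', '(c (d))', '(a (b) (c (d)))']
```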
Printing boolean values True/False with the format() method in Python | 23,655,005 | 17 | 2014-05-14T12:37:59Z | 23,666,923 | 8 | 2014-05-14T23:36:06Z | [
"python",
"python-2.7",
"boolean",
"string-formatting",
"string.format"
] | I was trying to print a truth table for Boolean expressions. While doing this, I stumbled upon the following:
```
>>> format(True, "") # shows True in a string representation, same as str(True)
'True'
>>> format(True, "^") # centers True in the middle of the output string
'1'
```
As soon as I specify a format specifi... | Excellent question! I believe I have the answer. This requires digging around through the Python source code in C, so bear with me.
First, `format(obj, format_spec)` is just syntactic sugar for `obj.__format__(format_spec)`. For specifically where this occurs, you'd have to look in [abstract.c](https://github.com/pyth... |
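The behaviour is easy to confirm: an empty spec falls back to `str()`, while any non-empty spec is delegated to `int.__format__`, which renders the underlying integer. Formatting `str(True)` keeps the text.

```python
assert format(True, '') == 'True'             # empty spec -> str(True)
assert format(True, 'd') == '1'               # non-empty spec -> int formatting
assert format(str(True), '^8') == '  True  '  # center the text instead
```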
Classes with exception | 23,657,545 | 4 | 2014-05-14T14:28:22Z | 23,657,642 | 8 | 2014-05-14T14:32:18Z | [
"python",
"exception"
] | I was going through the Documentation of exceptions in python :-(<https://docs.python.org/2/tutorial/classes.html#exceptions-are-classes-too>)
I can't seem to find how this code works
```
class B:
pass
class C(B):
pass
class D(C):
pass
for c in [B, C, D]:
try:
raise c()
except D:
... | Since `C` and `D` are subclasses of `B`, they are all caught by the `except B` clause. When catching exceptions, you need to always list the `except` clauses from most to least specific, because your exception is caught by the first one that applies.
From the [documentation](https://docs.python.org/2/reference/compoun... |
Access Github API using Personal Access Token with Python urllib2 | 23,659,744 | 6 | 2014-05-14T16:03:46Z | 23,874,995 | 13 | 2014-05-26T17:19:08Z | [
"python",
"api",
"github",
"authorization",
"urllib2"
] | I am accessing the Github API v3, it was working fine until I hit the rate limit, so I created a Personal Access Token from the Github settings page. I am trying to use the token with urllib2 and the following code:
```
from urllib2 import urlopen, Request
url = "https://api.github.com/users/vhf/repos"
token = "my_pe... | I don't know why this question was marked down. Anyway, I found an answer:
```
from urllib2 import urlopen, Request
url = "https://api.github.com/users/vhf/repos"
token = "my_personal_access_token"
request = Request(url)
request.add_header('Authorization', 'token %s' % token)
response = urlopen(request)
print(respons... |
Variance inflation factor in ridge regression in python | 23,660,120 | 5 | 2014-05-14T16:22:12Z | 23,665,644 | 7 | 2014-05-14T21:42:16Z | [
"python",
"scikit-learn",
"statsmodels"
] | I'm running a ridge regression on somewhat collinear data. One of the methods used to identify a stable fit is a ridge trace and thanks to the great example on [scikit-learn](http://scikit-learn.org/stable/auto_examples/linear_model/plot_ridge_path.html#example-linear-model-plot-ridge-path-py), I'm able to do that. Ano... | Variance inflation factor for Ridge regression is just three lines. I checked it with the example on the UCLA statistics page.
A variation of this will make it into the next statsmodels release. Here is my current function:
```
def vif_ridge(corr_x, pen_factors, is_corr=True):
"""variance inflation factor for Rid... |
How can I sort a list of Python classes by inheritance depth? | 23,660,447 | 5 | 2014-05-14T16:40:07Z | 23,660,922 | 7 | 2014-05-14T17:06:11Z | [
"python",
"class",
"oop",
"method-resolution-order"
] | How would I sort a list of classes in Python, making sure that any child class is before any of its parent classes in the list?
I ask because I want to see what type out of a list of types, an object belongs to, but find the most specific type, if it belongs to multiple. | Simply sort by `len(cls.mro())`.
If `C2` is a subclass of `C1`, it must hold that `len(C1.mro()) < len(C2.mro())` (because each class in `C1.mro()` must also appear in `C2.mro()`). Therefore, you can simply sort by the length of the [mro list](https://stackoverflow.com/questions/2010692/what-does-mro-do-in-python):
``... |
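Applied to a small hierarchy, sorting longest MRO first puts every subclass before its parents:

```python
class A: pass
class B(A): pass
class C(B): pass

classes = [A, C, B]
# Longest MRO first => most specific class first.
classes.sort(key=lambda cls: len(cls.mro()), reverse=True)
# [C, B, A]
```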
Conditional numpy.cumsum? | 23,662,704 | 9 | 2014-05-14T18:47:07Z | 23,664,049 | 11 | 2014-05-14T20:02:31Z | [
"python",
"arrays",
"numpy",
"gis",
"cumsum"
] | I'm very new to python and numpy, so sorry if I misuse some terminology.
I have converted a raster to a 2D numpy array in the hopes of doing calculations on it quickly and efficiently.
* I need to get the cumulative sum across a numpy array such that, for
each value, I generate the sum of all values that are less t... | Depending on how you want to handle repeats, this could work:
```
In [40]: a
Out[40]: array([4, 4, 2, 1, 0, 3, 3, 1, 0, 2])
In [41]: a_unq, a_inv = np.unique(a, return_inverse=True)
In [42]: a_cnt = np.bincount(a_inv)
In [44]: np.cumsum(a_unq * a_cnt)[a_inv]
Out[44]: array([20, 20, 6, 2, 0, 12, 12, 2, 0, 6], ... |
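Restated as a runnable script, with a brute-force cross-check showing what the trick computes (for each entry, the sum of all values less than or equal to it, repeats included):

```python
import numpy as np

a = np.array([4, 4, 2, 1, 0, 3, 3, 1, 0, 2])
a_unq, a_inv = np.unique(a, return_inverse=True)  # unique values + index map
a_cnt = np.bincount(a_inv)                        # multiplicity of each unique value
result = np.cumsum(a_unq * a_cnt)[a_inv]          # broadcast back to a's shape
```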
Does enumerate() produce a generator object? | 23,663,231 | 26 | 2014-05-14T19:14:43Z | 23,663,453 | 25 | 2014-05-14T19:26:53Z | [
"python"
] | As a complete Python newbie, it certainly looks that way. Running the
following...
```
x = enumerate(['fee', 'fie', 'foe'])
x.next()
# Out[1]: (0, 'fee')
list(x)
# Out[2]: [(1, 'fie'), (2, 'foe')]
list(x)
# Out[3]: []
```
... I notice that: (a) `x` does have a `next` method, as seems to be
required for generators, ... | While the Python documentation says that `enumerate` is functionally equivalent to:
```
def enumerate(sequence, start=0):
n = start
for elem in sequence:
yield n, elem
n += 1
```
The real `enumerate` function returns an **iterator**, but not an actual generator. You can see this if you call `h... |
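The distinction can be checked directly (in Python 3, where the method is `__next__` rather than the question's Python 2 `next`):

```python
import types

e = enumerate(['fee', 'fie', 'foe'])
# It satisfies the iterator protocol: iter(e) is e, and it has __next__...
assert iter(e) is e
assert hasattr(e, '__next__')
# ...but it is its own built-in type, not a generator object.
print(type(e).__name__)                    # enumerate
print(isinstance(e, types.GeneratorType))  # False
```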
Does enumerate() produce a generator object? | 23,663,231 | 26 | 2014-05-14T19:14:43Z | 23,667,735 | 8 | 2014-05-15T01:19:57Z | [
"python"
] | As a complete Python newbie, it certainly looks that way. Running the
following...
```
x = enumerate(['fee', 'fie', 'foe'])
x.next()
# Out[1]: (0, 'fee')
list(x)
# Out[2]: [(1, 'fie'), (2, 'foe')]
list(x)
# Out[3]: []
```
... I notice that: (a) `x` does have a `next` method, as seems to be
required for generators, ... | ### Testing for enumerate types:
I would include this important test in an exploration of the enumerate type and how it fits into the Python language:
```
>>> import collections
>>> e = enumerate('abc')
>>> isinstance(e, enumerate)
True
>>> isinstance(e, collections.Iterable)
True
>>> isinstance(e, collections.Iterat... |
How to update DjangoItem in Scrapy | 23,663,459 | 5 | 2014-05-14T19:27:07Z | 23,703,081 | 7 | 2014-05-16T19:20:43Z | [
"python",
"django",
"scrapy"
] | I've been working with Scrapy but have run into a bit of a problem.
`DjangoItem` has a `save` method to persist items using the Django ORM. This is great, except that if I run a scraper multiple times, new items will be created in the database even though I may just want to update a previous value.
After looking at the do... | Unfortunately, the best way that I found to accomplish this is to do exactly what was stated: Check if the item exists in the database using `django_model.objects.get`, then update it if it does.
In my settings file, I added the new pipeline:
```
ITEM_PIPELINES = {
# ...
# Last pipeline, because further chang... |
pandas: conditional count across row | 23,663,623 | 2 | 2014-05-14T19:37:05Z | 23,663,735 | 7 | 2014-05-14T19:43:21Z | [
"python",
"pandas"
] | I have a dataframe that has months for columns, and various departments for rows.
```
2013April 2013May 2013June
Dep1 0 10 15
Dep2 10 15 20
```
I'm looking to add a column that counts the number of months that have a value greater than 0. Ex... | first, pick your columns, `cols`
```
df[cols].apply(lambda s: (s > 0).sum(), axis=1)
```
this takes advantage of the fact that `True` and `False` are `1` and `0` respectively in python.
## actually, there's a better way:
```
(df[cols] > 0).sum(1)
```
because this takes advantage of numpy vectorization
```
%timeit... |
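The vectorized version from the answer, applied to the question's small frame (column values are taken from the example):

```python
import pandas as pd

df = pd.DataFrame(
    {'2013April': [0, 10], '2013May': [10, 15], '2013June': [15, 20]},
    index=['Dep1', 'Dep2'])
cols = ['2013April', '2013May', '2013June']
# (df[cols] > 0) is a boolean frame; summing across axis 1 counts the Trues.
df['months_gt0'] = (df[cols] > 0).sum(axis=1)
print(df['months_gt0'].tolist())  # [2, 3]
```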
Pandas equivalent of Oracle Lead/Lag function | 23,664,877 | 4 | 2014-05-14T20:51:34Z | 23,664,988 | 10 | 2014-05-14T20:58:21Z | [
"python",
"pandas"
] | First I'm new to pandas, but I'm already falling in love with it. I'm trying to implement the equivalent of the Lag function from Oracle.
Let's suppose you have this DataFrame:
```
Date Group Data
2014-05-14 09:10:00 A 1
2014-05-14 09:20:00 A 2
2014-05-14 09:30:00 ... | You could perform a [groupby/apply (shift) operation](http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine):
```
In [15]: df['Data_lagged'] = df.groupby(['Group'])['Data'].shift(1)
In [16]: df
Out[16]:
Date Group Data Data_lagged
2014-05-14 09:10:00 A 1 ... |
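A self-contained sketch of the groupby/shift lag (the data here is a simplified stand-in for the question's frame):

```python
import pandas as pd

df = pd.DataFrame({'Group': ['A', 'A', 'A', 'B', 'B'],
                   'Data':  [1, 2, 3, 10, 20]})
# shift(1) within each group mimics Oracle's LAG(Data, 1) OVER
# (PARTITION BY Group); the first row of each group becomes NaN.
df['Data_lagged'] = df.groupby('Group')['Data'].shift(1)
print(df['Data_lagged'].tolist())  # [nan, 1.0, 2.0, nan, 10.0]
```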
Use Python to write CSV output to STDOUT | 23,665,264 | 14 | 2014-05-14T21:14:51Z | 23,665,338 | 31 | 2014-05-14T21:19:33Z | [
"python",
"python-3.x",
"stdout"
] | I know I can write a CSV file with something like:
```
with open('some.csv', 'w', newline='') as f:
```
How would I instead write that output to STDOUT? | [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout) is a file object corresponding to the program's standard output. You can use its `write()` method. Note that it's probably not necessary to use the `with` statement, because `stdout` does not have to be opened or closed.
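A minimal sketch of the answer's approach: `csv.writer` accepts any object with a `write()` method, so `sys.stdout` can be passed directly, with no `with` block needed.

```python
import csv
import sys

# No open()/close(): stdout is already an open text stream.
writer = csv.writer(sys.stdout)
writer.writerow(['name', 'score'])
writer.writerow(['alice', 3])
```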
So, if you need to create a ... |
Difference between setUpClass and setUp in Python unittest | 23,667,610 | 15 | 2014-05-15T01:04:26Z | 23,670,844 | 23 | 2014-05-15T06:32:33Z | [
"python",
"python-2.7",
"vs-unit-testing-framework",
"python-unittest"
] | What is the difference between setUpClass and setUp in the Python unittest framework? Why not do the setup in setUp instead of setUpClass?
I want to understand which parts of the setup belong in the setUp and setUpClass functions, and likewise for tearDown and tearDownClass. | The difference manifests itself when you have more than one test method in your class. `setUpClass` and `tearDownClass` are run once for the whole class; `setUp` and `tearDown` are run before and after each test method.
For example:
```
class Example(unittest.TestCase):
@classmethod
    def setUpClass(cls):
... |
pandas joining multiple dataframes on columns | 23,668,427 | 16 | 2014-05-15T02:51:40Z | 23,671,390 | 19 | 2014-05-15T07:04:25Z | [
"python",
"pandas"
] | I have 3 CSV files. Each has the first column as the (string) names of people, while all the other columns in each dataframe are attributes of that person.
How can I "join" together all three CSV documents to create a single CSV with each row having all the attributes for each unique value of the person's string name?... | You could try this if you have 3 dataframes
```
# Merge multiple dataframes
df1 = pd.DataFrame(np.array([
['a', 5, 9],
['b', 4, 61],
['c', 24, 9]]),
columns=['name', 'attr11', 'attr12'])
df2 = pd.DataFrame(np.array([
['a', 5, 19],
['b', 14, 16],
['c', 4, 9]]),
columns=['name', 'attr21',... |
pandas joining multiple dataframes on columns | 23,668,427 | 16 | 2014-05-15T02:51:40Z | 30,512,931 | 50 | 2015-05-28T17:08:50Z | [
"python",
"pandas"
] | I have 3 CSV files. Each has the first column as the (string) names of people, while all the other columns in each dataframe are attributes of that person.
How can I "join" together all three CSV documents to create a single CSV with each row having all the attributes for each unique value of the person's string name?... | Assumed imports:
```
import pandas as pd
```
[John Galt's answer](http://stackoverflow.com/a/23671390/366309) is basically a `reduce` operation. If I have more than a handful of dataframes, I'd put them in a list like this (generated via list comprehensions or loops or whatnot):
```
dfs = [df0, df1, df2, dfN]
```
A... |
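The reduce-based merge can be sketched end to end with three toy frames (column names are hypothetical):

```python
from functools import reduce
import pandas as pd

df0 = pd.DataFrame({'name': ['a', 'b'], 'attr1': [1, 2]})
df1 = pd.DataFrame({'name': ['a', 'b'], 'attr2': [3, 4]})
df2 = pd.DataFrame({'name': ['a', 'b'], 'attr3': [5, 6]})
dfs = [df0, df1, df2]
# Fold the list together, joining each pair on the shared 'name' column.
merged = reduce(lambda left, right: pd.merge(left, right, on='name'), dfs)
print(list(merged.columns))  # ['name', 'attr1', 'attr2', 'attr3']
```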
Automatically load a virtualenv when running a script | 23,678,993 | 10 | 2014-05-15T13:00:43Z | 23,762,093 | 9 | 2014-05-20T14:07:21Z | [
"python",
"virtualenv"
] | I have a python script that needs dependencies from a virtualenv. I was wondering if there was some way I could add it to my path and have it auto start it's virtualenv, run and then go back to the system's python.
I've try playing around with autoenv and `.env` but that doesn't seem to do exactly what I'm looking for... | There are two ways to do this:
1. Put the path to the virtualenv's Python interpreter in the first line of the script, like this:
#!/your/virtual/env/path/bin/python
2. Add the virtual environment's directories to `sys.path` (note that you need to import the `sys` module), like this:
import sys
sys.path.append('/path/to/virtual/env/... |
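A minimal sketch of option 2; the site-packages path below is hypothetical and must be replaced with your virtualenv's actual path:

```python
import sys

# Hypothetical virtualenv site-packages directory; substitute your own.
venv_site = '/path/to/virtual/env/lib/python3.11/site-packages'
# Prepending makes the virtualenv's packages win over system-wide ones.
sys.path.insert(0, venv_site)
```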
Python Removing Non Latin Characters | 23,680,976 | 4 | 2014-05-15T14:20:11Z | 23,681,038 | 10 | 2014-05-15T14:22:33Z | [
"python",
"string",
"python-2.7",
"unicode"
] | How can I delete all the non latin characters from a string? More specifically, is there a way to find out Non Latin characters from unicode data? | Using the third-party [regex module](https://pypi.python.org/pypi/regex), you could remove all non-Latin characters with
```
import regex
result = regex.sub(ur'[^\p{Latin}]', u'', text)
```
If you don't want to use the regex module, [this page](http://www.regular-expressions.info/unicode.html) lists Latin unicode blo... |
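For a stdlib-only alternative to the third-party `regex` module, the Unicode character name can be inspected. Note this crude filter also drops digits, whitespace, and punctuation, so it is only a sketch:

```python
import unicodedata

def strip_non_latin(text):
    # Keep only characters whose Unicode name marks them as Latin script.
    return ''.join(ch for ch in text
                   if 'LATIN' in unicodedata.name(ch, ''))

print(strip_non_latin('abc\u0434\u65e5\u00e9'))  # 'abcé'
```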
Python Pymongo auth failed | 23,682,933 | 3 | 2014-05-15T15:42:21Z | 29,590,420 | 8 | 2015-04-12T14:17:50Z | [
"python",
"mongodb",
"pymongo"
] | Pymongo keep failing to login MongoDB. I typed right password for "root" account.
```
Traceback (most recent call last):
File "index.py", line 3, in <module>
from apis import app
File "/home/app/apis/__init__.py", line 16, in <module>
import apis.call
File "/home/app/apis/call.py", line 12, in <module>
... | In my case, I upgraded pymongo
```
pip install --upgrade pymongo
``` |
Iterate a to zzz in python | 23,686,398 | 3 | 2014-05-15T18:50:05Z | 23,686,531 | 7 | 2014-05-15T18:57:10Z | [
"python",
"increment",
"alphabet"
] | So I need to get a function that generates a list of letters that increase from a, and end in zzz.
Should look like this:
```
a
b
c
...
aa
ab
ac
...
zzx
zzy
zzz
```
The code I currently have is this:
```
for combo in product(ascii_lowercase, repeat=3):
print(''.join(combo))
```
However, this does only ... | Add another loop:
```
for x in range(1, 4):
for combo in product(ascii_lowercase, repeat=x):
print(''.join(combo))
```
Output is as follows:
```
a
...
aa
...
aaa
...
zzz
```
Where `...` is a huge number of combinations. |
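The nested loop can be collapsed into one comprehension; the total count is 26 + 26² + 26³ = 18278 strings:

```python
from itertools import product
from string import ascii_lowercase

words = [''.join(combo)
         for n in range(1, 4)
         for combo in product(ascii_lowercase, repeat=n)]
print(words[0], words[25], words[26], words[-1])  # a z aa zzz
print(len(words))  # 18278
```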
IPython Notebook Tab-Complete -- Show Docstring | 23,687,011 | 8 | 2014-05-15T19:27:16Z | 23,702,175 | 10 | 2014-05-16T18:22:13Z | [
"python",
"ipython",
"ipython-notebook"
] | I just upgraded to IPython 2.0.0 and the behavior of tab-complete seems to have changed for the worse. (Using pyreadline 2.0, which according to [this question](http://stackoverflow.com/questions/2603798/ipython-tab-completion-not-working) might matter).
Formerly, if I hit TAB after `function_name(`, IPython would sho... | Apparently it's now Shift-Tab. Thanks @Thomas K. |
Why does PyCharm give unresolved reference errors on some Numpy imports? | 23,689,183 | 7 | 2014-05-15T21:41:19Z | 23,700,566 | 9 | 2014-05-16T16:47:57Z | [
"python",
"python-2.7",
"numpy",
"pycharm"
] | The following line in PyCharm is flagged by on-the-fly inspection with unresolved reference errors for each import. (They are underlined red.)
```
from numpy import tan, arcsin, arccos, arctan
```
However the following imports do not cause any error/warning:
```
from numpy import sin, cos, arctan2, sqrt, cross, pi
`... | The reason you are getting this is because of PyCharm's static analysis. Now, what Python does is use static skeletons (some are pre-generated and some are generated) to give you the analysis. Take a look at the pre-generated skeletons here -> <https://github.com/JetBrains/python-skeletons>
This might be solved, by en... |
Can't open file "django-admin.py": No such file or directory -- Mac OSX | 23,690,944 | 2 | 2014-05-16T00:47:58Z | 23,690,982 | 12 | 2014-05-16T00:52:15Z | [
"python",
"django",
"osx",
"python-2.7"
] | I have just done a fresh install of Python 2.7 and Django 1.6.5.
Entering the python interpreter in the terminal and using `import django` and `django.VERSION` yields:
`(1, 6, 5, 'final', 0)`
But when I attempt to run `python django-admin.py startproject myproject`, I get the following:
`/Library/Frameworks/Python.framework/V... | Try running without `python`
```
django-admin.py startproject myproject
```
**Explanation:**
It's because `django-admin.py` lives in one of the `bin` directories on your `PATH`, so the shell can find it directly, whereas `python django-admin.py` expects a file path relative to the current directory.
Split pandas dataframe based on groupby | 23,691,133 | 5 | 2014-05-16T01:10:06Z | 23,691,168 | 17 | 2014-05-16T01:15:12Z | [
"python",
"pandas"
] | I want to split the following dataframe based on column ZZ
```
df =
N0_YLDF ZZ MAT
0 6.286333 2 11.669069
1 6.317000 6 11.669069
2 6.324889 6 11.516454
3 6.320667 5 11.516454
4 6.325556 5 11.516454
5 6.359000 6 11.516454
6 6.359000 6 11.516454
... | ```
gb = df.groupby('ZZ')
[gb.get_group(x) for x in gb.groups]
``` |
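The two-line answer in full context, on a reduced version of the question's frame (pandas sorts the group keys by default, so the sub-frames come out in key order 2, 5, 6):

```python
import pandas as pd

df = pd.DataFrame({'N0_YLDF': [6.28, 6.31, 6.32, 6.32],
                   'ZZ': [2, 6, 6, 5]})
gb = df.groupby('ZZ')
# One sub-frame per distinct ZZ value.
parts = [gb.get_group(x) for x in gb.groups]
print([len(p) for p in parts])  # [1, 1, 2]
```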
Running Cython in Windows x64 - fatal error C1083: Cannot open include file: 'basetsd.h': No such file or directory | 23,691,564 | 10 | 2014-05-16T02:08:01Z | 25,734,953 | 7 | 2014-09-09T00:16:23Z | [
"python",
"windows",
"python-2.7",
"cython",
"python-extensions"
] | I have been trying to install Cython for Python 2.7 on my Window 7 system. In particular, I prefer everything in 64 bits. (In case you wonder, I need Cython because Cython is one of the components I need for another package for some specialized numerical analysis. And x64 is potentially an advantage for storage of larg... | I encountered this problem while trying to install `pandas` in 'develop' mode. I'm up & running now. My environment:
* Windows XP Pro x64 SP2
* [WinPython](http://winpython.sourceforge.net) 64bit 2.7.5.3 (Python 2.7.5.amd64) `<--` "registered" as system's python distro using WinPython Control Panel application
Proble... |
Adjusting Text background transparency | 23,696,898 | 6 | 2014-05-16T13:50:07Z | 23,698,794 | 8 | 2014-05-16T15:12:57Z | [
"python",
"text",
"matplotlib",
"plot",
"transparency"
] | I'm trying to put some text with a background on a matplotlib figure, with the text and background both transparent. The following code
```
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
ax = plt.subplot(111)
plt.plot(np.linspace(1,0,1000))
t = plt.text(0.03,.95,'text',transform=ax.transAxes,backgroun... | The `alpha` passed to `plt.text()` will change the transparency of the text font. To change the background you have to change the `alpha` using `Text.set_bbox()`:
```
t = plt.text(0.5, 0.5, 'text', transform=ax.transAxes, fontsize=30)
t.set_bbox(dict(color='red', alpha=0.5, edgecolor='red'))
```
![enter image descrip... |