| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Pandas group by and sum two columns | 25,536,032 | 2 | 2014-08-27T20:05:21Z | 25,536,576 | 7 | 2014-08-27T20:42:57Z | [
"python",
"pandas"
] | Beginner question. This seems like it should be a straightforward operation, but I can't figure it out from reading the docs.
I have a df with this structure:
```
|integer_id|int_field_1|int_field_2|
```
The integer\_id column is non-unique, so I'd like to group the df by integer\_id and sum the two fields.
The equ... | You just need to call `sum` on a `groupby` object:
```
df.groupby('integer_id').sum()
```
See the [docs](http://pandas.pydata.org/pandas-docs/stable/groupby.html#dataframe-column-selection-in-groupby) for further examples |
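A minimal runnable sketch of the `groupby`-plus-`sum` approach from the answer above; the column names mirror the question, while the data values are made up for illustration:

```python
import pandas as pd

# Non-unique integer_id, as in the question; values are illustrative only.
df = pd.DataFrame({
    'integer_id': [1, 1, 2],
    'int_field_1': [10, 20, 30],
    'int_field_2': [1, 2, 3],
})

# Group rows sharing an integer_id and sum the remaining integer columns.
summed = df.groupby('integer_id').sum()
print(summed)
```

The result is indexed by `integer_id`, with each field column holding the per-group sums.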
pandas dataframe selecting the nan indexes | 25,536,133 | 5 | 2014-08-27T20:11:38Z | 25,536,283 | 7 | 2014-08-27T20:21:58Z | [
"python",
"pandas"
] | I have a dataframe `df` with the following:
```
In [10]: df.index.unique()
Out[10]: array([u'DC', nan, u'BS', u'AB', u'OA'], dtype=object)
```
I can easily select out df.ix["DC"], df.ix["BS"], etc. But I'm having trouble selecting the `nan` indexes.
```
df.ix[nan], df.ix["nan"], df.ix[np.nan] all won't work.
```
Ho... | One way would be to use `pd.isnull` to identify the location of the NaNs:
```
In [218]: df
Out[218]:
Date Name val
DC 0 A 0
NaN 1 B 1
BS 2 C 2
AB 0 A 3
OA 1 B 4
NaN 2 C 5
In [219]: pd.isnull(df.index)
Out[219]: array([False, True, False, False, F... |
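A self-contained sketch of the `pd.isnull` technique above, using a small made-up frame with a partly-NaN index (the column and labels are assumptions, not the asker's data):

```python
import numpy as np
import pandas as pd

# An index containing NaN entries, as in the question; values are made up.
df = pd.DataFrame({'val': [0, 1, 2]},
                  index=['DC', np.nan, 'BS'])

# pd.isnull on the index yields a boolean mask usable for row selection.
mask = pd.isnull(df.index)
nan_rows = df[mask]
print(nan_rows)  # only the row whose index label is NaN
```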
Better way, of writing a Long "if" statements? | 25,537,309 | 5 | 2014-08-27T21:34:22Z | 25,537,337 | 7 | 2014-08-27T21:35:54Z | [
"python",
"if-statement"
] | Hello, this might be a very basic question, but because I am new to programming and Python and really want to learn, I am asking.
I am making a program that takes input from the user for the "Playing Card" suit he has, and the program only accepts a correct suit.
For example;
> Diamonds, Hearts, Clubs, Spades
if t... | ```
if suit in ("Diamonds","Hearts","Clubs","Spades"):
return "Accepted"
else:
return "Wrong input"
```
Just use `in` to check for membership; if `suit` is not in the tuple, your `else` clause will be executed.
[membership operators](http://www.tutorialspoint.com/python/membership_operators_example.htm)
You ... |
Better way, of writing a Long "if" statements? | 25,537,309 | 5 | 2014-08-27T21:34:22Z | 25,537,348 | 8 | 2014-08-27T21:36:42Z | [
"python",
"if-statement"
] | Hello, this might be a very basic question, but because I am new to programming and Python and really want to learn, I am asking.
I am making a program that takes input from the user for the "Playing Card" suit he has, and the program only accepts a correct suit.
For example;
> Diamonds, Hearts, Clubs, Spades
if t... | You can use `in` to check if `suit` is in a list, tuple or set of accepted suits:
```
if suit in {"Diamonds", "Hearts", "Clubs", "Spades"}:
return "Accepted"
else:
return "Wrong input"
```
You can also use a `tuple` (with `(...)`), `list` (`[...]`) or `frozenset` instead of the `set` (`{...}`). |
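The membership check from the two answers above, in a self-contained form (the function wrapper is my own addition for illustration, not from either answer):

```python
# Set literal, as in the second answer; a tuple or list works the same with `in`,
# but a set gives O(1) average-case lookup.
VALID_SUITS = {"Diamonds", "Hearts", "Clubs", "Spades"}

def check_suit(suit):
    # Membership test replaces a long chain of `if suit == ... or suit == ...`.
    if suit in VALID_SUITS:
        return "Accepted"
    else:
        return "Wrong input"

print(check_suit("Hearts"))  # Accepted
print(check_suit("Rubies"))  # Wrong input
```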
ANOVA in python using pandas dataframe with statsmodels or scipy? | 25,537,399 | 18 | 2014-08-27T21:41:10Z | 28,756,099 | 7 | 2015-02-27T01:40:31Z | [
"python",
"pandas",
"scipy",
"statsmodels",
"anova"
] | I want to use the Pandas dataframe to breakdown the variance in one variable.
For example, if I have a column called 'Degrees', and I have this indexed for various dates, cities, and night vs. day, I want to find out what fraction of the variation in this series is coming from cross-sectional city variation, how much ... | I set up a direct comparison to test them, found that their assumptions can [differ slightly](http://stackoverflow.com/questions/28755617/why-do-r-and-statsmodels-give-slightly-different-anova-results "differ slightly") , got a hint from a statistician, and here is an example of ANOVA on a pandas dataframe matching R's... |
Django 1.7 throws django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet | 25,537,905 | 128 | 2014-08-27T22:24:57Z | 25,648,427 | 50 | 2014-09-03T15:38:05Z | [
"python",
"django"
] | This is the traceback on my windows system.
```
Traceback (most recent call last):
File "D:\AMD\workspace\steelrumors\manage.py", line 9, in <module>
django.setup()
File "D:\AMD\Django\django-django-4c85a0d\django\__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
  File "D:\AMD\Django\... | The issue is in your registration app. It seems django-registration calls `get_user_model()` in `models.py` at a module level (when models are still being loaded by the application registration process). This will no longer work:
```
try:
from django.contrib.auth import get_user_model
User = get_user_model()
... |
Django 1.7 throws django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet | 25,537,905 | 128 | 2014-08-27T22:24:57Z | 26,215,548 | 189 | 2014-10-06T11:44:23Z | [
"python",
"django"
] | This is the traceback on my windows system.
```
Traceback (most recent call last):
File "D:\AMD\workspace\steelrumors\manage.py", line 9, in <module>
django.setup()
File "D:\AMD\Django\django-django-4c85a0d\django\__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "D:\AMD\Django\... | Running these commands solved my problem (credit to [this answer](http://stackoverflow.com/a/25244833/977116)):
```
import django
django.setup()
```
However I'm not sure why I need this. Comments would be appreciated. |
Django 1.7 throws django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet | 25,537,905 | 128 | 2014-08-27T22:24:57Z | 27,426,582 | 38 | 2014-12-11T15:35:46Z | [
"python",
"django"
] | This is the traceback on my windows system.
```
Traceback (most recent call last):
File "D:\AMD\workspace\steelrumors\manage.py", line 9, in <module>
django.setup()
File "D:\AMD\Django\django-django-4c85a0d\django\__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "D:\AMD\Django\... | This is what solved it for us and [these folks](http://stackoverflow.com/a/26636758/48735):
Our project started with Django 1.4, we went to 1.5 and then to 1.7. Our wsgi.py looked like this:
```
import os
from django.core.handlers.wsgi import WSGIHandler
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
appli... |
Django 1.7 throws django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet | 25,537,905 | 128 | 2014-08-27T22:24:57Z | 27,845,576 | 18 | 2015-01-08T17:01:40Z | [
"python",
"django"
] | This is the traceback on my windows system.
```
Traceback (most recent call last):
File "D:\AMD\workspace\steelrumors\manage.py", line 9, in <module>
django.setup()
File "D:\AMD\Django\django-django-4c85a0d\django\__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
  File "D:\AMD\Django\... | Just encountered the same issue. The problem is that `django-registration` is incompatible with the Django 1.7 user model.
A simple fix is to change these lines of code in your installed `django-registration` module:
```
try:
from django.contrib.auth import get_user_model
User = get_user_model()
except Impor... |
Multiple histograms in Pandas | 25,539,195 | 6 | 2014-08-28T01:02:58Z | 25,539,531 | 8 | 2014-08-28T01:53:13Z | [
"python",
"matplotlib",
"pandas",
"histogram"
] | I would like to create the following histogram (see image below) taken from the book "Think Stats". However, I cannot get them on the same plot. Each DataFrame takes its own subplot.
I have the following code:
```
import nsfg
import matplotlib.pyplot as plt
df = nsfg.ReadFemPreg()
preg = nsfg.ReadFemPreg()
live = pre... | As far as I can tell, pandas can't handle this situation. That's ok since all of their plotting methods are for convenience only. You'll need to use matplotlib directly. Here's how I do it:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas
#import seaborn
#seaborn.set(style='ticks... |
Remove or adapt border of frame of legend using matplotlib | 25,540,259 | 21 | 2014-08-28T03:29:11Z | 25,540,279 | 43 | 2014-08-28T03:31:17Z | [
"python",
"matplotlib"
] | When plotting a plot using matplotlib:
1. How to remove the box of the legend?
2. How to change the color of the border of the legend box?
3. How to remove only the border of the box of the legend? | When plotting a plot using matplotlib:
How to remove the box of the legend?
```
plt.legend(frameon=False)
```
How to change the color of the border of the legend box?
```
leg = plt.legend()
leg.get_frame().set_edgecolor('b')
```
How to remove only the border of the box of the legend?
```
leg = plt.legend()
leg.ge... |
typeerror in basic example for legend handles in matplotlib | 25,551,467 | 3 | 2014-08-28T14:34:27Z | 25,551,618 | 7 | 2014-08-28T14:40:56Z | [
"python",
"matplotlib"
] | I have difficulty understanding legend handling. What's more, even the basic example from the official [matplotlib legend guide](http://matplotlib.org/users/legend_guide.html) fails:
```
import matplotlib.pyplot as plt
line_up, = plt.plot([1,2,3], label='Line 2')
line_down, = plt.plot([3,2,1], label='Line 1')
plt.legend(handle... | Just remove `handles` keyword
Use it like that:
```
import matplotlib.pyplot as plt
line_up, = plt.plot([1,2,3], label='Line 2')
line_down, = plt.plot([3,2,1], label='Line 1')
plt.legend([line_up, line_down])
``` |
How to find the difference between two lists of dictionaries? | 25,552,972 | 8 | 2014-08-28T15:48:05Z | 25,552,993 | 10 | 2014-08-28T15:49:37Z | [
"python",
"list",
"dictionary",
"set"
] | I have two lists of dictionaries and I'd like to find the difference between them (i.e. what exists in the first list but not the second, and what exists in the second list but not the first list).
The issue is that it is a list of dictionaries
```
a = [{'a': '1'}, {'c': '2'}]
b = [{'a': '1'}, {'b': '2'}]
set(a) - s... | You can use the `in` operator to see if it is in the list
```
a = [{'a': '1'}, {'c': '2'}]
b = [{'a': '1'}, {'b': '2'}]
>>> {'a':'1'} in a
True
>>> {'a':'1'} in b
True
>>> [i for i in a if i not in b]
[{'c': '2'}]
``` |
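Both directions of the comparison in the answer above can be sketched as follows (variable names are from the question; dicts are compared by equality, not identity):

```python
a = [{'a': '1'}, {'c': '2'}]
b = [{'a': '1'}, {'b': '2'}]

# Dicts aren't hashable, so set(a) - set(b) raises TypeError; list
# comprehensions with `in` (equality-based membership) work instead.
in_a_not_b = [d for d in a if d not in b]
in_b_not_a = [d for d in b if d not in a]

print(in_a_not_b)  # [{'c': '2'}]
print(in_b_not_a)  # [{'b': '2'}]
```

Note this is O(len(a) * len(b)); for large lists, converting each dict to a frozenset of items first would allow set arithmetic.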
How to paste from clipboard on Heroku iPython? | 25,554,648 | 5 | 2014-08-28T17:27:12Z | 25,733,880 | 10 | 2014-09-08T22:11:29Z | [
"python",
"django",
"heroku",
"ipython"
] | Basically what I'm trying to do is get the `%paste` function to work when I run iPython on the heroku one-off dyno using the django-extension app: `heroku run python manage.py shell_plus`
The problem is when I try to paste into the console, I get the following error:
```
ERROR: Getting text from the clipboard on this... | Reposting as an answer:
`%paste` probably won't work on Heroku, because it will look at the server clipboard, not your local clipboard.
You can instead use `%cpaste`. This lets you paste in text using your regular terminal mechanisms (without auto-indent getting in the way, as it would if you tried to paste into IPyt... |
Python sharing a lock between processes | 25,557,686 | 11 | 2014-08-28T20:43:01Z | 25,558,333 | 21 | 2014-08-28T21:29:05Z | [
"python",
"locking",
"multiprocessing",
"share"
] | I am attempting to use a partial function so that pool.map() can target a function that has more than one parameter (in this case a Lock() object).
Here is example code (taken from an answer to a previous question of mine):
```
from functools import partial
def target(lock, iterable_item):
for item in items:
... | Sorry, I should have caught this in my answer to your other question. You can't pass normal `multiprocessing.Lock` objects to `Pool` methods, because they can't be pickled. There are two ways to get around this. One is to create [`Manager()`](https://docs.python.org/2.7/library/multiprocessing.html#managers) and pass a... |
pytest fixture is always returning a function | 25,559,265 | 4 | 2014-08-28T22:55:52Z | 25,560,086 | 11 | 2014-08-29T00:45:52Z | [
"python",
"testing",
"fixtures",
"py.test"
] | I want to be able to return a value from a fixture to multiple tests/test classes, but the value that gets passed is a function.
Here's my code:
```
import pytest
@pytest.fixture()
def user_setup():
user = {
'name': 'chad',
'id': 1
}
return user
@pytest.mark.usefixtures('user_setup')
cla... | When you use the `@pytest.mark.usefixtures` marker you still need to provide a similarly named input argument if you want that fixture to be injected in to your test function.
As described in the [py.test docs for fixtures](http://pytest.org/latest/builtin.html#fixtures-and-requests):
> The name of the fixture functi... |
Scrapy Installation Fails with error 'cannot open include: 'openssl/aes.h ' | 25,568,566 | 12 | 2014-08-29T12:44:18Z | 27,757,454 | 29 | 2015-01-03T17:19:55Z | [
"python",
"installation",
"scrapy",
"easy-install"
] | I am trying to install Scrapy with `easy_install -U Scrapy` but it ends up in a strange error "Can not open include file " while trying to install it. Does any one know what is going on? Here is my complete traceback:
```
C:\Users\Mubashar Kamran>easy_install -U Scrapy
Searching for Scrapy
Reading https://pypi.python.... | I got the same error installing a different Python app. I was missing the OpenSSL dev package; solved by:
```
sudo apt-get install libssl-dev
``` |
Scrapy Installation Fails with error 'cannot open include: 'openssl/aes.h ' | 25,568,566 | 12 | 2014-08-29T12:44:18Z | 33,093,489 | 9 | 2015-10-13T03:12:10Z | [
"python",
"installation",
"scrapy",
"easy-install"
] | I am trying to install Scrapy with `easy_install -U Scrapy` but it ends up in a strange error "Can not open include file " while trying to install it. Does any one know what is going on? Here is my complete traceback:
```
C:\Users\Mubashar Kamran>easy_install -U Scrapy
Searching for Scrapy
Reading https://pypi.python.... | On OSX
`brew install openssl` and then possibly `brew link openssl --force` if you are informed that links were not created.
Install Scrapy using the following command
`env CRYPTOGRAPHY_OSX_NO_LINK_FLAGS=1 LDFLAGS="$(brew --prefix openssl)/lib/libssl.a $(brew --prefix openssl)/lib/libcrypto.a" CFLAGS="-I$(brew --pre... |
scipy signal find_peaks_cwt not finding the peaks accurately? | 25,571,260 | 10 | 2014-08-29T15:16:12Z | 25,666,951 | 11 | 2014-09-04T13:36:21Z | [
"python",
"scipy",
"signal-processing",
"image-segmentation"
] | I've got a 1-D signal in which I'm trying to find the peaks. I'm looking to find them perfectly.
I'm currently doing:
```
import scipy.signal as signal
peaks = signal.find_peaks_cwt(data, np.arange(100,200))
```
The following is a graph with red spots which show the location of the peaks as found by `find_peaks_cwt(... | Solved, solution:
Filter data first:
```
window = signal.general_gaussian(51, p=0.5, sig=20)
filtered = signal.fftconvolve(window, data)
filtered = (np.average(data) / np.average(filtered)) * filtered
filtered = np.roll(filtered, -25)
```
Then use `argrelextrema` as per rapelpy's answer.
Result:
![enter imag... |
Mysterious for loop in python | 25,573,715 | 3 | 2014-08-29T17:56:45Z | 25,573,908 | 12 | 2014-08-29T18:10:39Z | [
"python",
"for-loop",
"sequence"
] | Let exp = [1,2,3,4,5]
If I then execute `x in exp`, it will give me false. But if I execute :
```
for x in exp:
if x==3:
print('True')
```
Then execute `x in exp`, it returns True. What's happening here? I didn't assign anything to x. Did I? I am really confused.
\*\*EDIT:\*\* Sorry if I didn't say this ... | Seems like you stumbled over `in` being somewhat overloaded in Python.
* with [`x in exp`](https://docs.python.org/3/reference/expressions.html#not-in), you are asking "Is `x` **in** `exp`?"
* with [`for x in exp: ...`](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement), you tell Python "**For... |
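The behaviour described above can be reproduced directly: a `for` loop target is an ordinary assignment, so the name survives after the loop (true in both Python 2 and 3 at module or function scope; only comprehensions get their own scope in Python 3):

```python
exp = [1, 2, 3, 4, 5]

# Before this loop runs, `x` is unbound and `x in exp` would raise NameError.
for x in exp:
    if x == 3:
        pass  # the loop body doesn't matter for the leak

# After the loop, x is simply bound to the last element iterated over...
print(x)         # 5
# ...so a later membership test succeeds for a reason unrelated to the loop body.
print(x in exp)  # True
```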
AttributeError: 'module' object has no attribute 'tests' | 25,575,073 | 42 | 2014-08-29T19:34:59Z | 25,575,664 | 97 | 2014-08-29T20:22:01Z | [
"python",
"django",
"python-2.7",
"python-unittest"
] | I'm running this command:
```
python manage.py test project.apps.app1.tests
```
and it causes this error:
> AttributeError: 'module' object has no attribute 'tests'
Below is my directory structure. I've also added app1 to my installed apps config.
```
Traceback (most recent call last):
File "manage.py", line 1... | I finally figured it out working on another problem. The problem was that my test couldn't find an import.
It looks like you get the above error if your test fails to import. This makes sense because the test suite can't import a broken test. At least I think this is what is going on because I fixed the import within ... |
AttributeError: 'module' object has no attribute 'tests' | 25,575,073 | 42 | 2014-08-29T19:34:59Z | 29,347,638 | 19 | 2015-03-30T13:34:40Z | [
"python",
"django",
"python-2.7",
"python-unittest"
] | I'm running this command:
```
python manage.py test project.apps.app1.tests
```
and it causes this error:
> AttributeError: 'module' object has no attribute 'tests'
Below is my directory structure. I've also added app1 to my installed apps config.
```
Traceback (most recent call last):
File "manage.py", line 1... | Use:
`./manage.py shell`
followed by
`import myapp.tests`
to find the nature of the import error. |
Plotting CDF of a pandas series in python | 25,577,352 | 9 | 2014-08-29T23:05:46Z | 26,394,108 | 14 | 2014-10-15T23:57:58Z | [
"python",
"pandas",
"series",
"cdf"
] | Is there a way to do this? I cannot see an easy way to interface pandas series with plotting a CDF. | I believe the functionality you're looking for is in the hist method of a Series object which wraps the hist() function in matplotlib
Here's the relevant documentation
```
In [10]: import matplotlib.pyplot as plt
In [11]: plt.hist?
...
Plot a histogram.
Compute and draw the histogram of *x*. The return value is a
t... |
Plotting CDF of a pandas series in python | 25,577,352 | 9 | 2014-08-29T23:05:46Z | 31,971,245 | 8 | 2015-08-12T16:57:35Z | [
"python",
"pandas",
"series",
"cdf"
] | Is there a way to do this? I cannot see an easy way to interface pandas series with plotting a CDF. | A CDF or cumulative distribution function plot is basically a graph with the sorted values on the X-axis and the cumulative distribution on the Y-axis. So, I would create a new series with the sorted values as index and the cumulative distribution as values.
First create an example series:
```
import pandas as pd
imp... |
Python - Access class variable from instance | 25,577,578 | 5 | 2014-08-29T23:40:35Z | 25,577,642 | 9 | 2014-08-29T23:49:58Z | [
"python"
] | I have this class:
```
class ReallyLongClassName:
static_var = 5
def instance_method(self):
ReallyLongClassName.static_var += 1
```
Is there some way to access the static variable using the self variable? I'd rather do something like `class(self).static_var += 1`, because long names are unreadable. | The answer is "yes, but…"
The best way to understand is to actually try it:
```
>>> class RLCN:
... static_var = 5
... def method1(self):
... RLCN.static_var += 1
... def method2(self):
... self.static_var += 1
>>> rlcn = RLCN()
>>> RLCN.static_var, rlcn.static_var
(5, 5)
>>> rlcn.static... |
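To avoid spelling out the long class name, `type(self)` reaches the class attribute; a sketch of that variant (note that `self.static_var += 1` would instead create an instance attribute shadowing the class one, and that with subclasses `type(self)` names the subclass, which changes the semantics):

```python
class ReallyLongClassName:
    static_var = 5

    def instance_method(self):
        # type(self) resolves to the instance's class, so the long name
        # never has to be repeated inside the method body.
        type(self).static_var += 1

obj = ReallyLongClassName()
obj.instance_method()
print(ReallyLongClassName.static_var)  # 6
```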
How to rename ForeignKey set field in Django Rest Framework | 25,577,875 | 5 | 2014-08-30T00:29:24Z | 25,577,876 | 7 | 2014-08-30T00:29:24Z | [
"python",
"django",
"django-rest-framework"
] | I am serializing Foreign key set using Django Rest Framework, I have following models:
```
class Transaction(models.Model):
...
class TransactionStatus(models.Model):
transaction = models.ForeignKey(Transaction)
...
```
I have a serializer for both of these models, one of them looks like this:
```
class ... | After a quick experimenting I have discovered that this will do the trick:
```
class TransactionSerializer(serializers.ModelSerializer):
changes = TransactionStatusSerializer(many=True, read_only=True, source='transactionstatus_set')
class Meta:
model = Transaction
depth = 1
fields = (... |
Get value of an input box using Selenium (Python) | 25,580,569 | 11 | 2014-08-30T08:36:04Z | 25,972,869 | 12 | 2014-09-22T11:20:52Z | [
"python",
"selenium",
"selenium-webdriver"
] | I am trying to extract the text in an input box,
```
<input type="text" name="inputbox" value="name" class="box">
```
I started with
```
input = driver.find_element_by_name("inputbox")
```
I tried input.getText() but I got
```
AttributeError: 'WebElement' object has no attribute 'getText'
``` | Use this to get the value of the input element:
```
input.get_attribute('value')
``` |
ipython: how to set terminal width | 25,583,428 | 11 | 2014-08-30T14:28:11Z | 25,619,121 | 10 | 2014-09-02T08:34:23Z | [
"python",
"numpy",
"ipython"
] | When I use `ipython terminal` and want to print a `numpy.ndarray` which has many columns, the lines are automatically broken somewhere around 80 characters (i.e. the line width is circa 80 chars):
```
z = zeros((2,20))
print z
```
Presumably, ipython expects that my terminal has 80 columns. In fact however, my ... | After some digging through the code, it appears that the variable you're looking for is `numpy.core.arrayprint._line_width`, which is 75 by default. Setting it to 160 worked for me:
```
>>> numpy.zeros((2, 20))
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
... |
ipython: how to set terminal width | 25,583,428 | 11 | 2014-08-30T14:28:11Z | 26,188,657 | 10 | 2014-10-04T01:09:15Z | [
"python",
"numpy",
"ipython"
] | When I use `ipython terminal` and want to print a `numpy.ndarray` which has many columns, the lines are automatically broken somewhere around 80 characters (i.e. the line width is circa 80 chars):
```
z = zeros((2,20))
print z
```
Presumably, ipython expects that my terminal has 80 columns. In fact however, my ... | You can see your current line width with
```
numpy.get_printoptions()['linewidth']
```
and set it with
```
numpy.set_printoptions(linewidth=160)
```
# Automatically set printing width
If you'd like the terminal width to be set automatically, you can have Python execute a startup script. So create a file `~/.python... |
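The `set_printoptions` call above can be checked programmatically; a small sketch (the array contents are arbitrary, chosen only to produce a wide row):

```python
import numpy as np

# Widen the printing line width from the ~75-char default.
np.set_printoptions(linewidth=160)

# A 20-element row now fits on a single line instead of wrapping.
text = repr(np.zeros((1, 20)))
longest = max(len(line) for line in text.splitlines())
print(longest)  # stays under the 160-char limit
```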
Running Python from Atom | 25,585,500 | 32 | 2014-08-30T18:23:35Z | 25,586,345 | 42 | 2014-08-30T20:10:58Z | [
"python",
"atom-editor"
] | In Sublime, we have an easy and convenient way to run Python, or almost any language for that matter, using `⌘` + `b` (or `ctrl` + `b`)
Where the code will run in a small window below the source code and can easily be closed with the escape key when no longer needed.
Is there a way to replicate this functionality with Gi... | The *script* package does exactly what you're looking for: <https://atom.io/packages/script>
The package's documentation also contains the key mappings, which you can easily customize. |
PyCharm false syntax error using turtle | 25,588,642 | 3 | 2014-08-31T03:15:08Z | 25,588,673 | 8 | 2014-08-31T03:21:03Z | [
"python",
"pycharm",
"turtle-graphics",
"code-inspection"
] | The code below works perfectly, however, PyCharm complains about syntax error in `forward(100)`
```
#!/usr/bin/python
from turtle import *
forward(100)
done()
```
Since `turtle` is a standard library I don't think that I need to do additional configuration, am I right? | The `forward` function is made available for importing by specifying [`__all__`](https://docs.python.org/2/tutorial/modules.html#importing-from-a-package) in the `turtle` module, relevant part from the [source code](https://github.com/python/cpython/blob/c7688b44387d116522ff53c0927169db45969f0e/Lib/turtle.py#L127-144...
How to freeze entire header row in openpyxl? | 25,588,918 | 4 | 2014-08-31T04:21:21Z | 25,589,350 | 9 | 2014-08-31T06:00:08Z | [
"python",
"openpyxl"
] | How to freeze entire header row in openpyxl?
So far I can only freeze the column:
```
# only freeze the column (freeze vertically)
cell = ws.cell('{}{}'.format(col, row_idx+1))
worksheet.freeze_panes = cell
``` | Make sure `cell` isn't on row one - `freeze_panes` will freeze rows above the given cell and columns to the left.
---
### Example:
```
from openpyxl import Workbook
wb = Workbook()
ws = wb.active
c = ws['B2']
ws.freeze_panes = c
wb.save('test.xlsx')
```
---
This will give you a blank worksheet with both row 1 and... |
python argparse - add action to subparser with no arguments? | 25,594,491 | 9 | 2014-08-31T17:28:49Z | 25,596,386 | 9 | 2014-08-31T21:18:56Z | [
"python",
"python-2.7",
"argparse"
] | I am adding subparsers to my parser to simulate subcommands functionality (for example code see: [Simple command line application in python - parse user input?](http://stackoverflow.com/questions/25332925/simple-command-line-application-in-python-parse-user-input/25368374#25368374)). Now I want to add a `quit` subparse... | The documentation for `subcommands` gives two examples of how to identify the subparser.
<https://docs.python.org/dev/library/argparse.html#sub-commands>
One is to give the `add_subparsers` a `dest`:
```
def do_quit(args):
# action
quit()
parser = ArgumentParser()
subparser = parser.add_subparsers(dest='cmd... |
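A self-contained sketch of the `dest` approach from the answer above (the `quit` subcommand name comes from the question; the explicit argv list stands in for real command-line input):

```python
import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='cmd')
# A subcommand that takes no arguments of its own.
subparsers.add_parser('quit')

args = parser.parse_args(['quit'])
# args.cmd records which subparser matched, so it can drive a dispatch table
# or a simple if/elif chain.
print(args.cmd)  # quit
```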
How to enable CORS in flask and heroku | 25,594,893 | 8 | 2014-08-31T18:15:59Z | 26,395,623 | 14 | 2014-10-16T03:11:48Z | [
"jquery",
"python",
"heroku",
"flask",
"cors"
] | I am trying to make a cross origin request using jquery but it keeps being reject with the message
> XMLHttpRequest cannot load http://... No 'Access-Control-Allow-Origin'
> header is present on the requested resource. Origin ... is therefore
> not allowed access.
I am using flask, heroku, and jquery
the client code... | Here is what worked for me when I deployed to Heroku.
<http://flask-cors.readthedocs.org/en/latest/>
$ pip install -U flask-cors
```
from flask import Flask
from flask.ext.cors import CORS, cross_origin
app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'
@app.route("/")
@cross_origin(... |
Why is OrderedDict named in camel case while defaultdict is lower case? | 25,597,121 | 3 | 2014-08-31T23:15:24Z | 25,597,247 | 8 | 2014-08-31T23:38:24Z | [
"python"
] | Looking at the [source code](http://hg.python.org/cpython/file/default/Lib/collections/), it seems the only "reason" is that `OrderedDict` is written in Python, while `defaultdict` is in C. But it seems this is changing as Python 3.5 should have a cOrderedDict (see [Python Bugs](http://bugs.python.org/issue16991)), whi... | Based on what I can find on the python-dev archives, this is just a case of the devs not following their own guidelines.
Guido actually suggested [renaming `defaultdict` to `DefaultDict`](https://mail.python.org/pipermail/python-dev/2009-March/086646.html) to fix this inconsistency during the discussion of the PEP tha... |
python exception <type 'exceptions.ImportError'> No module named gdb: | 25,597,445 | 5 | 2014-09-01T00:15:11Z | 25,601,632 | 7 | 2014-09-01T08:21:56Z | [
"python",
"gdb",
"cgdb"
] | I've just compiled `gdb 7.8` from source in my home directory on a server machine running linux. I had previously been using `gdb 7.6`, and aside from stability issues with `gdb` itself (the reason for the upgrade) everything worked fine.
Since the upgrade of `gdb`, when I run `cgdb 0.6.7` I immediately get the follow... | You should specify the value of "`--data-directory`". For example, if you load `gdb` from the build directory, the command should be:
```
./gdb -data-directory ./data-directory
```
Then `gdb` can know where to find `python` module.
You can refer this [discussion](https://sourceware.org/ml/gdb/2012-10/msg00102.html). |
Celerybeat schedule executing task multiple times? | 25,597,898 | 4 | 2014-09-01T01:44:00Z | 25,687,107 | 7 | 2014-09-05T13:32:13Z | [
"python",
"django",
"database",
"celery",
"celerybeat"
] | I have a task *calculate\_common\_locations* which runs once via `CELERYBEAT_SCHEDULE`.
The task simply calls a function in the database:
```
@app.task
def calculate_common_locations():
db.execute("SELECT * FROM calculate_centroids('b')")
```
This is the entry in `CELERYBEAT_SCHEDULE`:
```
CELERYBEAT_SCHEDULE = ... | This may make more sense if you consider what `crontab(hour=23, day_of_week='sun')` does:
```
>>> crontab(hour=23, day_of_week='sun')
<crontab: * 23 sun * * (m/h/d/dM/MY)>
```
So what this means is that the task will execute **every minute** at 11pm every sunday.
If you want it to execute only at the first minute yo... |
Upload to Amazon S3 using tinys3 | 25,602,415 | 4 | 2014-09-01T09:09:42Z | 25,605,147 | 11 | 2014-09-01T11:44:49Z | [
"python",
"amazon-s3"
] | I'm using Python and tinys3 to write files to S3, but it's not working. Here's my code:
```
import tinys3
conn = tinys3.Connection('xxxxxxx','xxxxxxxx',tls=True)
f = open('testing_s3.txt','rb')
print conn.upload('testing_data/testing_s3.txt',f,'testing-bucket')
print conn.get('testing_data/testing_s3.txt','testing-bu... | I finally figured this out. Here is the correct code:
```
import tinys3
conn = tinys3.Connection('xxxxxxx','xxxxxxxx',tls=True,endpoint='s3-us-west-1.amazonaws.com')
f = open('testing_s3.txt','rb')
print conn.upload('testing_data/testing_s3.txt',f,'testing-bucket')
print conn.get('testing_data/testing_s3.txt','testin... |
How to migrate back from initial migration in Django 1.7? | 25,606,879 | 40 | 2014-09-01T13:24:42Z | 25,607,968 | 65 | 2014-09-01T14:25:50Z | [
"python",
"django",
"django-1.7",
"django-migrations"
] | I created a new app with some models and now I noticed that some of the models are poorly thought out. As I haven't committed the code the sensible thing would be to migrate the database to last good state and redo the migration with better models. In this case the last good state is database where the new app doesn't ... | You can do the same with Django 1.7 also.
```
python manage.py migrate <app> zero
```
This clears `<app>` from migration history and drops all tables of `<app>`
See [django docs](https://docs.djangoproject.com/en/1.7/ref/django-admin/#django-admin-migrate) for more info. |
How to migrate back from initial migration in Django 1.7? | 25,606,879 | 40 | 2014-09-01T13:24:42Z | 31,682,398 | 10 | 2015-07-28T16:42:44Z | [
"python",
"django",
"django-1.7",
"django-migrations"
] | I created a new app with some models and now I noticed that some of the models are poorly thought out. As I haven't committed the code the sensible thing would be to migrate the database to last good state and redo the migration with better models. In this case the last good state is database where the new app doesn't ... | you can also use the version number:
```
python manage.py migrate <app> 0002
```
Source: <https://docs.djangoproject.com/en/1.7/ref/django-admin/#django-admin-migrate> |
Pip Install not installing into correct directory? | 25,607,837 | 11 | 2014-09-01T14:19:26Z | 25,907,213 | 10 | 2014-09-18T07:57:46Z | [
"python",
"bash",
"installation",
"pip"
] | I can't seem to use sudo pip install correctly so that it installs into the following directory:
```
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
```
so that I can then import the module using python
I've run
```
sudo pip install scikit-learn --upgrade
```
Result
```
Requirement ... | From the comments to the original question, it seems that you have multiple versions of Python installed, and that pip just goes to the wrong version.
First, to know which version of python you're using, just type `which python`. You should either see:
```
which python
/Library/Frameworks/Python.framework/Versions/2.... |
Syntax error installing gunicorn | 25,611,140 | 33 | 2014-09-01T18:17:49Z | 25,611,194 | 67 | 2014-09-01T18:22:42Z | [
"python",
"heroku",
"gunicorn"
] | I am following this Heroku tutorial: <https://devcenter.heroku.com/articles/getting-started-with-python-o> and when I am trying to install gunicorn in a virtualenv I am getting this error:
```
(venv)jabuntu14@ubuntu:~/Desktop/helloflask$ pip install gunicorn
Downloading/unpacking gunicorn
Downloading gunicorn-19.1.1-p... | The error can be ignored, your `gunicorn` package installed successfully.
The error is thrown by a bit of code that'd only work on Python 3.3 or newer, but isn't used by older Python versions that Gunicorn supports.
See <https://github.com/benoitc/gunicorn/issues/788>:
> The error is a syntax error happening during ... |
How to read in one character at a time from a file in python? | 25,611,796 | 9 | 2014-09-01T19:15:35Z | 25,611,913 | 46 | 2014-09-01T19:25:58Z | [
"python",
"file",
"floating-point"
] | I want to read in a list of numbers from a file as chars one char at a time to check what that char is, whether it is a digit, a period, a + or -, an e or E, or some other char...and then perform whatever operation I want based on that. How can I do this using the existing code I already have? This is an example that I... | Here is a technique to make a one-character-at-a-time file iterator:
```
from functools import partial
with open("file.data") as f:
for char in iter(partial(f.read, 1), ''):
# now do something interesting with the characters
...
```
* The [with-statement](http://preshing.com/20110920/the-python-w... |
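The two-argument `iter`-with-sentinel pattern in the answer above can be sanity-checked without a real file; in this sketch `io.StringIO` stands in for the opened file:

```python
import io
from functools import partial

# io.StringIO stands in for an opened text file
f = io.StringIO("3.14e2")

# iter(callable, sentinel) keeps calling f.read(1) until it returns ''
chars = [char for char in iter(partial(f.read, 1), '')]
print(chars)  # ['3', '.', '1', '4', 'e', '2']

# classify each character, as the question wants to do
digits = [c for c in chars if c.isdigit()]
print(digits)  # ['3', '1', '4', '2']
```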
Differences between BaseHttpServer and wsgiref.simple_server | 25,612,290 | 5 | 2014-09-01T20:01:23Z | 33,244,760 | 7 | 2015-10-20T19:00:56Z | [
"python",
"basehttpserver",
"wsgiref"
] | I'm looking for a module that provides me a basic http server capabilities for local access. It seems like Python has two methods to implement simple http servers in the standard library: [wsgiref.simple\_server](https://docs.python.org/2/library/wsgiref.html#module-wsgiref.simple_server) and [BaseHttpServer](https://d... | **Short answer:** `wsgiref.simple_server` is a WSGI adapter over `BaseHTTPServer`.
**Longer answer:**
`BaseHTTPServer` is the module that actually implements the HTTP server part. It can accept requests and return responses, but it has to know how to handle those requests. When you are using pure `BaseHTTPServer`, yo... |
Python: write.csv adding extra carriage return | 25,613,698 | 2 | 2014-09-01T22:47:38Z | 25,615,833 | 9 | 2014-09-02T04:23:38Z | [
"python",
"python-2.7",
"csv",
"carriage-return"
] | I am writing an Excel to CSV converter using Python.
I'm running in Linux and my Python version is:
Python 2.7.1 (r271:86832, Dec 4 2012, 17:16:32)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2
In the code as below, when I comment the 5 "csvFile.write" lines, the csv file is generated all fine. However, with the co... | Default line terminator for `csv.writer` is `'\r\n'`. Explicitly specify [`lineterminator`](https://docs.python.org/2/library/csv.html#csv.Dialect.lineterminator) argument if you want only `'\n'`:
```
wr = csv.writer(csvFile, delimiter=';', lineterminator='\n')
``` |
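The effect of `lineterminator` can be checked against an in-memory buffer; here `io.StringIO` stands in for the CSV output file:

```python
import csv
import io

buf = io.StringIO()
# without lineterminator='\n', csv.writer would emit '\r\n' row endings
wr = csv.writer(buf, delimiter=';', lineterminator='\n')
wr.writerow(['a', 'b'])
wr.writerow(['c', 'd'])
print(repr(buf.getvalue()))  # 'a;b\nc;d\n' -- no carriage returns
```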
Get contents of div by id with BeautifulSoup | 25,614,702 | 5 | 2014-09-02T01:37:53Z | 25,614,774 | 11 | 2014-09-02T01:49:28Z | [
"python",
"html",
"python-2.7",
"beautifulsoup",
"html-parsing"
] | I am using python2.7.6, urllib2, and BeautifulSoup
to extract html from a website and store in a variable.
How can I show just the html contents of a `div` with an id by using beautifulsoup?
```
<div id='theDiv'>
<p>div content</p>
<p>div stuff</p>
<p>div thing</p>
```
would be
```
<p>div content</p>
<p>div stuff<... | Join the elements of div tag's [`.contents`](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#contents-and-children):
```
from bs4 import BeautifulSoup
data = """
<div id='theDiv'>
<p>div content</p>
<p>div stuff</p>
<p>div thing</p>
</div>
"""
soup = BeautifulSoup(data)
div = soup.find('div', id='t... |
Python Argparse conditionally required arguments | 25,626,109 | 15 | 2014-09-02T14:41:21Z | 25,626,676 | 8 | 2014-09-02T15:08:33Z | [
"python",
"argparse"
] | I have done as much research as possible but I haven't found the best way to make certain cmdline arguments necessary only under certain conditions, in this case only if other arguments have been given. Here's what I want to do at a very basic level:
```
p = argparse.ArgumentParser(description='...')
p.add_argument('-... | You can implement a check by providing a custom action for `--argument`, which will take an additional keyword argument to specify which other action(s) should become required if `--argument` is used.
```
import argparse
class CondAction(argparse.Action):
def __init__(self, option_strings, dest, nargs=None, **kwa... |
Getting wider output in PyCharm's built-in console | 25,628,496 | 6 | 2014-09-02T16:53:04Z | 25,630,681 | 9 | 2014-09-02T19:15:26Z | [
"python",
"pandas",
"ipython",
"pycharm"
] | I'm relatively new to using the PyCharm IDE, and have been unable to find a way to better shape the output when in a built-in console session. I'm typically working with pretty wide dataframes, that would fit easily across my monitor, but the display is cutting and wrapping them much sooner than needed.
Does anyone kn... | It appears I was mistaken in thinking that the issue was one in PyCharm (that could be solved, for example, in a setting or preference.) It actually has to do with the console session itself. The console attempts to auto-detect the width of the display area, but when that fails it defaults to 80 characters. This behavi... |
How to replace all occurrences of specific words in Python | 25,631,695 | 5 | 2014-09-02T20:19:44Z | 25,631,768 | 10 | 2014-09-02T20:24:11Z | [
"python",
"regex",
"python-2.7"
] | How can I achieve the following behavior in Python in the more elegant way?
Suppose that I have the following sentence:
> bean likes to sell his beans
and I want to replace all occurrences of specific words with other words (for example, 'bean' -- 'robert', 'beans' -- 'cars'). I can't just use replaceall because in ... | If you use regex, you can specify word boundaries with `\b`:
```
import re
sentence = 'bean likes to sell his beans'
sentence = re.sub(r'\bbean\b', 'robert', sentence)
# 'robert likes to sell his beans'
```
Here 'beans' is not changed (to 'roberts') because the 's' on the end is not a boundary between words: `\b` m... |
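Extending the same idea to several words at once, one hedged sketch builds a single alternation from a replacement map (sorting longer words first so `beans` is tried before `bean`):

```python
import re

replacements = {'bean': 'robert', 'beans': 'cars'}

# sort longer keys first so the alternation prefers 'beans' over 'bean'
pattern = r'\b(?:' + '|'.join(
    re.escape(w) for w in sorted(replacements, key=len, reverse=True)
) + r')\b'

sentence = 'bean likes to sell his beans'
result = re.sub(pattern, lambda m: replacements[m.group()], sentence)
print(result)  # robert likes to sell his cars
```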
Python ignores default values of arguments supplied to tuple in inherited class | 25,634,469 | 5 | 2014-09-03T00:35:15Z | 25,634,513 | 10 | 2014-09-03T00:43:08Z | [
"python",
"inheritance",
"constructor",
"tuples"
] | Here is some code to demonstrate what I'm talking about.
```
class Foo(tuple):
def __init__(self, initialValue=(0,0)):
super(tuple, self).__init__(initialValue)
print Foo()
print Foo((0, 0))
```
I would expect both expressions to yield the exact same result, but the output of this program is:
```
()
(0, 0)... | That's because the `tuple` type does not care about the arguments to `__init__`, but only about those to `__new__`. This will make it work:
```
class Bar(tuple):
@staticmethod
def __new__(cls, initialValue=(0,0)):
return tuple.__new__(cls, initialValue)
```
The basic reason for this is that, since tup... |
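Putting the answer's point into a runnable sketch: the default has to be handled in `__new__`, because `tuple` builds its contents there and its `__init__` ignores the arguments:

```python
class Bar(tuple):
    # tuple fills in its items in __new__, so the default belongs here
    def __new__(cls, initial_value=(0, 0)):
        return tuple.__new__(cls, initial_value)

print(Bar())        # (0, 0)
print(Bar((1, 2)))  # (1, 2)
```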
Get all combinations of elements from two lists? | 25,634,489 | 2 | 2014-09-03T00:39:08Z | 25,634,533 | 8 | 2014-09-03T00:45:22Z | [
"python",
"pandas"
] | If I have two lists
```
l1 = [ 'A', 'B' ]
l2 = [ 1, 2 ]
```
what is the most elegant way to get a pandas data frame which looks like:
```
+-----+-----+-----+
| | l1 | l2 |
+-----+-----+-----+
| 0 | A | 1 |
+-----+-----+-----+
| 1 | A | 2 |
+-----+-----+-----+
| 2 | B | 1 |
+-----+-----+-----... | use [`product`](https://docs.python.org/2/library/itertools.html#itertools.product) from `itertools`:
```
>>> from itertools import product
>>> pd.DataFrame(list(product(l1, l2)), columns=['l1', 'l2'])
l1 l2
0 A 1
1 A 2
2 B 1
3 B 2
``` |
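Independent of the pandas wrapper, the cartesian product itself can be checked directly:

```python
from itertools import product

l1 = ['A', 'B']
l2 = [1, 2]

# product yields every (l1, l2) pairing in row-major order
pairs = list(product(l1, l2))
print(pairs)  # [('A', 1), ('A', 2), ('B', 1), ('B', 2)]
```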
In python, is 'foo == (8 or 9)' or 'foo == 8 or foo == 9' more correct? | 25,635,116 | 3 | 2014-09-03T02:05:09Z | 25,635,138 | 13 | 2014-09-03T02:07:19Z | [
"python",
"python-2.7",
"if-statement",
"boolean"
] | When programming in python, when you are checking if a statement is true, would it be more correct to use `foo == (8 or 9)` or `foo == 8 or foo == 9`? Is it just a matter of what the programmer chooses to do? I am wondering about python 2.7, in case it is different in python 3. | You probably want `foo == 8 or foo == 9`, since:
```
In [411]: foo = 9
In [412]: foo == (8 or 9)
Out[412]: False
In [413]: foo == 8 or foo == 9
Out[413]: True
```
After all, `(8 or 9)` is equal to `8`:
```
In [414]: (8 or 9)
Out[414]: 8
```
Alternatively, you could also write
```
foo in (8, 9)
```
This holds fo... |
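The pitfall and both correct spellings in one runnable sketch:

```python
foo = 9

# (8 or 9) evaluates to 8: 8 is truthy, so `or` short-circuits
print(8 or 9)            # 8
print(foo == (8 or 9))   # False -- compares foo against 8 only

# the two correct spellings
print(foo == 8 or foo == 9)  # True
print(foo in (8, 9))         # True
```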
Coloring JSON output in python | 25,638,905 | 7 | 2014-09-03T07:43:54Z | 32,166,163 | 11 | 2015-08-23T11:30:26Z | [
"python",
"json"
] | In Python, if I have a JSON object `obj`, then I can
```
print json.dumps(obj, sort_keys=True, indent=4)
```
in order to get a pretty printout of the object. Is it possible to prettify the output even further: *add some colors* in particular? Something like the result of [1]
```
cat foo.json | jq '.'
```
[1] `jq` t... | You can use [Pygments](http://pygments.org/) to color your JSON output. Based on what you have:
```
formatted_json = json.dumps(obj, sort_keys=True, indent=4)
from pygments import highlight, lexers, formatters
colorful_json = highlight(unicode(formatted_json, 'UTF-8'), lexers.JsonLexer(), formatters.TerminalFormatter... |
How to run recurring task in the Python Flask framework? | 25,639,221 | 13 | 2014-09-03T08:01:48Z | 25,639,981 | 17 | 2014-09-03T08:45:39Z | [
"python",
"cron",
"flask",
"apscheduler"
] | I'm building a website which provides some information to the visitors. This information is aggregated in the background by polling a couple external APIs every 5 seconds. The way I have it working now is that I use [APScheduler](http://apscheduler.readthedocs.org/en/latest/) jobs. I initially preferred APScheduler bec... | (1)
You can use the `app.app_context()` context manager to set the application context. I imagine usage would go something like this:
```
from apscheduler.scheduler import Scheduler
def checkSecondApi():
with app.app_context():
# Do whatever you were doing to check the second API
@app.before_first_reque... |
Python adding lists of numbers with other lists of numbers | 25,640,628 | 3 | 2014-09-03T09:20:23Z | 25,640,671 | 7 | 2014-09-03T09:22:04Z | [
"python",
"list",
"sum",
"add"
] | In Python is there a simple way of adding the individual numbers of lists to the individual numbers of other lists? In my code I need to add about 10 long lists in a similar fashion to this:
```
listOne = [1,5,3,2,7]
listTwo = [6,2,4,8,5]
listThree = [3,2,9,1,1]
```
Therefore I want the result to be:
```
listSum = [... | Using [zip](https://docs.python.org/2/library/functions.html#zip), [sum](https://docs.python.org/2/library/functions.html#sum) and [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
>>> lists = (listOne, listTwo, listThree)
>>> [sum(values) for values in zip(*lists)]
... |
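Applied to the question's three lists, the pattern gives:

```python
list_one = [1, 5, 3, 2, 7]
list_two = [6, 2, 4, 8, 5]
list_three = [3, 2, 9, 1, 1]

# zip(...) pairs up the i-th elements; sum adds each group
list_sum = [sum(values) for values in zip(list_one, list_two, list_three)]
print(list_sum)  # [10, 9, 16, 11, 13]
```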
Do I need to sudo when running pip/easy_install? | 25,644,791 | 5 | 2014-09-03T12:48:58Z | 25,644,983 | 8 | 2014-09-03T12:58:28Z | [
"python",
"python-2.7",
"sudo"
] | All of the python tuts I've been reading lately tell me to do things like this:
```
pip install tornado
pip install requests
```
And every time I do this I get hit with a `permission denied` warning. But everything usually works when I `sudo`.
Is sudo required? And if so, why do so many tutorial instructions fail to... | `pip` requires permission to write the libraries to whichever directory it is using. This problem occurs when you do not have permission as a user and so `pip` fails. Using `sudo` gets around this problem but is not ideal.
**You should not ever run code using sudo as you don't know what is inside the library, if it co... |
Changing python sequence size during iteration | 25,645,443 | 3 | 2014-09-03T13:18:31Z | 25,732,259 | 7 | 2014-09-08T20:03:17Z | [
"python",
"iteration",
"sequence"
] | Implementation of `bytes.join` method, implemented [here](https://searchcode.com/codesearch/view/58086058/#l-8) includes code that protects from size changes during iteration:
```
if (seqlen != PySequence_Fast_GET_SIZE(seq)) {
PyErr_SetString(PyExc_RuntimeError,
"sequence changed si... | If you pass in a list object to `bytes.join()`, you could potentially add elements to the the list in another thread while the `bytes.join()` call is iterating.
The `bytes.join()` method has to make two passes over the sequence; once to calculate the total length of the `bytes` objects contained, a second time to then... |
Python: Convert timedelta to int in a dataframe | 25,646,200 | 17 | 2014-09-03T13:53:18Z | 25,646,414 | 25 | 2014-09-03T14:03:46Z | [
"python",
"pandas",
"timedelta"
] | I would like to create a column in a pandas data frame that is an integer representation of the number of days in a timedelta column. Is it possible to use 'datetime.days' or do I need to do something more manual?
**timedelta column**
> 7 days, 23:29:00
**day integer column**
> 7 | You could do this, where `td` is your series of timedeltas. The division converts the nanosecond deltas into day deltas, and the conversion to int drops to whole days.
```
import numpy as np
(td / np.timedelta64(1, 'D')).astype(int)
``` |
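For a single value outside pandas, the standard library's `timedelta` exposes the same whole-days truncation directly (a stdlib sketch of the idea, not the pandas call above):

```python
from datetime import timedelta

td = timedelta(days=7, hours=23, minutes=29)
print(td)       # 7 days, 23:29:00
print(td.days)  # 7 -- whole days, fractional part dropped
```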
Upgrading to Django 1.7. Getting error: Cannot serialize: <storages.backends.s3boto.S3BotoStorage object | 25,648,335 | 12 | 2014-09-03T15:32:57Z | 26,546,257 | 21 | 2014-10-24T10:35:03Z | [
"python",
"django",
"python-2.7",
"boto",
"django-storage"
] | I am trying to upgrade a django app from django 1.6.6 to 1.7 and am using python 2.7.8. When I run `python manage.py makemigrations`, I get the following error:
```
ValueError: Cannot serialize: <storages.backends.s3boto.S3BotoStorage object at 0x11116eed0>
There are some values Django cannot serialize into migration ... | Just make a deconstructible subclass and use it instead.
```
from django.utils.deconstruct import deconstructible
@deconstructible
class MyS3BotoStorage(S3BotoStorage):
pass
``` |
How to move a model between two Django apps (Django 1.7) | 25,648,393 | 42 | 2014-09-03T15:36:36Z | 26,472,482 | 136 | 2014-10-20T18:30:12Z | [
"python",
"mysql",
"database",
"django",
"schema-migration"
] | So about a year ago I started a project and like all new developers I didn't really focus to much on the structure, however now I am further along with Django it has started to appear that my project layout mainly my models are horrible in structure.
I have models mainly held in a single app and really most of these m... | This can be done fairly easily using `migrations.SeparateDatabaseAndState`. Basically, we use a database operation to rename the table concurrently with two state operations to remove the model from one app's history and create it in another's.
## Remove from old app
```
python manage.py makemigrations old_app --empt... |
Exponential Decay on Python Pandas DataFrame | 25,649,412 | 5 | 2014-09-03T16:32:12Z | 25,650,745 | 7 | 2014-09-03T17:57:51Z | [
"python",
"numpy",
"pandas"
] | I'm trying to efficiently compute a running sum, with exponential decay, of each column of a Pandas DataFrame. The DataFrame contains a daily score for each country in the world. The DataFrame looks like this:
```
AF UK US
2014-07-01 0.998042 0.595720 0.524698
2014-07-02 0.380649 0.8... | You can use the fact that when exponentials multiply their exponents add:
eg:
```
N(2) = N(2) + N(1) * exp(-0.05)
N(3) = N(3) + (N(2) + N(1) * exp(-0.05))*exp(-0.05)
N(3) = N(3) + N(2)*exp(-0.05) + N(1)*exp(-0.1)
N(4) = ...and so on
```
This can then be vectorized using numpy:
```
dataset = pd.DataFrame(np.random.r... |
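The recurrence itself, before any numpy vectorization, can be sketched and checked in plain Python; the function name and sample scores here are illustrative:

```python
import math

def decayed_running_sum(values, rate=0.05):
    # N(t) = x(t) + N(t-1) * exp(-rate), with N(0) = x(0)
    out, acc = [], 0.0
    for v in values:
        acc = v + acc * math.exp(-rate)
        out.append(acc)
    return out

scores = [0.998042, 0.380649, 0.524698]
n = decayed_running_sum(scores)
# n[1] should equal scores[1] + scores[0] * exp(-0.05)
print(abs(n[1] - (scores[1] + scores[0] * math.exp(-0.05))) < 1e-12)  # True
```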
Where is Python's shutdown procedure documented? | 25,649,676 | 43 | 2014-09-03T16:49:00Z | 25,649,713 | 55 | 2014-09-03T16:51:19Z | [
"python",
"multithreading",
"python-2.7",
"python-internals"
] | CPython has a strange behaviour where it sets modules to None during shutdown. This screws up error logging during shutdown of some multithreading code I've written.
I can't find any documentation of this behaviour. It's mentioned in passing in [PEP 432](http://www.python.org/dev/peps/pep-0432/):
> [...] significantl... | The behaviour is *not* well documented, and is present in all versions of Python from about 1.5-ish *until [Python 3.4](https://docs.python.org/3/whatsnew/3.4.html#whatsnew-pep-442)*:
> As part of this change, module globals are no longer forcibly set to `None` during interpreter shutdown in most cases, instead relyin... |
Python regex Find All Numbers Between 6 or 7 Digits | 25,651,028 | 2 | 2014-09-03T18:16:12Z | 25,651,064 | 8 | 2014-09-03T18:18:17Z | [
"python",
"regex"
] | I am using this regex using the re module in python and getting this result:
```
In [156]: re.findall(r'.*\D(\d{6,7})\D.*', ' f123456 f1234567 ')
Out[156]: ['1234567']
```
...but I'm wanting this result:
```
Out[156]: ['123456', '1234567']
```
I can't figure out how to form this regex. Can you please help? | Simplify regex
```
In [5]: re.findall(r'\d{6,7}', ' f123456 f1234567 ')
Out[5]: ['123456', '1234567']
``` |
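One caveat with the simplified pattern: inside a longer digit run (say 8 digits) it would still pull out a 6-7 digit chunk. Lookarounds restore the original's boundary check while keeping `findall` simple (a hedged variant, not the answer above):

```python
import re

# (?<!\d) / (?!\d): no digit immediately before or after the match
pattern = r'(?<!\d)\d{6,7}(?!\d)'

text = ' f123456 f1234567 f12345678 '
print(re.findall(pattern, text))  # ['123456', '1234567'] -- 8-digit run excluded
```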
from django.db import models, migrations ImportError: cannot import name migrations | 25,651,716 | 15 | 2014-09-03T18:56:31Z | 25,652,230 | 20 | 2014-09-03T19:26:45Z | [
"python",
"django",
"django-south",
"migrate"
] | So I've started to experience some issues with south on my Django web server.
Migrate command is failing with this output everytime:
```
from django.db import models, migrations
ImportError: cannot import name migrations
```
(Above this the error displays the rout to the file that failed to be migrated)
My Django v... | **Migrations were introduced in Django 1.7; you are using 1.5.**
Here is a [link to the docs explaining this](https://docs.djangoproject.com/en/dev/topics/migrations/#a-brief-history). If you're using an older version of Django, [South](http://south.readthedocs.org/en/latest/) is the most popular option for data migra... |
OSError: [WinError 193] %1 is not a valid Win32 application | 25,651,990 | 4 | 2014-09-03T19:12:23Z | 25,652,030 | 8 | 2014-09-03T19:14:23Z | [
"python",
"subprocess",
"python-3.4"
] | I am trying to call a python file "hello.py" from within the python interpreter with subprocess. But I am unable to resolve this error. [Python 3.4.1].
```
import subprocess
subprocess.call(['hello.py', 'htmlfilename.htm'])
Traceback (most recent call last):
File "<pyshell#42>", line 1, in <module>
subproces... | The error is pretty clear. The file `hello.py` is not an executable file. You need to specify the executable:
```
subprocess.call(['python.exe', 'hello.py', 'htmlfilename.htm'])
```
You'll need `python.exe` to be visible on the search path, or you could pass the full path to the executable file that is running the ca... |
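On any platform, `sys.executable` avoids hard-coding `python.exe` and sidesteps search-path issues. A sketch on Python 3.7+ (a `-c` one-liner stands in for the question's `hello.py`):

```python
import subprocess
import sys

# sys.executable is the full path of the currently running interpreter
result = subprocess.run(
    [sys.executable, '-c', 'print("hello")'],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # hello
print(result.returncode)      # 0
```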
Converting timezones from pandas Timestamps | 25,653,529 | 7 | 2014-09-03T20:57:58Z | 25,654,328 | 11 | 2014-09-03T22:00:38Z | [
"python",
"pandas",
"timezone"
] | I have the following in a dataframe:
```
> df['timestamps'].loc[0]
Timestamp('2014-09-02 20:24:00')
```
I know the **timezone** (I think it is **GMT**) it uses and would like to convert the entire column to **EST**. How can I do that in Pandas?
For reference, I found these other threads:
* [Changing a unix timestam... | Just use `tz_convert` method.
Lets say you have a Timestamp object:
```
stamp = Timestamp('1/1/2014 16:20', tz='America/Sao_Paulo')
new_stamp = stamp.tz_convert('US/Eastern')
```
If you are interested in converting date ranges:
```
range = date_range('1/1/2014', '1/1/2015', freq='S', tz='America/Sao_Paulo'... |
Initialising a large dict with unknown keys? Is there a better way than this? | 25,656,238 | 2 | 2014-09-04T02:02:01Z | 25,656,298 | 7 | 2014-09-04T02:08:48Z | [
"python",
"performance",
"dictionary"
] | So I have a list of around 75000 tuples that I want to push into a dict. It seems like after around 20,000 entries, the whole program slows down and I assume this is because the dict is being dynamically resized as it is filled.
The key value used for the dict is in a different position in the tuple depending on the d... | > So I have a list of around 75000 tuples that I want to push into a dict.
Just call `dict` on the list. Like this:
```
>>> list_o_tuples = [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
>>> d = dict(list_o_tuples)
>>> d
{1: 'a', 2: 'b', 3: 'c', 4: 'd'}
```
---
> The key value used for the dict is in a different position... |
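When the key sits at a known position inside each tuple, as the question goes on to describe, a comprehension builds the dict in one pass; the sample rows and the key position here are illustrative:

```python
rows = [('a', 10, 'x'), ('b', 20, 'y'), ('c', 30, 'z')]

# suppose the key is the element at index 1 of each tuple
d = {row[1]: row for row in rows}
print(d[20])  # ('b', 20, 'y')
```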
PySide / Qt Import Error | 25,656,307 | 14 | 2014-09-04T02:10:21Z | 25,671,015 | 10 | 2014-09-04T16:53:33Z | [
"python",
"osx",
"qt",
"pyside"
] | I'm trying to import PySide / Qt into Python like so and get the follow error:
```
from PySide import QtCore
ImportError: dlopen(/usr/local/lib/python2.7/site-packages/PySide/QtCore.so, 2): Library not loaded: libpyside-python2.7.1.2.dylib
Referenced from: /usr/local/lib/python2.7/site-packages/PySide/QtCore.so
R... | Well, the installer is somewhat broken, because the output from oTool should report a full path to the library (the path should be changed by the Pyside installer using install\_name\_tool).
Instead of going mad understanding what part of the installer is broken, I suggest you define:
```
LD_LIBRARY_PATH=/your/path/t... |
PySide / Qt Import Error | 25,656,307 | 14 | 2014-09-04T02:10:21Z | 26,903,874 | 11 | 2014-11-13T07:55:27Z | [
"python",
"osx",
"qt",
"pyside"
] | I'm trying to import PySide / Qt into Python like so and get the follow error:
```
from PySide import QtCore
ImportError: dlopen(/usr/local/lib/python2.7/site-packages/PySide/QtCore.so, 2): Library not loaded: libpyside-python2.7.1.2.dylib
Referenced from: /usr/local/lib/python2.7/site-packages/PySide/QtCore.so
R... | if you watch this , you're question will be fixed:
<https://github.com/PySide/pyside-setup/blob/master/pyside_postinstall.py>
`pyside_postinstall.py -install` |
make os.listdir() list complete paths | 25,657,705 | 2 | 2014-09-04T05:11:52Z | 25,657,758 | 7 | 2014-09-04T05:17:11Z | [
"python",
"os.path",
"listdir"
] | Consider the following piece of code:
```
files = sorted(os.listdir('dumps'), key=os.path.getctime)
```
The objective is to sort the listed files based on the creation time. However since the the os.listdir gives only the filename and not the absolute path the key ie, the os.path.getctime throws an exception saying
... | You can use [glob](https://docs.python.org/3.1/library/glob.html).
```
import os
from glob import glob
glob_pattern = os.path.join('dumps', '*')
files = sorted(glob(glob_pattern), key=os.path.getctime)
``` |
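A self-contained check of the pattern, using `getmtime` instead of `getctime` only because modification times can be set deterministically with `os.utime`:

```python
import os
import tempfile
from glob import glob

with tempfile.TemporaryDirectory() as dumps:
    # create three files and back-date them in a known order
    for name, stamp in [('b.txt', 200), ('a.txt', 100), ('c.txt', 300)]:
        path = os.path.join(dumps, name)
        open(path, 'w').close()
        os.utime(path, (stamp, stamp))

    # glob returns full paths, so the key function works
    files = sorted(glob(os.path.join(dumps, '*')), key=os.path.getmtime)
    ordered = [os.path.basename(f) for f in files]
    print(ordered)  # ['a.txt', 'b.txt', 'c.txt']
```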
Get data from pandas into a SQL server with PYODBC | 25,661,754 | 5 | 2014-09-04T09:23:10Z | 25,662,997 | 9 | 2014-09-04T10:22:13Z | [
"python",
"sql",
"pandas",
"pyodbc"
] | I am trying to understand how python could pull data from an FTP server into pandas then move this into SQL server. My code here is very rudimentary to say the least and I am looking for any advice or help at all. I have tried to load the data from the FTP server first which works fine.... If I then remove this code an... | For the 'write to sql server' part, you can use the convenient `to_sql` method of pandas (so no need to iterate over the rows and do the insert manually). See the docs on interacting with SQL databases with pandas: <http://pandas.pydata.org/pandas-docs/stable/io.html#io-sql>
You will need at least pandas 0.14 to have ... |
Dynamically filter ListView CBV in Django 1.7 | 25,662,374 | 8 | 2014-09-04T09:51:57Z | 25,663,055 | 10 | 2014-09-04T10:25:14Z | [
"python",
"django",
"listview",
"get",
"django-views"
] | I've read the [official documentation on dynamically filtering](https://docs.djangoproject.com/en/1.7/topics/class-based-views/generic-display/#dynamic-filtering) ListView, but am still confused about how to actually use it.
I currently have a simple model, let's call it `Scholarship`:
```
class Scholarship(models.Mo... | First of all you need to change your urls.py so that it'll pass the experience as a parameter. Something like this:
```
urlpatterns = patterns('',
url(r'^(?P<exp>[ASG])$', ScholarshipDirectoryView.as_view(), name='scholarship_directory'),
)
```
(the above will return 404 if /A or /S or /G is not passed)
Now, in ... |
How to restrict object creation | 25,664,060 | 4 | 2014-09-04T11:13:42Z | 25,664,148 | 7 | 2014-09-04T11:18:11Z | [
"python",
"python-2.7"
] | Consider the following example
```
class Key:
def __init__(self, s):
self.s = s
d = {}
for x in range(1, 10000):
t = Key(x)
d[t] = x
```
This will create 10000 keys. Is it possible to control the object creation of class key, for example we cannot create more than 5 objects of key class. The loop sh... | You can control how, or how many objects are created by giving your class a [`__new__` method](https://docs.python.org/2/reference/datamodel.html#object.__new__):
```
class Key(object):
_count = 0
def __new__(cls, s):
if cls._count == 5:
raise TypeError('Too many keys created')
cl... |
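A completed sketch along the lines of that answer, with the question's loop stopping once the limit is hit (the limit of 5 and the handling via `break` are illustrative choices):

```python
class Key(object):
    _count = 0
    _limit = 5

    def __new__(cls, s):
        # refuse to construct more than _limit instances
        if cls._count >= cls._limit:
            raise TypeError('Too many keys created')
        cls._count += 1
        return super(Key, cls).__new__(cls)

    def __init__(self, s):
        self.s = s

d = {}
for x in range(1, 10000):
    try:
        d[Key(x)] = x
    except TypeError:
        break

print(len(d))  # 5
```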
Issues in initial setup for django project | 25,664,654 | 2 | 2014-09-04T11:45:50Z | 28,357,654 | 9 | 2015-02-06T02:35:32Z | [
"python",
"django"
] | I am learning Django from the official documentation and while going through the tutorial at <https://docs.djangoproject.com/en/1.7/intro/tutorial01/>, I am stuck at creating a project part.
When I run `django-admin.py startproject mysite` I am getting following error
```
C:\Python34>django-admin.py startproject mysi... | You can just run `django-admin startproject mysite`ï¼Note: not `django-admin.py`ï¼, because if you install django by pip, a executable program named 'django-admin.exe' will be added to *'C:\Python34\Scripts*', which is already in your environment variable 'PATH' normally(if not, add it to PATH). |
Flask sqlalchemy many-to-many insert data | 25,668,092 | 19 | 2014-09-04T14:28:12Z | 25,669,256 | 33 | 2014-09-04T15:21:03Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] | **Greetings**,
I am trying to make a many to many relation here in **Flask-SQLAlchemy**, but it seems that I don't know how to fill the *"many to many identifier database"*. Could you please help me understand what I am doing wrong and how it is supposed to look?
```
class User(db.Model):
__tablename__ = 'users'
... | You don't need to add anything directly to your association table, SQLAlchemy will do that. This is more or less from [SQLAlchemy documentations](http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#many-to-many):
```
association_table = db.Table('association', db.Model.metadata,
db.Column('left_id', db.In... |
why does python new york time zone display 4:56 instead 4:00? | 25,668,415 | 3 | 2014-09-04T14:43:23Z | 25,668,647 | 8 | 2014-09-04T14:53:50Z | [
"python",
"django",
"datetime",
"time",
"timezone"
] | I am creating a date that is 10:30 today in New York:
```
ny_tz = timezone('America/New_York')
ny_time = datetime(2014, 9, 4, 10, 30, 2, 294757, tzinfo=ny_tz)
```
This prints:
> 2014-09-04 10:30:02.294757-04:56
I am trying to compare this to another new york time which has the time zone offset 4:00 and so t... | You should instead do it like this:
```
ny_tz = timezone('America/New_York')
ny_time = ny_tz.localize(datetime(2014, 9, 4, 10, 30, 2, 294757))
```
This gives you the correct result:
```
>>> print ny_tz.localize(datetime(2014, 9, 4, 10, 30, 2, 294757))
2014-09-04 10:30:02.294757-04:00
```
Relevant `pytz` documentati... |
Convert percent string to float in pandas read_csv | 25,669,588 | 4 | 2014-09-04T15:38:20Z | 25,669,658 | 8 | 2014-09-04T15:42:16Z | [
"python",
"pandas"
] | Is there a way to convert values like '34%' directly to int or float when using read\_csv in pandas? I would like that it is directly read as 0.34.
Using this in read\_csv did not work:
```
read_csv(..., dtype={'col':np.float})
```
After loading the csv as 'df' this also did not work with the error "invalid literal ... | You can define a custom function to convert your percents to floats
```
In [149]:
# dummy data
temp1 = """index col
113 34%
122 50%
123 32%
301 12%"""
# custom function taken from http://stackoverflow.com/questions/12432663/what-is-a-clean-way-to-convert-a-string-percent-to-a-float
def p2f(x):
return float(x.stri... |
How do you run Python code using Emacs? | 25,669,809 | 5 | 2014-09-04T15:49:01Z | 25,687,205 | 8 | 2014-09-05T13:36:55Z | [
"python",
"emacs"
] | I'm trying to run Python code for testing and debugging using Emacs. How should I debug and run code in \*.py files? I tried using the `M-x compile` command. Using `M-x compile`, I get a compilation buffer that crashes (it says Python is compiling, but then nothing happens). |
In a python buffer:
* C-c C-z : open a python shell
* C-c C-c : run the content of the buffer in the opened python shell
* C-c C-r : run the selected region in the python shell
default python shell is "python", i... |
Why does a copy get created when assigned with None? | 25,672,036 | 6 | 2014-09-04T17:54:20Z | 25,672,920 | 7 | 2014-09-04T18:44:56Z | [
"python",
"pandas",
"dataframe"
] | ```
In[216]: foo = pd.DataFrame({'a':[1,2,3], 'b':[3,4,5]})
In[217]: bar = foo.ix[:1]
In[218]: bar
Out[218]:
a b
0 1 3
1 2 4
```
A view is created as expected.
```
In[219]: bar['a'] = 100
In[220]: bar
Out[220]:
a b
0 100 3
1 100 4
In[221]: foo
Out[221]:
a b
0 100 3
1 100 4
2 3 5
```... | When you assign `bar['a'] = None`, you're forcing the column to change its dtype from, e.g., `I4` to `object`.
Doing so forces it to allocate a new array of `object` for the column, and then of course it writes to that new array instead of to the old array that's shared with the original `DataFrame`. |
ubuntu 14.04, pip cannot upgrade matplotlib | 25,674,612 | 37 | 2014-09-04T20:38:33Z | 25,695,462 | 54 | 2014-09-05T23:51:58Z | [
"python",
"matplotlib",
"pip"
] | When I try to upgrade my matplotlib using pip, it outputs:
```
Downloading/unpacking matplotlib from https://pypi.python.org/packages/source/m/matplotlib/matplotlib-1.4.0.tar.gz#md5=1daf7f2123d94745feac1a30b210940c
Downloading matplotlib-1.4.0.tar.gz (51.2MB): 51.2MB downloaded
Running setup.py (path:/tmp/pip_buil... | This is a known bug that has been fixed (<https://github.com/matplotlib/matplotlib/pull/3414>) on master.
The bug is in the handling of searching for a [freetype](http://www.freetype.org/) installation. If you install the Linux package freetype-dev, you will avoid this bug and be able to compile `matplotlib`.
```
sud... |
ubuntu 14.04, pip cannot upgrade matplotlib | 25,674,612 | 37 | 2014-09-04T20:38:33Z | 26,825,009 | 60 | 2014-11-09T04:29:56Z | [
"python",
"matplotlib",
"pip"
] | When I try to upgrade my matplotlib using pip, it outputs:
```
Downloading/unpacking matplotlib from https://pypi.python.org/packages/source/m/matplotlib/matplotlib-1.4.0.tar.gz#md5=1daf7f2123d94745feac1a30b210940c
Downloading matplotlib-1.4.0.tar.gz (51.2MB): 51.2MB downloaded
Running setup.py (path:/tmp/pip_buil... | On Ubuntu 14 server, you also need to install libxft-dev
```
sudo apt-get install libfreetype6-dev libxft-dev
``` |
ubuntu 14.04, pip cannot upgrade matplotlib | 25,674,612 | 37 | 2014-09-04T20:38:33Z | 29,868,312 | 9 | 2015-04-25T17:16:32Z | [
"python",
"matplotlib",
"pip"
] | When I try to upgrade my matplotlib using pip, it outputs:
```
Downloading/unpacking matplotlib from https://pypi.python.org/packages/source/m/matplotlib/matplotlib-1.4.0.tar.gz#md5=1daf7f2123d94745feac1a30b210940c
Downloading matplotlib-1.4.0.tar.gz (51.2MB): 51.2MB downloaded
Running setup.py (path:/tmp/pip_buil... | I had the same issues trying to install `matplotlib` on Python 3 using `pip3`, and it seems that this problem is related to a bare-bones installation of Python 3, and doing a:
```
sudo apt-get build-dep matplotlib
```
followed by
```
sudo pip3 install matplotlib
```
is probably a better solution than selectively in... |
How can I concatenate str and int objects? | 25,675,943 | 16 | 2014-09-04T22:28:29Z | 25,675,944 | 32 | 2014-09-04T22:28:29Z | [
"python",
"string",
"int",
"concatenation"
] | If I try to do the following:
```
things = 5
print("You have " + things + " things.")
```
I get the following error in Python 3.x:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't convert 'int' object to str implicitly
```
... and a similar error in Python 2.x:
```
Trac... | The problem here is that the `+` operator has (at least) two different meanings in Python: for numeric types, it means "add the numbers together":
```
>>> 1 + 2
3
>>> 3.4 + 5.6
9.0
```
... and for sequence types, it means "concatenate the sequences":
```
>>> [1, 2, 3] + [4, 5, 6]
[1, 2, 3, 4, 5, 6]
>>> 'abc' + 'def'... |
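The standard fixes follow directly from this: convert the int explicitly, or use string formatting, which does the conversion for you. A minimal sketch:

```python
things = 5

# convert the int explicitly before concatenating
print("You have " + str(things) + " things.")  # You have 5 things.

# or let string formatting handle the conversion
print("You have {} things.".format(things))    # You have 5 things.
```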
Why is FrozenList different from tuple? | 25,676,107 | 4 | 2014-09-04T22:44:11Z | 25,676,191 | 7 | 2014-09-04T22:52:36Z | [
"python",
"pandas"
] | ```
from pandas.core.base import FrozenList
Type: type
String form: <class 'pandas.core.base.FrozenList'>
File: /site-packages/pandas/core/base.py
Docstring:
Container that doesn't allow setting item *but*
because it's technically non-hashable, will be used
for lookups, appropriately, etc.
```
Why not j... | This is an internal pandas construct. It doesn't use a plain tuple because:
* It inherits from a common pandas class
* It's customizable (e.g. the repr)
* It doesn't have quite all of the functions of a tuple (some are disabled)
* It's not hashable (so it's more like a list here than a tuple)
The construct is used to represent a Mult... |
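The behaviour is visible from public entry points, though `FrozenList` itself is internal (its module path has moved between pandas versions), so the details below may vary:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2], 'b': ['x', 'y', 'x'], 'v': [10, 20, 30]})

# the names of a MultiIndex come back as a FrozenList
names = df.groupby(['a', 'b']).sum().index.names
print(type(names).__name__)  # FrozenList
print(names[0])              # a  (list-like lookups work)

# ...but mutation is disabled, unlike a normal list
try:
    names[0] = 'other'
except TypeError as exc:
    print('mutation rejected:', exc)
```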
Capturing high multi-collinearity in statsmodels | 25,676,145 | 5 | 2014-09-04T22:47:37Z | 25,833,792 | 17 | 2014-09-14T13:32:08Z | [
"python",
"statistics",
"scipy",
"statsmodels"
] | Say I fit a model in statsmodels
```
mod = smf.ols('dependent ~ first_category + second_category + other', data=df).fit()
```
When I do `mod.summary()` I may see the following:
```
Warnings:
[1] The condition number is large, 1.59e+05. This might indicate that there are
strong multicollinearity or other numerical pr... | You can detect high-multi-collinearity by inspecting the *eigen values* of *correlation matrix*. A very low eigen value shows that the data are collinear, and the corresponding *eigen vector* shows which variables are collinear.
If there is no collinearity in the data, you would expect that none of the eigen values ar... |
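A NumPy sketch of that diagnostic, with illustrative data whose third column is built to be exactly collinear with the first two:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + x2                       # exactly collinear by construction
X = np.column_stack([x1, x2, x3])

corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order

# the near-zero eigenvalue flags collinearity; its eigenvector
# (the first column here) shows which variables are involved
print(np.round(eigvals, 6))
print(np.round(eigvecs[:, 0], 3))
```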
can't import is_secure_transport | 25,676,259 | 7 | 2014-09-04T23:00:24Z | 26,908,791 | 7 | 2014-11-13T12:21:15Z | [
"python",
"oauth",
"pycharm"
] | I am using the TwitterAPI package for Python (Ubuntu + PyCharm).
I installed TwitterAPI with "pip install networkx TwitterAPI".
I just import TwitterAPI as follows:
```
from TwitterAPI import TwitterAPI
```
When I execute the Python file, I get the following error:
```
File "/home/hanlu/PycharmProjects/cs579/a1.py", line 29, in ... | Use oauthlib-0.7.1 to fix the problem.
```
sudo pip install oauthlib --upgrade
``` |
Signal handling in multi-threaded Python | 25,676,835 | 3 | 2014-09-05T00:12:18Z | 25,677,040 | 9 | 2014-09-05T00:41:37Z | [
"python",
"linux",
"multithreading"
] | This should be very simple, and I'm surprised that I haven't been able to find this question answered already on Stack Overflow.
I have a daemon like program that needs to respond to the SIGTERM and SIGINT signals in order to work well with upstart. I read that the best way to do this is to run the main loop of th... | The problem is that, as explained in [Execution of Python signal handlers](https://docs.python.org/3.4/library/signal.html#execution-of-python-signal-handlers):
> A Python signal handler does not get executed inside the low-level (C) signal handler. Instead, the low-level signal handler sets a flag which tells the vir... |
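A minimal pattern that follows from this (a Unix-only sketch): install the handler in the main thread, and keep the main thread in a short-sleep loop rather than blocked in `Thread.join()`, so the flag gets noticed and the Python-level handler actually runs. The worker thread below just simulates an external `kill`:

```python
import os
import signal
import threading
import time

stop_event = threading.Event()

def handle_sigterm(signum, frame):
    # always executed in the main thread, whenever it next runs bytecode
    stop_event.set()

signal.signal(signal.SIGTERM, handle_sigterm)

def send_term_later():
    # stand-in for `kill <pid>` arriving from outside
    time.sleep(0.2)
    os.kill(os.getpid(), signal.SIGTERM)

threading.Thread(target=send_term_later, daemon=True).start()

# main loop: short sleeps keep the main thread responsive to the flag
while not stop_event.is_set():
    time.sleep(0.05)

print("shut down cleanly")
```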
How to install xmlrpclib in python 3.4? | 25,676,943 | 5 | 2014-09-05T00:28:47Z | 25,686,579 | 8 | 2014-09-05T13:02:46Z | [
"python",
"xml-rpc",
"python-3.4",
"xmlrpclib",
"xmlrpcclient"
] | When I am trying to install xmlrpclib, I am getting following error in python version 3.4
Downloading/unpacking xmlrpclib
Could not find any downloads that satisfy the requirement xmlrpclib
Some externally hosted files were ignored (use --allow-external xmlrpclib to allow).
Cleaning up...
No distributions at all found... | `xmlrpclib` is part of the standard library in Python 2.x. It's not a package that you need to install.
In Python 3.x you can import it from `xmlrpc` instead: <https://docs.python.org/3/library/xmlrpc.html>. You can import the client from `xmlrpc.client`: <https://docs.python.org/3/library/xmlrpc.client.html#module-xm... |
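A self-contained Python 3 round trip, using a throwaway local server on an ephemeral port (port 0 asks the OS for a free one):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# serve on an ephemeral port so nothing clashes
server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
server.register_function(lambda a, b: a + b, 'add')
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# the client side: xmlrpc.client replaces Python 2's xmlrpclib
proxy = ServerProxy('http://127.0.0.1:%d' % port)
result = proxy.add(2, 3)
server.shutdown()
print(result)  # 5
```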
How does module loading work in CPython? | 25,678,174 | 8 | 2014-09-05T03:21:44Z | 25,985,846 | 10 | 2014-09-23T02:15:38Z | [
"python",
"python-import",
"cpython",
"dynamic-loading",
"python-internals"
] | How does module loading work in CPython under the hood? Especially, how does the dynamic loading of extensions written in C work? Where can I learn about this?
I find the source code itself rather overwhelming. I can see that trusty ol' `dlopen()` and friends is used on systems that support it but without any sense of... | TLDR short version bolded.
References to the Python source code are based on version 2.7.6.
**Python imports most extensions written in C through dynamic loading.** Dynamic loading is an esoteric topic that isn't well documented but it's an absolute prerequisite. Before explaining *how* Python uses it, I must briefly... |
Can os.environ['PYTHONHASHSEED'] be set dynamically from within an application? | 25,684,349 | 5 | 2014-09-05T10:58:02Z | 25,684,784 | 8 | 2014-09-05T11:23:00Z | [
"python"
] | Can it be changed for the current process by simply setting it to a new value like this?
```
os.environ['PYTHONHASHSEED'] = 'random'
``` | It depends on what you mean.
If you mean to change the behaviour of the current interpreter, then the answer is *no*:
1. Modifying `os.environ` isn't reliable, since in some OSes you cannot modify the environment (see the documentation for [`os.environ`](https://docs.python.org/2/library/os.html#os.environ)).
2. Envir... |
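Because the seed is read only once, at interpreter startup, the usual workaround is to set the variable and spawn (or re-exec into) a fresh interpreter. A sketch — the helper name is illustrative:

```python
import os
import subprocess
import sys

def hash_with_seed(seed):
    # a child interpreter reads PYTHONHASHSEED at startup
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.check_output(
        [sys.executable, '-c', "print(hash('pepper'))"], env=env)
    return int(out)

# a fixed seed makes string hashing reproducible across interpreter runs
assert hash_with_seed(42) == hash_with_seed(42)
```

To restart the *current* process instead, set the variable and re-exec with `os.execv(sys.executable, [sys.executable] + sys.argv)`.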
Mocking a subprocess call in Python | 25,692,440 | 3 | 2014-09-05T19:08:04Z | 25,693,097 | 8 | 2014-09-05T19:55:10Z | [
"python",
"mocking"
] | I have a method (`run_script`) I would like to test. Specifically, I want to test that a call to `subprocess.Popen` occurs. It would be even better to test that `subprocess.Popen` is called with certain parameters. When I run the test, however, I get `TypeError: 'tuple' object is not callable`.
How can I test my method to e... | It seems unusual to me that you use the patch decorator over the `run_script` function, since you don't pass a mock argument there.
How about this:
```
def run_script(file_path):
    process = subprocess.Popen(['myscript', '-M', file_path], stdout=subprocess.PIPE)
    output, err = process.communicate()
return process.ret... |
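A working version of that test idea, patching `subprocess.Popen` where it is looked up and asserting on the call (this sketch assumes the `'-M'` flag is quoted, as it must be for the code to run):

```python
import subprocess
from unittest import mock

def run_script(file_path):
    process = subprocess.Popen(['myscript', '-M', file_path],
                               stdout=subprocess.PIPE)
    output, err = process.communicate()
    return process.returncode

with mock.patch('subprocess.Popen') as popen_mock:
    # configure the fake process object the code under test will see
    popen_mock.return_value.communicate.return_value = (b'ok', b'')
    popen_mock.return_value.returncode = 0

    rc = run_script('/tmp/data.txt')

    # verify it was called exactly once, with the expected arguments
    popen_mock.assert_called_once_with(
        ['myscript', '-M', '/tmp/data.txt'], stdout=subprocess.PIPE)

print(rc)  # 0
```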
Bulk update in SQLAlchemy Core using WHERE | 25,694,234 | 4 | 2014-09-05T21:29:33Z | 25,720,751 | 7 | 2014-09-08T09:09:40Z | [
"python",
"orm",
"sqlalchemy"
] | I have managed to work with the bulk insert in SQLAlchemy like:
```
conn.execute(addresses.insert(), [
{'user_id': 1, 'email_address' : '[email protected]'},
{'user_id': 1, 'email_address' : '[email protected]'},
{'user_id': 2, 'email_address' : '[email protected]'},
{'user_id': 2, 'email_address' : '[email protected]'},
]... | Read [Inserts, Updates and Deletes](http://docs.sqlalchemy.org/en/rel_0_9/core/tutorial.html#inserts-updates-and-deletes) section of the documentation. Following code should get you started:
```
from sqlalchemy.sql.expression import bindparam
stmt = addresses.update().\
where(addresses.c.id == bindparam('_id')).\
... |
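Filling in the rest of that pattern as a runnable sketch (this assumes SQLAlchemy with an in-memory SQLite engine; table and bind-parameter names are illustrative). The update statement is prepared once and executed with a list of parameter dicts, executemany-style:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        bindparam, create_engine)

engine = create_engine('sqlite://')
metadata = MetaData()
addresses = Table('addresses', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('email_address', String))
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(addresses.insert(), [
        {'id': 1, 'email_address': 'old1@example.com'},
        {'id': 2, 'email_address': 'old2@example.com'},
    ])
    # one UPDATE statement, run once per parameter dict
    stmt = (addresses.update()
            .where(addresses.c.id == bindparam('_id'))
            .values(email_address=bindparam('_email')))
    conn.execute(stmt, [
        {'_id': 1, '_email': 'new1@example.com'},
        {'_id': 2, '_email': 'new2@example.com'},
    ])

with engine.connect() as conn:
    rows = conn.execute(addresses.select()).fetchall()
emails = sorted(row[1] for row in rows)
print(emails)
```

The underscore-prefixed bind names avoid clashing with the column names, which SQLAlchemy reserves in `update()` statements.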
'module' object has no attribute 'choice' - trying to use random.choice | 25,695,412 | 6 | 2014-09-05T23:45:50Z | 25,695,431 | 16 | 2014-09-05T23:48:27Z | [
"python",
"attributeerror"
] | Could someone please tell me what I may be doing wrong. I keep getting this message when I run my python code:
```
import random
foo = ['a', 'b', 'c', 'd', 'e']
random_item = random.choice(foo)
print random_item
```
Error
> AttributeError: 'module' object has no attribute 'choice' | Shot in the dark: You probably named your script `random.py`. Do not name your script the same name as the module.
I say this because the `random` module indeed has a `choice` method, so the import is probably grabbing the wrong (read: undesired) module. |
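A quick way to confirm the diagnosis is to check which file the import actually resolved to:

```python
import random

# the stdlib module lives under .../lib/pythonX.Y/random.py; a path
# inside your own project means a local random.py shadowed it
print(random.__file__)

print(random.choice(['a', 'b', 'c', 'd', 'e']))
```

If a local file is the culprit, rename it and delete the stale `random.pyc` next to it.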
How to embed HTML into iPython output? | 25,698,448 | 16 | 2014-09-06T08:36:40Z | 35,760,941 | 18 | 2016-03-03T00:31:26Z | [
"python",
"html",
"ipython"
] | Is it possible to embed rendered HTML output into iPython output?
One way is to use
```
from IPython.core.display import HTML
HTML('<a href="http://example.com">link</a>')
```
or (IPython multiline cell alias)
```
%%html
<a href="http://example.com">link</a>
```
Which return a formatted link, but
1. This link doe... | This seems to work for me:
```
from IPython.core.display import display, HTML
display(HTML('<h1>Hello, world!</h1>'))
```
The trick is to wrap it in "display" as well.
Source: <http://python.6.x6.nabble.com/Printing-HTML-within-IPython-Notebook-IPython-specific-prettyprint-tp5016624p5016631.html> |
Replace all occurrences of a string in a pandas dataframe (Python) | 25,698,710 | 12 | 2014-09-06T09:15:43Z | 25,698,756 | 26 | 2014-09-06T09:21:22Z | [
"python",
"replace",
"pandas",
"dataframe"
] | I have a pandas dataframe with about 20 columns.
It is possible to replace all occurrences of a string (here a newline) by manually writing all column names:
```
df['columnname1'] = df['columnname1'].str.replace("\n","<br>")
df['columnname2'] = df['columnname2'].str.replace("\n","<br>")
df['columnname3'] = df['column... | You can use `replace` and pass the strings to find/replace as dictionary keys/items:
```
df.replace({'\n': '<br>'}, regex=True)
```
For example:
```
>>> df = pd.DataFrame({'a': ['1\n', '2\n', '3'], 'b': ['4\n', '5', '6\n']})
>>> df
a b
0 1\n 4\n
1 2\n 5
2 3 6\n
>>> df.replace({'\n': '<br>'}, regex=Tru... |
How to iterate over consecutive chunks of Pandas dataframe efficiently | 25,699,439 | 8 | 2014-09-06T10:46:50Z | 25,701,576 | 8 | 2014-09-06T15:03:35Z | [
"python",
"pandas",
"parallel-processing",
"ipython"
] | I have a large dataframe (several million rows).
I want to be able to do a groupby operation on it, but just grouping by arbitrary consecutive (preferably equal-sized) subsets of rows, rather than using any particular property of the individual rows to decide which group they go to.
The use case: I want to apply a fu... | I'm not sure if this is exactly what you want, but I found these grouper functions on [another SO thread](http://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks) fairly useful for doing a multiprocessor pool.
Here's a short example from that thread, which might do some... |
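For the grouping step itself (leaving the multiprocessing aside), labelling each row by integer division gives consecutive, equal-sized chunks; a small sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': range(10)})
chunk_size = 4

# rows 0-3 get label 0, rows 4-7 label 1, rows 8-9 label 2
labels = np.arange(len(df)) // chunk_size
sums = df.groupby(labels)['x'].sum()
print(sums.tolist())  # [6, 22, 17]
```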
BackgroundSubtractorMOG still keep the object after it left the frame | 25,701,657 | 5 | 2014-09-06T15:11:27Z | 27,509,183 | 7 | 2014-12-16T16:20:50Z | [
"python",
"opencv",
"background-subtraction"
] | I tried to use BackgroundSubtractorMOG to remove the background, but some objects that have already left the frame still show up in the result of BackgroundSubtractorMOG.apply().
Here is my code
```
inputVideo = cv2.VideoCapture('input.avi')
fgbg = cv2.BackgroundSubtractorMOG(... | The problem is in `fgbg.apply`. For some reason, the `learningRate` is set to `0`. Make call like this:
```
history = 10 # or whatever you want it to be
fgmask = fgbg.apply(frame, learningRate=1.0/history)
```
Credit should go to Sebastian Ramirez, who opened a ticket in OpenCV and found the solution. |
Specify app dependency in migration | 25,703,414 | 3 | 2014-09-06T18:22:26Z | 25,705,955 | 7 | 2014-09-07T00:13:00Z | [
"python",
"django",
"django-migrations"
] | I'm trying to add initial data in Django 1.7 and I've read that it is recommended to use data migrations.
I've created my migration file correctly, called "0001\_groups", in which I create a few `contrib.auth` groups and permissions.
The problem is that it is run before the `auth` migrations are run.
I went to find ... | I've found out that you can reference the last migration with `__latest__`:
```
dependencies = [
('auth', '__latest__'),
]
``` |
Image Cropping Tool (Python) | 25,705,773 | 5 | 2014-09-06T23:41:22Z | 25,705,844 | 9 | 2014-09-06T23:55:55Z | [
"python",
"image",
"python-imaging-library",
"tiff",
"pillow"
] | I'm a film photographer who deals a lot with cropping/image resizing. Because I shoot film, I have to scan my negatives and crop each frame out of the batch scan. My scanner scans four strips of six images each (24 frames/crops per scan).
A friend of mine wrote me a script for Python that automatically crops images ba... | PIL is merely trying to protect you. It'll not open larger images, as that could be a vector of attack for a malicious user to give you a large image that'll expand to use up all memory.
Since you are not a malicious user and are not accepting images from anyone else, you can simply *disable* the limit:
```
from PIL ... |
Why socket implementation is slower than requests? | 25,705,810 | 4 | 2014-09-06T23:49:51Z | 25,705,830 | 7 | 2014-09-06T23:53:54Z | [
"python",
"sockets",
"python-requests"
] | I have a python 3.4 script fetching multiple web pages. At first, I used requests library to fetch pages:
```
def get_page_requsets(url):
r = requests.get(url)
return r.content
```
The above code gives an average speed of 4.6 requests per second.
To increase speed, I rewrote the function to use the sockets library:
```
d... | `requests` uses [`urllib3`](http://urllib3.readthedocs.org/en/latest/), which handles HTTP connections very efficiently. Connections to the same server are re-used wherever possible, saving you the socket connection and teardown costs:
> * Re-use the same socket connection for multiple requests, with optional client-s... |