| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Add n tasks to celery queue and wait for the results | 26,686,850 | 9 | 2014-11-01T06:42:07Z | 36,466,097 | 9 | 2016-04-07T03:17:18Z | [
"python",
"django",
"celery",
"django-celery"
] | I would like to add multiple tasks to the celery queue and wait for the results. I have various ideas for how I could achieve this utilising some form of shared storage (memcached, redis, db, etc.); however, I would have thought it's something that Celery can handle automatically, but I can't find any resources online.
**Code example**
`... | For **Celery >= 3.0**, TaskSet is [deprecated](http://docs.celeryproject.org/en/3.0/internals/deprecation.html#taskset) in favour of [group](http://docs.celeryproject.org/en/latest/userguide/canvas.html#groups).
```
from celery import group
from tasks import add
job = group([
add.s(2, 2),
ad... |
Python: How to get PID by process name? | 26,688,936 | 10 | 2014-11-01T11:39:03Z | 26,688,998 | 19 | 2014-11-01T11:47:47Z | [
"python",
"linux",
"python-2.7",
"pid"
] | Is there any way I can get the PID by process name in Python?
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3110 meysam 20 0 971m 286m 63m S 14.0 7.9 14:24.50 chrome
```
For example I need to... | You can get the pid of processes by name using `pidof` through [subprocess.check\_output](https://docs.python.org/2/library/subprocess.html#subprocess.check_output):
```
from subprocess import check_output
def get_pid(name):
return check_output(["pidof",name])
In [5]: get_pid("java")
Out[5]: '23366\n'
```
`check... |
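Since `pidof` prints space-separated PIDs and exits non-zero when nothing matches, a slightly more defensive sketch could look like this (the helper name `get_pids` is mine, not from the answer):

```python
from subprocess import check_output, CalledProcessError

def get_pids(name):
    """Return all PIDs for processes named *name*, or [] if none match."""
    try:
        return [int(pid) for pid in check_output(["pidof", name]).split()]
    except (CalledProcessError, OSError):
        # pidof exits non-zero when no process matches; OSError covers
        # systems where pidof itself is not installed.
        return []

print(get_pids("no-such-process-xyz"))  # []
```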
Matplotlib: ValueError: x and y must have same first dimension | 26,690,480 | 14 | 2014-11-01T14:57:03Z | 26,690,526 | 16 | 2014-11-01T15:02:58Z | [
"python",
"numpy",
"matplotlib"
] | I am trying to fit a line of best fit to my matplotlib graph. I keep getting the error that x and y do not have the same first dimension. But they both have lengths of 15. What am I doing wrong?
```
import matplotlib.pyplot as plt
from scipy import stats
import numpy as np
x = [0.46,0.59,0.68,0.99,0.39,0.31,1.... | You should make `x` and `y` numpy arrays, not lists:
```
x = np.array([0.46,0.59,0.68,0.99,0.39,0.31,1.09,
0.77,0.72,0.49,0.55,0.62,0.58,0.88,0.78])
y = np.array([0.315,0.383,0.452,0.650,0.279,0.215,0.727,0.512,
0.478,0.335,0.365,0.424,0.390,0.585,0.511])
```
With this change, it produces ... |
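Once `x` and `y` are arrays, the line of best fit itself can be computed with `np.polyfit`; a sketch with a shortened data set for brevity:

```python
import numpy as np

x = np.array([0.46, 0.59, 0.68, 0.99, 0.39, 0.31, 1.09])
y = np.array([0.315, 0.383, 0.452, 0.650, 0.279, 0.215, 0.727])

# Degree-1 least-squares fit: returns the slope and intercept of the line.
slope, intercept = np.polyfit(x, y, 1)
line = slope * x + intercept  # same first dimension as x, so plotting works
```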
Can named arguments be used with Python enums? | 26,691,784 | 4 | 2014-11-01T18:42:03Z | 26,703,636 | 7 | 2014-11-02T19:52:45Z | [
"python",
"python-3.x",
"enums",
"python-3.4",
"named-parameters"
] | Example:
```
class Planet(Enum):
MERCURY = (mass: 3.303e+23, radius: 2.4397e6)
def __init__(self, mass, radius):
self.mass = mass # in kilograms
self.radius = radius # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a fe... | While you can't use named arguments the way you describe with enums, you can get a similar effect with a [`namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple) mixin:
```
from collections import namedtuple
from enum import Enum
Body = namedtuple("Body", ["mass", "radius"])
class Pl... |
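A complete sketch of that `namedtuple` mixin pattern (planet values assumed from the linked docs example, trimmed to two members):

```python
from collections import namedtuple
from enum import Enum

Body = namedtuple("Body", ["mass", "radius"])

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)

# Members now expose the named fields directly.
print(Planet.MERCURY.radius)  # 2439700.0
```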
NLTK and Stopwords Fail #lookuperror | 26,693,736 | 6 | 2014-11-01T22:05:16Z | 26,693,897 | 30 | 2014-11-01T22:26:03Z | [
"python",
"nltk",
"sentiment-analysis",
"stop-words"
] | I am trying to start a sentiment analysis project and I will use the stop words method. I did some research and found that nltk has stopwords, but when I execute the command there is an error.
What I do is the following, in order to know which are the words that nltk use (like what you may found here <http://www... | You don't seem to have the stopwords corpus on your computer.
You need to start the NLTK Downloader and download all the data you need.
Open a Python console and do the following:
```
>>> import nltk
>>> nltk.download()
showing info http://nltk.github.com/nltk_data/
```
In the GUI window that opens simply press the... |
How to write a lot of nested 'for' loops (Recursively) | 26,695,506 | 2 | 2014-11-02T02:42:26Z | 26,695,512 | 8 | 2014-11-02T02:44:20Z | [
"python",
"loops",
"for-loop"
] | ```
word = "word"
# Splitting word into its characters
newword = []
for char in word:
newword.append(char)
print newword
#getting all permutations
test= []
for i in newword:
for j in newword:
if i != j:
for k in newword:
if j != k and i!= k:
for l in n... | ```
In [10]: import itertools
In [11]: word = "word"
In [12]: test = [''.join(perm) for perm in itertools.permutations(word)]
In [13]: test
Out[13]:
['word',
'wodr',
'wrod',
'wrdo',
'wdor',
'wdro',
'owrd',
'owdr',
'orwd',
'ordw',
'odwr',
'odrw',
'rwod',
'rwdo',
'rowd',
'rodw',
'rdwo',
'rdow',
'dwo... |
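`itertools.permutations` is the idiomatic answer, but since the title asks how to replace the nested loops recursively, here is one hedged sketch of a recursive equivalent:

```python
def perms(seq):
    """Recursively build every ordering of the items in seq (a list)."""
    if len(seq) <= 1:
        return [list(seq)]
    out = []
    for i, item in enumerate(seq):
        rest = seq[:i] + seq[i + 1:]       # everything except item i
        for tail in perms(rest):
            out.append([item] + tail)
    return out

test = [''.join(p) for p in perms(list("word"))]
print(len(test))  # 24 orderings of a 4-letter word
```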
Django SMTPAuthenticationError | 26,697,565 | 13 | 2014-11-02T08:46:33Z | 26,698,173 | 28 | 2014-11-02T10:11:30Z | [
"python",
"django"
] | I am new to django and developing a web application using django. I have successfully set up the signup functionality using **Userena** in my web application and can register as a user with a verification email.
I can show you my SMTP settings in my settings.py file
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailB... | A relatively recent change in Google's authentication system means you're going to have to "allow less secure app access" to your Google account, in order for this to work.
In your error, you are recommended to visit this link: <https://support.google.com/mail/answer/78754>
On that page:
Step #2 asks you to try [Dis... |
MVC design with QtDesigner and PySide | 26,698,628 | 6 | 2014-11-02T11:09:50Z | 26,699,122 | 13 | 2014-11-02T12:09:08Z | [
"python",
"model-view-controller",
"pyqt",
"pyside",
"qt-designer"
] | Python newbie coming from Java (+SWT/Windowbuilder) and am having difficulty working out how to properly code a large desktop app in Python/Qt4(QtDesigner)/PySide.
I would like to keep any view logic in a controller class outside the .ui file (and it's .py conversion). Firstly as then the logic is independent of GUI f... | I'd start with a file structure like:
```
project\
main_app.py # main Qt application with App class
main_view.ui # Qt designer file for main view
other_view.ui
views\
main_view.py # main view with MainView class
other_view.py
gen\
ui_main_view.py ... |
Installing tkinter on ubuntu 14.04 | 26,702,119 | 7 | 2014-11-02T17:18:13Z | 26,703,204 | 17 | 2014-11-02T19:08:02Z | [
"python",
"python-2.7",
"tkinter",
"ubuntu-14.04"
] | I want to run python code on Ubuntu 14.04, but when I execute it, it gives me the following error message
```
Traceback (most recent call last):
File "main.py", line 2, in <module>
from tkinter import *
ImportError: No module named tkinter
``` | Try writing the following in the terminal:
```
sudo apt-get install python-tk
```
Don't forget to actually import Tkinter module at the beginning of your program:
```
import Tkinter
``` |
Extracting multiple submatrices in Python | 26,706,054 | 10 | 2014-11-03T00:14:29Z | 27,214,380 | 7 | 2014-11-30T14:35:32Z | [
"python",
"indexing"
] | I am trying to extract multiple submatrices if my sparse matrix has multiple regions of non-zero values.
For example,
Say I have the following matrix:
```
x = np.array([[0,0,0,0,0,0],
              [0,1,1,0,0,0],
              [0,1,1,0,0,1],
              [0,0,0,0,1,1],
              [0,0,0,0,1,0]])
```
Then I need to be ... | Procedure:
1. Delete leading and trailing rows and columns with all zeros. (Not middle ones)
2. Find all empty rows and columns and split matrix on these indices. This creates a list of matrices
3. For each newly created matrix repeat the procedure recursively until no further splitting is possible
Code:
```
def del... |
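An alternative to the recursive splitting, assuming SciPy is available, is to label the connected regions with `scipy.ndimage` and slice out each bounding box:

```python
import numpy as np
from scipy import ndimage

x = np.array([[0, 0, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0, 1],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1, 0]])

labels, n = ndimage.label(x)                       # n connected regions
submatrices = [x[box] for box in ndimage.find_objects(labels)]
print(n)  # 2
```

Note this returns the bounding box of each connected region, which can differ from the recursive row/column splitting when regions share rows or columns.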
django filtering on a field in a foreign key object | 26,708,652 | 4 | 2014-11-03T06:02:01Z | 26,708,915 | 7 | 2014-11-03T06:28:28Z | [
"python",
"django"
] | Given the following models from the django polls tutorial:
```
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def __str__(self):
return self.question_text
def was_published_recently(self):
now = timezone... | You can also use like below
```
from django.db.models import Count
......
def get_queryset(self):
return Question.objects.annotate(num_choice=Count('choice')).filter(pub_date__lte=timezone.now(), num_choice=0)
``` |
Django 1.7 migrations won't recreate a dropped table, why? | 26,713,231 | 15 | 2014-11-03T11:24:26Z | 26,713,469 | 10 | 2014-11-03T11:37:20Z | [
"python",
"django",
"django-1.7"
] | Using Django 1.7 migrations.
I accidentally dropped a table in my database. I assumed that by running migration again this would recreate the table but no, Django states "No migrations to apply".
How do I get Django to recreate the table?
I have run:
```
> makemigrations - No changes detected
> migrate - No migrati... | Migrations check for differences in your models, then translates that to actions, which are translated to SQL. It **does not** automatically sync the db scheme with your models, and it has no way of knowing you dropped a table (it doesn't know about manual changes because, well, you're not supposed to do manual changes... |
Django 1.7 migrations won't recreate a dropped table, why? | 26,713,231 | 15 | 2014-11-03T11:24:26Z | 34,751,698 | 19 | 2016-01-12T18:57:41Z | [
"python",
"django",
"django-1.7"
] | Using Django 1.7 migrations.
I accidentally dropped a table in my database. I assumed that by running migration again this would recreate the table but no, Django states "No migrations to apply".
How do I get Django to recreate the table?
I have run:
```
> makemigrations - No changes detected
> migrate - No migrati... | Go to your database and find the table `django_migrations`. Delete all the rows which have `app` equals your app name.
Then do a `makemigrations` & `migrate` will work. |
Django 1.7 migrations won't recreate a dropped table, why? | 26,713,231 | 15 | 2014-11-03T11:24:26Z | 36,565,161 | 7 | 2016-04-12T06:28:46Z | [
"python",
"django",
"django-1.7"
] | Using Django 1.7 migrations.
I accidentally dropped a table in my database. I assumed that by running migration again this would recreate the table but no, Django states "No migrations to apply".
How do I get Django to recreate the table?
I have run:
```
> makemigrations - No changes detected
> migrate - No migrati... | Another solution I've found and works perfectly:
In django 1.7:
1. Delete your migrations folder
2. In the database: `DELETE FROM django_migrations WHERE app = 'app_name'`.
You could alternatively just truncate this table.
3. `python manage.py makemigrations`
4. `python manage.py migrate --fake`
In django 1.9.5:... |
Django delete superuser | 26,713,443 | 12 | 2014-11-03T11:36:06Z | 26,713,562 | 16 | 2014-11-03T11:42:59Z | [
"python",
"django"
] | This may be a duplicate, but I couldn't find the question anywhere, so I'll go ahead and ask:
Is there a simple way to delete a superuser from the terminal, perhaps analogous to Django's `createsuperuser` command? | There's no built in command but you can easily do this from the shell:
```
> django-admin.py shell
$ from django.contrib.auth.models import User
$ User.objects.get(username="joebloggs", is_superuser=True).delete()
``` |
Convert Pandas DataFrame to dictionary | 26,716,616 | 11 | 2014-11-03T14:47:53Z | 26,716,774 | 26 | 2014-11-03T14:55:51Z | [
"python",
"dictionary",
"pandas"
] | I have a DataFrame with four columns. I want to convert this DataFrame to a python dictionary. I want the elements of the first column to be the `keys` and the elements of the other columns in the same row to be the `values`.
`Dataframe:`
```
ID A B C
0 p 1 3 2
1 q 4 3 2
2 r 4 0 9
```
Output should be lik... | The `to_dict()` method sets the column names as dictionary keys so you'll need to reshape your DataFrame slightly. Setting the 'ID' column as the index and then transposing the DataFrame is one way to achieve this.
`to_dict()` also accepts an `outtype` keyword argument which you'll need in order to output a list of va... |
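A concrete sketch of that reshape (note that in later pandas versions the `outtype` keyword was renamed to `orient`):

```python
import pandas as pd

df = pd.DataFrame({'ID': ['p', 'q', 'r'],
                   'A': [1, 4, 4],
                   'B': [3, 3, 0],
                   'C': [2, 2, 9]})

# Make ID the index, transpose, then emit each original row as a list.
result = df.set_index('ID').T.to_dict('list')
print(result)  # {'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
```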
smtplib import email.utils error | 26,717,008 | 4 | 2014-11-03T15:07:28Z | 26,717,106 | 8 | 2014-11-03T15:12:21Z | [
"python",
"python-2.7",
"smtplib"
] | I am getting the following error when I am trying to use smtplib in my Python code.
```
Traceback (most recent call last):
File "myemail.py", line 1, in <module>
import smtplib
File "/usr/lib64/python2.7/smtplib.py", line 46, in <module>
import email.utils
ImportError: No module named utils
```
Surprisingly, I can in... | Make sure that there's no `email.py` in the same directory where `myemail.py` lives. That prevents importing of the standard library module `email`.
Also make sure there are no remaining `email.pyc` in that directory.
---
If you use your own `email` module, rename it with a different name. |
Python cannot allocate memory using multiprocessing.pool | 26,717,120 | 6 | 2014-11-03T15:13:02Z | 26,724,420 | 8 | 2014-11-03T22:25:44Z | [
"python",
"python-2.7",
"memory-management",
"memory-leaks",
"python-multiprocessing"
] | My code (part of a genetic optimization algorithm) runs a few processes in parallel, waits for all of them to finish, reads the output, and then repeats with a different input. Everything was working fine when I tested with 60 repetitions. Since it worked, I decided to use a more realistic number of repetitions, 200. I... | As shown in the comments to my question, the answer came from Puciek.
The solution was to close the pool of processes after it is finished. I thought that it would be closed automatically because the `results` variable is local to `RunMany`, and would be deleted after `RunMany` completed. However, python doesn't alway... |
pg_config executable not found when using pgxnclient on Windows 7 x64 | 26,717,436 | 7 | 2014-11-03T15:30:39Z | 28,069,597 | 12 | 2015-01-21T14:27:23Z | [
"python",
"postgresql-9.3"
] | I installed Python 2.7.8 and pgxn client. And I tried to run this statement from command line from the bin folder and path is setup correctly
```
pgxnclient install http://api.pgxn.org/dist/pg_repack/1.2.1/pg_repack-1.2.1.zip
```
But I got an error `pg_config executable not found`. | Background: pg\_config is the configuration utility provided by **PostgreSQL**. This utility is used by various applications.
**Solution:**
1. Install [PostgreSQL](http://www.postgresql.org/download/windows/).
2. Set the path. System Properties > Advanced
PATH:C:\Program Files (x86)\PostgreSQL\9.4\bin\;
From this... |
Python - Theano scan() function | 26,718,812 | 8 | 2014-11-03T16:43:48Z | 26,789,849 | 9 | 2014-11-06T21:37:04Z | [
"python",
"theano"
] | I cannot fully understand the behaviour of theano.scan().
Here's an example:
```
import numpy as np
import theano
import theano.tensor as T
def addf(a1,a2):
return a1+a2
i = T.iscalar('i')
x0 = T.ivector('x0')
step= T.iscalar('step')
results, updates = theano.scan(fn=addf,
outputs_info... | When you use taps=[-1], scan assumes that the information in the output info is used as is. That means the addf function will be called with a vector and the non\_sequence as inputs. If you convert x0 to a scalar, it will work as you expect:
```
import numpy as np
import theano
import theano.tensor as T
def addf(a1,a... |
Django 1.7 blank CharField/TextField convention | 26,719,088 | 3 | 2014-11-03T16:59:11Z | 26,719,174 | 7 | 2014-11-03T17:04:37Z | [
"python",
"django",
"django-1.7",
"django-migrations"
] | Using Django's new migration framework, let's say I have the following model that already exists in the database:
```
class TestModel(models.Model):
field_1 = models.CharField(max_length=20)
```
I now want to add a new TextField to the model, so it looks like this:
```
class TestModel(models.Model):
field_1 ... | It's not a bug, it's documented and logical.
You add a new field, which is (by best practice, as you noticed) not `NULL`able so django has to put something into it for the existing records - I guess you want it to be the empty string.
you can
```
1) Provide a one-off default now (will be set on all existing rows)
``... |
Faster way to rank rows in subgroups in pandas dataframe | 26,720,916 | 4 | 2014-11-03T18:47:25Z | 26,721,325 | 7 | 2014-11-03T19:13:58Z | [
"python",
"pandas"
] | I have a pandas data frame that is composed of different subgroups.
```
df = pd.DataFrame({
'id':[1, 2, 3, 4, 5, 6, 7, 8],
'group':['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
'value':[.01, .4, .2, .3, .11, .21, .4, .01]
})
```
I want to find the rank of each id in its group with say, lower val... | rank is cythonized so should be very fast. And you can pass the same options as `df.rank()`
[here](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rank.html?highlight=rank#pandas.DataFrame.rank) are the docs for `rank`. As you can see, tie-breaks can be done in one of five different ways via the ... |
PIL: ImportError: The _imaging extension was built for another version of pillow or PIL | 26,720,968 | 13 | 2014-11-03T18:51:00Z | 27,804,221 | 14 | 2015-01-06T17:47:46Z | [
"python",
"python-2.7",
"python-imaging-library"
] | I get the error:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-4-0f6709e38f49> in <module>()
----> 1 from PIL import Image
C:\Anaconda\lib\site-packages\PIL\Image.py in <module>()
61 f... | This is only an installation issue.
First install pip on your system if it is not installed; it is available for Windows as well: <https://pip.pypa.io/en/latest/installing.html>
Then upgrade your numpy, pillow and scipy:
pip install -U numpy
pip install -U pillow
pip install -U scipy
or the best option for Windows is to use [an...
Contains of HashSet<Integer> in Python | 26,724,002 | 7 | 2014-11-03T21:58:13Z | 26,724,024 | 17 | 2014-11-03T22:00:10Z | [
"python",
"contains",
"hashset"
] | In Java we have `HashSet<Integer>`; I need a similar structure in Python to use contains like below:
```
A = [1, 2, 3]
S = set()
S.add(2)
for x in A:
if S.contains(x):
print "Example"
```
Could you please help? | Just use a set:
```
>>> l = set()
>>> l.add(1)
>>> l.add(2)
>>> 1 in l
True
>>> 34 in l
False
```
The same works for lists:
```
>>> ll = [1,2,3]
>>> 2 in ll
True
>>> 23 in ll
False
``` |
Pandas SettingWithCopyWarning | 26,724,378 | 10 | 2014-11-03T22:23:20Z | 26,724,581 | 19 | 2014-11-03T22:37:48Z | [
"python",
"pandas",
"warnings"
] | Python 3.4 and Pandas 0.15.0
df is a dataframe and col1 is a column. With the code below, I'm checking for the presence of the value 10 and replacing such values with 1000.
```
df.col1[df.col1 == 10] = 1000
```
Here's another example. This time, I'm changing values in col2 based on index.
```
df.col2[df.index == 15... | The issue here is that `df.col1[df.col1 == 10]` returns a copy.
So I would say:
```
row_index = df.col1 == 10
# then with the form .loc[row_indexer, col_indexer]
df.loc[row_index, 'col1'] = 1000
``` |
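The same fix as a runnable sketch:

```python
import pandas as pd

df = pd.DataFrame({'col1': [10, 20, 10, 30]})

# One .loc call with both a row indexer and a column indexer:
# no chained indexing, so no SettingWithCopyWarning.
df.loc[df.col1 == 10, 'col1'] = 1000
print(df.col1.tolist())  # [1000, 20, 1000, 30]
```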
What is the equivalent of the collection.getIndexes() shell command in pymongo? | 26,724,393 | 5 | 2014-11-03T22:23:54Z | 26,724,998 | 8 | 2014-11-03T23:13:39Z | [
"python",
"mongodb",
"pymongo"
] | I can't seem to find a getIndexes() command implemented as part of pymongo's Collection object - is that intentional? Is it implemented somewhere else in the class hierarchy?
if it isn't, what's the pymongo canonical way for getting the same effect? | What you might be looking for is [`index_information()`](http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.index_information) at the Collection level. From the docs:
> Get information on this collection's indexes.
>
> Returns a dictionary where the keys are index names (... |
Which Python Conditional Return Statement Is The Most Pythonic? | 26,725,284 | 4 | 2014-11-03T23:38:48Z | 26,725,294 | 9 | 2014-11-03T23:39:43Z | [
"python",
"function",
"return",
"conditional",
"conventions"
] | Which of the following is the proper way to return something with Python when using conditionals? Does it matter? And why?
```
# OPTION 1
if conditional:
return a
else:
return b
# OPTION 2
if conditional:
return a
return b
``` | The *proper* Python way would be:
```
return conditional
```
["Simple is better than complex"](http://legacy.python.org/dev/peps/pep-0020/) remember? :)
---
Regarding your edited question, I would use `# OPTION 2` because ["Flat is better than nested."](http://legacy.python.org/dev/peps/pep-0020/) and this solution... |
How to reverse a bitwise OR operation? | 26,727,044 | 2 | 2014-11-04T03:02:22Z | 26,727,086 | 11 | 2014-11-04T03:06:16Z | [
"python",
"bit-manipulation",
"bitwise-operators",
"bitwise-or"
] | Here's what I've done:
```
93 | 199
```
which returns
```
223
```
I understand that this is because `0b1011101 | 0b11000111` is `0b11011111`
However, suppose I want to do the reverse operation. How do I get `0b1011101` from a bitwise operation between `0b11000111` and `0b11011111`? | You can't get an unambiguous answer in the general case. If `C=A|B`, then wherever you have a 1 in C and a 1 in A, the corresponding bit of B could have been either 0 or 1.
In your example, 93|199=223, but 92|199 is also 223. So given 223 and 199 there's no single answer. |
Multiple files for one argument in argparse Python 2.7 | 26,727,314 | 8 | 2014-11-04T03:36:48Z | 27,008,413 | 12 | 2014-11-19T03:23:36Z | [
"python",
"file",
"parsing",
"arguments",
"argparse"
] | **Trying to make an argument in argparse where one can input several file names that can be read.**
In this example, I'm just trying to print each of the file objects to make sure it's working correctly, but I get the error:
```
error: unrecognized arguments: f2.txt f3.txt
```
How can I get it to recognize all of ... | If your goal is to read **one or more** readable files, you can try this:
```
parser.add_argument('file', type=argparse.FileType('r'), nargs='+')
```
`nargs='+'` gathers all command line arguments into a list. There must also be one or more arguments or an error message will be generated.
`type=argparse.FileType('r'... |
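A self-contained sketch of the pattern (the throwaway temp files exist only so the example runs without `f1.txt`/`f2.txt` on disk):

```python
import argparse
import os
import tempfile

# Create two throwaway files to stand in for file names on the command line.
paths = []
for text in ("first", "second"):
    fd, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w") as handle:
        handle.write(text)
    paths.append(path)

parser = argparse.ArgumentParser()
parser.add_argument('file', type=argparse.FileType('r'), nargs='+')
args = parser.parse_args(paths)          # normally taken from sys.argv

contents = [f.read() for f in args.file]
print(contents)  # ['first', 'second']
```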
Confused about try/except with custom Exception | 26,733,648 | 9 | 2014-11-04T11:09:47Z | 26,734,060 | 11 | 2014-11-04T11:31:06Z | [
"python",
"try-catch",
"except"
] | My code:
```
class AError(Exception):
print 'error occur'
for i in range(3):
try:
print '---oo'
raise AError
except AError:
print 'get AError'
else:
print 'going on'
finally:
print 'finally'
```
When I run the above code, the output is this:
```
error occur... | To clarify [Paul's answer](http://stackoverflow.com/a/26733734/3001761), here's a simple example:
```
class Test(object):
print "Class being defined"
def __init__(self):
print "Instance being created"
for _ in range(3):
t = Test()
```
The output from this will be:
```
Class being defined
Inst... |
PyPi description markdown doesn't work | 26,737,222 | 12 | 2014-11-04T14:09:31Z | 26,737,258 | 7 | 2014-11-04T14:11:40Z | [
"python",
"pypi"
] | I uploaded a package to PyPi using:
```
python setup.py register -r pypi
python setup.py sdist upload -r pypi
```
I'm trying to modify the decsription, I wrote (please don't edit the formatting of the following piece of code, I made it in purpose to demonstrate my problem):
```
**nose-docstring-plugin**
This plugin... | PyPI does *not* support Markdown, so your README will not be rendered into HTML.
If you want a rendered README, stick with reStructuredText; the [Sphinx introduction to reStructuredText](http://sphinx-doc.org/rest.html) is a good starting point.
You probably want to install the [`docutils` package](https://pypi.pytho... |
PyPi description markdown doesn't work | 26,737,222 | 12 | 2014-11-04T14:09:31Z | 26,737,672 | 20 | 2014-11-04T14:31:24Z | [
"python",
"pypi"
] | I uploaded a package to PyPi using:
```
python setup.py register -r pypi
python setup.py sdist upload -r pypi
```
I'm trying to modify the decsription, I wrote (please don't edit the formatting of the following piece of code, I made it in purpose to demonstrate my problem):
```
**nose-docstring-plugin**
This plugin... | As `@Martijn Pieters` stated, [PyPi](https://pypi.python.org) does not support Markdown. I'm not sure where I learned the following trick, but you can use [Pandoc](http://johnmacfarlane.net/pandoc/) and [PyPandoc](https://github.com/bebraw/pypandoc) to convert your Markdown files into RestructuredText before uploading ... |
Script using multiprocessing module does not terminate | 26,738,648 | 6 | 2014-11-04T15:16:16Z | 26,738,946 | 10 | 2014-11-04T15:30:47Z | [
"python",
"python-2.7",
"multiprocessing",
"python-multiprocessing"
] | The following code does not print `"here"`. What is the problem?
I tested it on both my machines (windows 7, Ubuntu 12.10), and
<http://www.compileonline.com/execute_python_online.php>
It does not print `"here"` in all cases.
```
from multiprocessing import Queue, Process
def runLang(que):
print "start"
myD... | This is because when you `put` lots of items into a `multiprocessing.Queue`, they eventually get buffered in memory, once the underlying `Pipe` is full. The buffer won't get flushed until something starts reading from the other end of the `Queue`, which will allow the `Pipe` to accept more data. A `Process` cannot term... |
Python extract wav from video file | 26,741,116 | 2 | 2014-11-04T17:16:20Z | 26,741,357 | 8 | 2014-11-04T17:29:42Z | [
"python",
"audio",
"video",
"ffmpeg",
"gstreamer"
] | Related:
[How to extract audio from a video file using python?](http://stackoverflow.com/questions/19216450/how-to-extract-audio-from-a-video-file-using-python)
[Extract audio from video as wav](http://stackoverflow.com/questions/2117488/extract-audio-from-video-as-wav)
[How to rip the audio from a video?](http://st... | It is a very easy task using **ffmpeg** with **python**'s subprocess module, and there is a reason why people point to this solution as a good one.
This is the basic command for extracting audio from a given video file:
> ffmpeg -i test.mp4 -ab 160k -ac 2 -ar 44100 -vn audio.wav
The Python code is just wrapping this c... |
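A sketch of such a wrapper; building the argument list in a separate function keeps it inspectable without actually invoking ffmpeg (both function names are hypothetical):

```python
import subprocess

def wav_extract_cmd(video_path, wav_path):
    # -vn drops the video stream; -ac 2 / -ar 44100 give stereo 44.1 kHz.
    return ["ffmpeg", "-i", video_path,
            "-ab", "160k", "-ac", "2", "-ar", "44100", "-vn", wav_path]

def extract_wav(video_path, wav_path):
    """Run ffmpeg; raises CalledProcessError on a non-zero exit."""
    subprocess.check_call(wav_extract_cmd(video_path, wav_path))

print(" ".join(wav_extract_cmd("test.mp4", "audio.wav")))
```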
How does the pyspark mapPartitions function work? | 26,741,714 | 5 | 2014-11-04T17:51:37Z | 26,745,371 | 8 | 2014-11-04T21:27:59Z | [
"python",
"scala",
"bigdata",
"apache-spark"
] | So I am trying to learn Spark using Python (Pyspark). I want to know how the function `mapPartitions` work. That is what Input it takes and what Output it gives. I couldn't find any proper example from the internet. Lets say, I have an RDD object containing lists, such as below.
```
[ [1, 2, 3], [3, 2, 4], [5, 2, 7] ]... | mapPartitions should be thought of as a map operation over partitions, not over the elements of the partition. Its input is the set of current partitions; its output will be another set of partitions.
The function you pass to map must take an individual element of your RDD.
The function you pass to mapPartitions must take a... |
How does the pyspark mapPartitions function work? | 26,741,714 | 5 | 2014-11-04T17:51:37Z | 28,077,116 | 7 | 2015-01-21T21:24:29Z | [
"python",
"scala",
"bigdata",
"apache-spark"
] | So I am trying to learn Spark using Python (Pyspark). I want to know how the function `mapPartitions` work. That is what Input it takes and what Output it gives. I couldn't find any proper example from the internet. Lets say, I have an RDD object containing lists, such as below.
```
[ [1, 2, 3], [3, 2, 4], [5, 2, 7] ]... | It's easier to use mapPartitions with a generator function using the `yield` syntax:
```
def filter_out_2(partition):
for element in partition:
if element != 2:
yield element
filtered_lists = data.mapPartitions(filter_out_2)
``` |
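The partition-level semantics can be seen without a Spark cluster by treating each partition as a plain iterable (a rough plain-Python analogy, not actual pyspark):

```python
def filter_out_2(partition):
    for element in partition:
        if element != 2:
            yield element

# Each inner list stands in for one RDD partition.
partitions = [[1, 2, 3], [3, 2, 4], [5, 2, 7]]
result = [list(filter_out_2(p)) for p in partitions]
print(result)  # [[1, 3], [3, 4], [5, 7]]
```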
Interpreting scipy.stats.entropy values | 26,743,201 | 3 | 2014-11-04T19:16:53Z | 26,743,619 | 8 | 2014-11-04T19:42:07Z | [
"python",
"statistics",
"scipy",
"entropy"
] | I am trying to use **scipy.stats.entropy** to estimate the **Kullback–Leibler** (KL) divergence between two distributions. More specifically, I would like to use the KL as a metric to decide how consistent two distributions are.
However, I cannot interpret the KL values. For example:
> t1=numpy.random.normal(-2.5,0.1,10... | `numpy.random.normal(-2.5,0.1,1000)` is a *sample* from a normal distribution. It's just 1000 numbers in a random order. The [documentation](http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.stats.entropy.html) for `entropy` says:
> `pk[i]` is the (possibly unnormalized) probability of event `i`.
So to ge... |
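Concretely, the samples have to be binned into (possibly unnormalised) probabilities over shared bins first; a sketch, where the add-one smoothing is my own assumption to avoid zero-count bins:

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.RandomState(0)
a = rng.normal(-2.5, 0.1, 10000)
b = rng.normal(-2.5, 0.1, 10000)

bins = np.linspace(-3.0, -2.0, 50)       # shared support for both samples
pk, _ = np.histogram(a, bins=bins)
qk, _ = np.histogram(b, bins=bins)

# entropy(pk, qk) normalises internally and returns KL(pk || qk).
kl = entropy(pk + 1, qk + 1)             # +1 smoothing avoids zero bins
print(round(kl, 3))                      # small, since a and b match
```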
How do I import FileNotFoundError from Python 3? | 26,745,283 | 9 | 2014-11-04T21:22:47Z | 26,745,443 | 16 | 2014-11-04T21:33:24Z | [
"python",
"python-2.7",
"exception",
"python-3.x"
] | I am currently using Python 2 on a project that needs a Python 3 built-in exception: `FileNotFoundError`. How do I do it? | You can of course define any exceptions you want.
But they're not going to do you any good. The whole point of `FileNotFoundError` is that any Python operation that runs into a file-not-found error will raise that exception. Just defining your own exception won't make that true. All you're going to get is an `OSError`... |
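The usual Python 2 workaround is therefore to catch `IOError`/`OSError` and inspect `errno`; a sketch that behaves the same under both major versions (the helper name is mine):

```python
import errno

def file_missing(path):
    """True if opening *path* fails specifically with 'no such file'."""
    try:
        with open(path):
            return False
    except (IOError, OSError) as exc:    # Python 2 has no FileNotFoundError
        if exc.errno == errno.ENOENT:
            return True
        raise                            # some other I/O problem

print(file_missing("/no/such/file.txt"))  # True
```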
Using an OrderedDict in **kwargs | 26,748,097 | 14 | 2014-11-05T01:10:48Z | 26,748,181 | 12 | 2014-11-05T01:21:00Z | [
"python",
"kwargs"
] | Is it possible to pass an OrderedDict instance to a function which uses the `**kwargs` syntax and retain the ordering?
What I'd like to do is :
```
def I_crave_order(**kwargs):
for k, v in kwargs.items():
print k, v
example = OrderedDict([('first', 1), ('second', 2), ('third', -1)])
I_crave_order(**exam... | No, it is not possible. Or, rather, it is possible, but the OrderedDict is just going to get turned into a dict anyway.
---
The first thing to realize is that the value you pass in `**example` does not automatically become the value in `**kwargs`. Consider this case, where `kwargs` will only have part of `example`:
... |
How do you set up neo4j to work with Google Compute Engine? | 26,751,346 | 2 | 2014-11-05T06:57:01Z | 26,752,772 | 11 | 2014-11-05T08:39:56Z | [
"python",
"neo4j",
"google-compute-engine"
] | I'm wondering how one would get neo4j to work with Google Compute Engine. Has anybody done this? What problems did you encounter? | Here you go,
## Basic Setup
* Install and setup [`gcloud`](https://cloud.google.com/sdk/gcloud/)
* Install [py2neo](http://book.py2neo.org/en/latest/)
* Create your GCE Instance (<https://console.developers.google.com/project/PROJECT_APPID/compute/instancesAdd>) using image (debian-7-wheezy-v20141021, Debian GNU/Linu... |
django celery beat DBAccessError | 26,756,166 | 6 | 2014-11-05T11:31:45Z | 26,756,289 | 14 | 2014-11-05T11:37:48Z | [
"python",
"celery",
"django-celery"
] | I am running django+celery with celerybeat, and i am getting this error
```
.../local/lib/python2.7/site-packages/celery/beat.py", line 367, in setup_schedule
writeback=True)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/py... | I asked too soon!
Answering my own question in case anyone else faces the same issue.
The issue was that I did not have write permission in the folder my django project was running in.
from the documentation (<http://celery.readthedocs.org/en/latest/userguide/periodic-tasks.html#starting-the-scheduler>)
> Beat needs... |
PsychoPy sending triggers on 64bit OS | 26,762,015 | 4 | 2014-11-05T16:20:25Z | 26,889,541 | 8 | 2014-11-12T14:37:46Z | [
"python",
"triggers",
"64bit",
"psychopy"
] | I have a problem with sending triggers for eeg recording using PsychoPy standalone v1.81.00 on a Win7 64bit OS. I followed the descriptions [here](https://groups.google.com/forum/#!searchin/psychopy-users/trigger$2064/psychopy-users/hbIO2wHK1KU/uDUrjzRuXTQJ) and don't get any (more) errors. The triggers, however, don't... | I managed to solve the problem. I'm not entirely sure which step(s) actually did the trick, but I recommend the following:
**Download and install [LPT Test Utility](http://www.xlentelectronics.nl/LPTTest/LPTTestUtilx64r.htm) on your presentation computer.**
At first, this program installs the `inpout32.dll` automatica... |
How to test the value of all items in a container? | 26,762,081 | 3 | 2014-11-05T16:23:33Z | 26,762,115 | 8 | 2014-11-05T16:25:21Z | [
"python",
"validation",
"python-3.x",
"containers"
] | Let's say I have a container such as a dictionary or a list. What is the Python way to test if all of the values of the container are equal to a given value (such as `None`)?
My naive implementation is to just use a boolean flag like I was taught to do in C so the code could look something like.
```
a_dict = {
"k... | You can still use `all` if you add in a [generator expression](https://docs.python.org/3/reference/expressions.html#grammar-token-generator_expression):
```
if all(x is None for x in a_dict.values()):
```
Or, with an arbitrary value:
```
if all(x == value for x in a_dict.values()):
``` |
Sorl-thumbnail generates black square instead of image | 26,762,180 | 5 | 2014-11-05T16:27:50Z | 27,829,124 | 8 | 2015-01-07T21:40:04Z | [
"python",
"django",
"sorl-thumbnail"
] | I'm developing my project inside a Vagrant VM, the software version I'm using is:
* Ubuntu 12.04
* Django 1.6
* Sorl Thumbnail 11.12
* Pillow 2.5.3
I have some pictures in the path `/var/www/django/my_project/media/icons` and I have a model with an **ImageField** pointing to that path.
I have also `THUMBNAIL_DEBUG =... | Finally solved !
To solve the issue with the black background:
* I updated **sorl-thumbnail** to `12.2`
* Added this 2 lines to settings.py:
+ `THUMBNAIL_COLORSPACE = None`
+ `THUMBNAIL_PRESERVE_FORMAT = True`
* Restart thumbnail database with `python manage.py thumbnail clear_delete_all` |
Convert Pandas Column to DateTime | 26,763,344 | 15 | 2014-11-05T17:24:34Z | 26,763,793 | 35 | 2014-11-05T17:50:27Z | [
"python",
"datetime",
"pandas"
] | I have one field in a pandas DataFrame that was imported as string format.
It should be a datetime variable.
How do I convert it to a datetime column and then filter based on date.
Example:
* DataFrame Name: **raw\_data**
* Column Name: **Mycol**
* Value
Format in Column: **'05SEP2014:00:00:00.000'** | Use the [`to_datetime`](http://pandas.pydata.org/pandas-docs/dev/timeseries.html#converting-to-timestamps) function, specifying a [format](http://strftime.org/) to match your data.
```
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
``` |
Convert Pandas Column to DateTime | 26,763,344 | 15 | 2014-11-05T17:24:34Z | 26,763,810 | 12 | 2014-11-05T17:51:24Z | [
"python",
"datetime",
"pandas"
] | I have one field in a pandas DataFrame that was imported as string format.
It should be a datetime variable.
How do I convert it to a datetime column and then filter based on date.
Example:
* DataFrame Name: **raw\_data**
* Column Name: **Mycol**
* Value
Format in Column: **'05SEP2014:00:00:00.000'** | You can use the DataFrame method [`.apply()`](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.apply.html) to operate on the values in Mycol:
```
>>> df = pd.DataFrame(['05SEP2014:00:00:00.000'],columns=['Mycol'])
>>> df
Mycol
0 05SEP2014:00:00:00.000
>>> import datetime as dt
>... |
Why is -0.0 not the same as 0.0? | 26,764,972 | 5 | 2014-11-05T19:00:26Z | 26,765,083 | 11 | 2014-11-05T19:06:14Z | [
"python",
"math",
"floating-point"
] | I could be missing something fundamental, but consider this interpreter session1:
```
>>> -0.0 is 0.0
False
>>> 0.0 is 0.0
True
>>> -0.0 # The sign is even retained in the output. Why?
-0.0
>>>
```
You would think that the Python interpreter would realize that `-0.0` and `0.0` are the same number. In fact, it compa... | In IEEE754, the format of floating point numbers, the sign is a separate bit. So -0.0 and 0.0 are different by this bit.
Integers use two's complement to represent negative numbers; that's why there is only one `0`.
Use `is` only if you really want to compare instances of objects. Otherwise, especially for number... |
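The distinction only matters for identity or sign inspection; numeric equality treats the two zeros as the same value. A minimal sketch:

```python
import math

# Equality ignores the sign bit, so == treats the two zeros as equal.
assert -0.0 == 0.0

# The sign bit is still there; math.copysign can reveal it.
assert math.copysign(1.0, -0.0) == -1.0
assert math.copysign(1.0, 0.0) == 1.0
print("both zeros compare equal, but the signs differ")
```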
Why is -0.0 not the same as 0.0? | 26,764,972 | 5 | 2014-11-05T19:00:26Z | 26,765,106 | 8 | 2014-11-05T19:07:37Z | [
"python",
"math",
"floating-point"
] | I could be missing something fundamental, but consider this interpreter session1:
```
>>> -0.0 is 0.0
False
>>> 0.0 is 0.0
True
>>> -0.0 # The sign is even retained in the output. Why?
-0.0
>>>
```
You would think that the Python interpreter would realize that `-0.0` and `0.0` are the same number. In fact, it compa... | The IEEE Standard for Floating-Point Arithmetic ([IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)) defines the inclusion of [signed zeroes](http://en.wikipedia.org/wiki/Signed_zero). In theory they allow you to distinguish between negative number underflow and positive number [underflow](http://en.wikipedia... |
Python Pandas Pivot - Why Fails | 26,765,041 | 5 | 2014-11-05T19:04:10Z | 26,765,257 | 7 | 2014-11-05T19:16:14Z | [
"python",
"python-2.7",
"pandas"
] | I have tried for a while to get this to wrk and I can't - I read the documentation and I must be misunderstanding something
I have a Data Frame in long format and I want to make it wide - this is quite common. But I get an error
```
from pandas import DataFrame
data = DataFrame({'value' : [1,2,3,4,5,6,7,8,9,10,11,12... | It looks like you were trying to pass 'group' as the `index`, which is why the pivot fails.
It should be:
```
data.pivot(data.index, 'group')
```
or,
```
# the format is pivot(index=None, columns=None, values=None)
data.pivot(index=data.index, columns='group')
```
However I'm not entirely sure what expected out... |
What are the benefits / drawbacks of a list of lists compared to a numpy array of OBJECTS with regards to MEMORY? | 26,767,694 | 5 | 2014-11-05T21:43:08Z | 26,768,083 | 7 | 2014-11-05T22:08:16Z | [
"python",
"arrays",
"numpy"
] | I'm trying to understand the memory and other overhead implications that using `numpy` lists would have for arrays of **`dtype`** `object` compared to lists of lists.
**Does this change with dimensionality?** eg 2D vs 3D vs N-D.
Some of the the benefits I can think of when using `numpy` arrays are that things like **... | I'm going to answer your primary question, and leave the others (performance of transpose, etc.) out. So:
> I'm trying to understand the memory and other overhead implications that using numpy lists would have ⦠Just to clarify I'm interested in the case where the numpy array type is `object` not a `float`, `double`... |
Why is numpy.power slower for integer exponents? | 26,770,996 | 7 | 2014-11-06T02:58:26Z | 26,780,128 | 11 | 2014-11-06T12:59:46Z | [
"python",
"performance",
"numpy",
"numeric"
] | I chose these numbers randomly, but these results seem to be consistent --- a float exponent is 25%-50% faster than an integer one. How are these handled differently?
```
In [209]: %timeit -n 100000 -r 100 np.power(3.71242, 7)
100000 loops, best of 100: 3.45 µs per loop
In [210]: %timeit -n 100000 -r 100 np.power(3.... | `np.power` is a [universal function](http://docs.scipy.org/doc/numpy/reference/ufuncs.html) (ufunc). These functions can be used on scalars and arrays which have a variety of different datatypes, but must first check the type of input values so that they can determine which internal loop to use to generate suitable out... |
Replicating rows in a pandas data frame by a column value | 26,777,832 | 7 | 2014-11-06T11:01:59Z | 26,778,637 | 9 | 2014-11-06T11:42:27Z | [
"python",
"pandas"
] | I want to replicate rows in a Pandas Dataframe. Each row should be repeated n times, where n is a field of each row.
```
import pandas as pd
what_i_have = pd.DataFrame(data={
'id': ['A', 'B', 'C'],
'n' : [ 1, 2, 3],
'v' : [ 10, 13, 8]
})
what_i_want = pd.DataFrame(data={
'id': ['A', 'B', 'B', 'C', 'C... | You could use `np.repeat` to get the repeated indices and then use that to index into the frame:
```
>>> df2 = df.loc[np.repeat(df.index.values,df.n)]
>>> df2
id n v
0 A 1 10
1 B 2 13
1 B 2 13
2 C 3 8
2 C 3 8
2 C 3 8
```
After which there's only a bit of cleaning up to do:
```
>>> df2 = df2... |
Create a virtualenv with both python2 and python3 | 26,782,537 | 7 | 2014-11-06T14:59:57Z | 27,011,757 | 8 | 2014-11-19T08:02:46Z | [
"python",
"python-3.x",
"virtualenv",
"virtualenvwrapper"
] | I tried to use virtualenvwrapper to create a virtualenv with both python2 and python3
Per [virtualenv with python2 and python3 via Homebrew](http://stackoverflow.com/questions/12566064/virtualenv-with-python2-and-python3-via-homebrew) I hoped this would work:
(The name of the virtualenv is 'double')
```
mkvirtualenv... | Sorry, virtualenv is designed to support a single interpreter version.
If you need to use several python versions on the same codebase please create separate virtual environments. |
How to make a set of lists | 26,783,326 | 4 | 2014-11-06T15:36:13Z | 26,783,343 | 7 | 2014-11-06T15:37:15Z | [
"python"
] | I have a list of lists like this:
```
i = [[1, 2, 3], [2, 4, 5], [1, 2, 3], [2, 4, 5]]
```
I would like to get a list containing "unique" lists (based on their elements) like:
```
o = [[1, 2, 3], [2, 4, 5]]
```
I cannot use `set()` as there are non-hashable elements in the list. Instead, I am doing this:
```
o = [... | You can create a set of tuples; a set of lists is not possible because of the non-hashable elements, as you mentioned.
```
>>> l = [[1, 2, 3], [2, 4, 5], [1, 2, 3], [2, 4, 5]]
>>> set(tuple(i) for i in l)
{(1, 2, 3), (2, 4, 5)}
``` |
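If the result needs to be a list of lists again (as in the question's `o`), the tuples can be converted back; a small sketch:

```python
l = [[1, 2, 3], [2, 4, 5], [1, 2, 3], [2, 4, 5]]

# Deduplicate via hashable tuples, then convert each tuple back to a list.
o = [list(t) for t in set(tuple(i) for i in l)]

# A set has no defined order, so sort for a predictable result.
o.sort()
print(o)  # [[1, 2, 3], [2, 4, 5]]
```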
Normalizing a list of numbers in Python | 26,785,354 | 8 | 2014-11-06T17:12:59Z | 26,785,464 | 17 | 2014-11-06T17:17:53Z | [
"python",
"probability"
] | I need to normalize a list of values to fit in a probability distribution, i.e. between 0.0 and 1.0.
I understand *how* to normalize, but was curious if Python had a function to automate this.
I'd like to go from:
```
raw = [0.07, 0.14, 0.07]
```
to
```
normed = [0.25, 0.50, 0.25]
``` | Use :
```
norm = [float(i)/sum(raw) for i in raw]
```
to ensure that the sum is always 1.0 (or as close to it as possible).
Use
```
norm = [float(i)/max(raw) for i in raw]
```
to normalise against the maximum instead. |
How to return indices of values between two numbers in numpy array | 26,786,946 | 4 | 2014-11-06T18:38:51Z | 26,787,042 | 7 | 2014-11-06T18:43:29Z | [
"python",
"numpy"
] | I would like to return the indices of all the values in a python numpy array that are between two values. Here is my code:
`inEllipseIndFar = np.argwhere(excessPathLen * 2 < ePL < excessPathLen * 3)`
But it returns an error:
```
inEllipseIndFar = np.argwhere((excessPathLen * 2 < ePL < excessPathLen * 3).all())
Valu... | Since `> < =` return masked arrays, you can multiply them together to get the effect you are looking for (essentially the logical AND):
```
>>> import numpy as np
>>> A = 2*np.arange(10)
array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18])
>>> idx = (A>2)*(A<8)
>>> np.where(idx)
array([2, 3])
``` |
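Multiplication works, but the more idiomatic spelling is the element-wise `&` operator (note the parentheses, which operator precedence makes mandatory):

```python
import numpy as np

A = 2 * np.arange(10)

# & combines the two boolean masks element-wise.
idx = np.where((A > 2) & (A < 8))[0]
print(idx)  # [2 3]
```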
pandas to_csv first extra column remove, how to? | 26,786,960 | 14 | 2014-11-06T18:39:38Z | 26,787,032 | 22 | 2014-11-06T18:43:00Z | [
"python",
"csv",
"pandas"
] | I'm trying to create csv with pandas , but when I export to csv it gave me one extra row
```
d = {'one' : pd.Series([1., 2., 3.]),'two' : pd.Series([1., 2., 3., 4.])}
df0_fa = pd.DataFrame(d)
df_csv = df0_fa.to_csv('revenue/data/test.csv',mode = 'w')
```
so my result would be :
```
,one,two
0,1.0,1.0
1,2.0,2.0
2,3.0... | What you are seeing is the index column. Just set index=False:
```
df_csv = df0_fa.to_csv('revenue/data/test.csv',mode = 'w', index=False)
``` |
super() and @staticmethod interaction | 26,788,214 | 11 | 2014-11-06T19:52:34Z | 26,807,879 | 15 | 2014-11-07T18:38:13Z | [
"python",
"python-2.7",
"static-methods",
"super"
] | Is super() not meant to be used with staticmethods?
When I try something like
```
class First(object):
@staticmethod
def getlist():
return ['first']
class Second(First):
@staticmethod
def getlist():
l = super(Second).getlist()
l.append('second')
return l
a = Second.getlist()
print a
```
I g... | The short answer to
> Am I calling super(type) incorrectly here or is there something I'm missing?
is: yes, you're calling it incorrectly... AND (indeed, *because*) there is something you're missing.
But don't feel bad; this is an extremely difficult subject.
The [documentation](https://docs.python.org/2/library/fu... |
More Pythonic way to define custom __eq__ method using try, assert, except | 26,789,825 | 2 | 2014-11-06T21:36:01Z | 26,790,016 | 7 | 2014-11-06T21:46:51Z | [
"python"
] | The code below works(EDIT: actually, turns out it doesn't!), but I don't like the hanging `return True` statement that appears after the `try: except:` block.
```
class MySlottedClass(object):
def __new__(klass, **slots):
klass.__slots__ = []
for k in slots:
klass.__slots__.append(k)
... | The `return True` is fine. I think the bigger problem is using an `assert` for flow control. Asserts do not run at all if the user passes `-O` to `python` on the command line. You should write something more like this:
```
for slot in self.__slots__:
if not hasattr(other, slot) or getattr(self, slot) != getattr(ot... |
Use str.format() to access object attributes | 26,791,908 | 10 | 2014-11-07T00:17:17Z | 26,791,923 | 17 | 2014-11-07T00:18:44Z | [
"python",
"string",
"string-formatting"
] | I have a Python object with attributes `a`, `b`, `c`.
I still use old string formatting, so I'd normally print these manually:
```
print 'My object has strings a=%s, b=%s, c=%s' % (obj.a, obj.b, obj.c)
```
Lately, my strings have been getting super long, and I'd much rather be able to simply pass the object into a ... | You can use the `.attribute_name` notation inside the format fields themselves:
```
print 'My object has strings a={0.a}, b={0.b}, c={0.c}'.format(obj)
```
Below is a demonstration:
```
>>> class Test(object):
... def __init__(self, a, b, c):
... self.a = a
... self.b = b
... self.c = c
.... |
decorate __call__ with @staticmethod | 26,793,600 | 4 | 2014-11-07T03:32:10Z | 26,793,649 | 8 | 2014-11-07T03:39:09Z | [
"python",
"decorator",
"static-methods",
"python-decorators"
] | Why can't I make a class' \_\_call\_\_ method static using the @staticmethod decorator?
```
class Foo(object):
@staticmethod
def bar():
return 'bar'
@staticmethod
def __call__():
return '__call__'
print Foo.bar()
print Foo()
```
outputs
```
bar
<__main__.Foo object at 0x7fabf93c89d0... | You need to override `__call__` on the metaclass. The special methods defined in a class are for its instances, to change a class's special methods you need to change them in its class, i.e metaclass. (When you call `Foo()` usually the order is: `Meta.__call__()` --> `Foo.__new__()` --> `Foo.__init__()`, only if they r... |
Authentication with Azure Active Directory - how to accept user credentials programmatically | 26,794,759 | 2 | 2014-11-07T05:35:02Z | 26,795,582 | 8 | 2014-11-07T06:43:20Z | [
"python",
"authentication",
"azure",
"oauth",
"azure-active-directory"
] | Is there any way to login via web application or web api to Azure Active Directory (with AD credentials) using my own username and password page which is hosted outside of Azure?
From my investigation it seems there is no programmatic way to send username and password to authenticate users with Azure AD (if you hosted... | The [Resource Owner Password Credentials Grant](https://tools.ietf.org/html/rfc6749#section-4.3) (`grant_type=password`) flow **is** supported by Azure Active Directory. However, before using it, consider if it is truly required. As it says in the OAuth 2.0 RFC:
> The resource owner password credentials (i.e., usernam... |
Does Python import order matter | 26,804,689 | 5 | 2014-11-07T15:35:52Z | 26,804,809 | 12 | 2014-11-07T15:41:34Z | [
"python"
] | I read [here](http://stackoverflow.com/questions/20762662/whats-the-correct-way-to-sort-python-import-x-and-from-x-import-y-statement) about sorting your `import` statements in Python, but what if the thing you are importing needs dependencies that have not been imported yet? Is this the difference between compiled lan... | Import order does not matter. If a module relies on other modules, it needs to import them itself. Python treats each `.py` file as a self-contained unit as far as what's visible in that file.
(Technically, changing import order could change behavior, because modules can have initialization code that runs when they ar... |
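That caching behaviour can be demonstrated: a module's top-level code runs only on the first import. A sketch using a throwaway module file (the module name `demo_mod` is invented for illustration):

```python
import os
import sys
import tempfile

# Write a tiny module whose top-level code has a visible side effect.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'demo_mod.py'), 'w') as f:
    f.write("print('initialising demo_mod')\nvalue = 42\n")

sys.path.insert(0, tmp)

import demo_mod          # prints 'initialising demo_mod'
import demo_mod          # cached in sys.modules: prints nothing
print(demo_mod.value)    # 42
```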
subprocess.Popen(): OSError: [Errno 8] Exec format error in python? | 26,807,937 | 3 | 2014-11-07T18:42:21Z | 30,551,364 | 11 | 2015-05-30T21:02:08Z | [
"python",
"linux",
"shell"
] | Yesterday, I wrote and ran a python `script` which executes a `shell` using `subprocess.Popen(command.split())` where command is string which constitutes `.sh` script and its argument. This script was working fine till yesterday. Today, I ran the same script and now i am contiguously hitting by this error.
```
p=subpr... | I solved this by putting this line at the top of the called shell script:
`#!/bin/sh`
That will guarantee that the system always uses the correct interpreter when running your script. |
Why does PyCharm use 120 Character Lines even though PEP8 Specifies 79? | 26,808,681 | 14 | 2014-11-07T19:31:29Z | 26,808,800 | 18 | 2014-11-07T19:38:39Z | [
"python",
"pycharm",
"pep8"
] | PEP8 clearly specifies 79 characters, however, PyCharm defaults to 120 and gives me the warning "PEP8: line too long (... > 120 characters)".
Did previous versions of PEP8 use 120 and PyCharm not update its PEP8 checker? I couldn't find any previous versions of the PEP8 Guide, however, I can easily find previous versi... | PyCharm is built on top of IntelliJ. IntelliJ has a default line length of 120 characters.
This is probably because you can't fit a common Java name like: `@annotated public static MyObjectFactoryFactory enterpriseObjectFactoryFactoryBuilderPattern {` in a mere 80 character line. (I'm poking fun, but Java names to ten... |
Split string at commas except when in bracket environment | 26,808,913 | 6 | 2014-11-07T19:44:47Z | 26,809,037 | 9 | 2014-11-07T19:52:08Z | [
"python",
"regex",
"parsing",
"pyparsing"
] | I would like to split a Python multiline string at its commas, except when the commas are inside a bracketed expression. E.g., the string
```
{J. Doe, R. Starr}, {Lorem
{i}psum dolor }, Dol. sit., am. et.
```
Should be split into
```
['{J. Doe, R. Starr}', '{Lorem\n{i}psum dolor }', 'Dol. sit.', 'am. et.']
```
This... | Write your own custom split-function:
```
input_string = """{J. Doe, R. Starr}, {Lorem
{i}psum dolor }, Dol. sit., am. et."""
expected = ['{J. Doe, R. Starr}', '{Lorem\n{i}psum dolor }', 'Dol. sit.', 'am. et.']
def split(s):
parts = []
bracket_level = 0
current = []
# trick to remove special... |
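The snippet above is cut off; a self-contained version of the same idea — tracking bracket depth and only splitting on top-level commas — might look like this (the whitespace handling is my own assumption, not the original author's exact code):

```python
def split_outside_brackets(s):
    parts, current, depth = [], [], 0
    for ch in s:
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
        if ch == ',' and depth == 0:
            # Top-level comma: close off the current part.
            parts.append(''.join(current).strip())
            current = []
        else:
            current.append(ch)
    parts.append(''.join(current).strip())
    return parts

s = '{J. Doe, R. Starr}, {Lorem\n{i}psum dolor }, Dol. sit., am. et.'
print(split_outside_brackets(s))
# ['{J. Doe, R. Starr}', '{Lorem\n{i}psum dolor }', 'Dol. sit.', 'am. et.']
```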
How to structure a Python module to limit exported symbols? | 26,813,545 | 2 | 2014-11-08T03:54:57Z | 26,813,632 | 7 | 2014-11-08T04:08:51Z | [
"python",
"module"
] | I am writing a Python module whose purpose is to export a single data structure. I believe this means my module should export a single symbol (e.g. `foo`), with all its other symbols being underscore-prefixed.
Generating the data structure takes a fair amount of code - how should I structure the module to ensure that ... | Note that using the underscore only prevents that name from being imported with `from module import *` (as [documented](https://docs.python.org/2/reference/simple_stmts.html#the-import-statement)). It doesn't make the name "private" in any real way. People can still see everything in your module by doing `import module... |
Running cron python jobs within docker | 26,822,067 | 39 | 2014-11-08T21:04:01Z | 26,830,468 | 7 | 2014-11-09T16:36:51Z | [
"python",
"cron",
"docker"
] | I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
```
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
```
My cron file is my-crontab
```
* * * * * /test.py > /dev/console
```
and m... | Adding crontab fragments in `/etc/cron.d/` instead of using root's `crontab` might be preferable.
This would:
* Let you add additional cron jobs by adding them to that folder.
* Save you a few layers.
* Emulate how Debian distros do it for their own packages.
Observe that the format of those files is a bit different... |
Running cron python jobs within docker | 26,822,067 | 39 | 2014-11-08T21:04:01Z | 26,958,348 | 23 | 2014-11-16T15:03:03Z | [
"python",
"cron",
"docker"
] | I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
```
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
```
My cron file is my-crontab
```
* * * * * /test.py > /dev/console
```
and m... | Several issues that I faced while trying to get a cron job running in a docker container were:
1. time in the docker container is in UTC not local time;
2. the docker environment is not passed to cron;
3. as Thomas noted, cron logging leaves a lot to be desired and accessing it through docker requires a docker-based s... |
Running cron python jobs within docker | 26,822,067 | 39 | 2014-11-08T21:04:01Z | 29,790,710 | 13 | 2015-04-22T07:33:05Z | [
"python",
"cron",
"docker"
] | I would like to run a python cron job inside of a docker container in detached mode. My set-up is below:
My python script is test.py
```
#!/usr/bin/env python
import datetime
print "Cron job has run at %s" %datetime.datetime.now()
```
My cron file is my-crontab
```
* * * * * /test.py > /dev/console
```
and m... | Here is a complement to rosksw's answer.
There is no need to do some string replacement in the crontab file in order to pass environment variables to the cron jobs.
It is simpler to store the environment variables in a file when running the contrainer, then load them from this file at each cron execution. I found the t... |
Brick Wall Algorithm | 26,824,456 | 2 | 2014-11-09T02:52:40Z | 26,824,481 | 8 | 2014-11-09T02:57:00Z | [
"python",
"algorithm"
] | I am doing this problem on SingPath -
> We want to make a row of bricks that is exactly goal inches long. We have a number of small bricks (1 inch each) and big bricks (5 inches each). Return True if it is possible to make the goal by choosing from the given bricks. This is a little harder than it looks and can be don... | You have your calculations wrong; you need to see if *some* of the big and small bricks can be used to build the required row; you are only determining if the bricks given can produce a row *at least* that long.
You cannot build a row of length 13, because you don't have enough small bricks; with two large bricks you'd need... |
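Following that reasoning, one possible implementation (my own sketch, with the signature assumed from similar CodingBat-style problems): use as many big bricks as actually help, then check whether the small bricks can cover the remainder.

```python
def make_bricks(small, big, goal):
    # Use at most `big` five-inch bricks, but never overshoot the goal,
    # then see whether the small bricks can fill what is left.
    remainder = goal - min(big, goal // 5) * 5
    return remainder <= small

print(make_bricks(3, 1, 8))    # True  (1 big + 3 small)
print(make_bricks(3, 1, 9))    # False (would need 4 small)
print(make_bricks(2, 2, 13))   # False (13 - 10 leaves 3, only 2 small)
```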
Extract Number from String - Python | 26,825,729 | 4 | 2014-11-09T06:39:50Z | 26,825,781 | 8 | 2014-11-09T06:48:42Z | [
"python",
"string",
"python-3.x"
] | I am new to `Python` and I have a String, I want to extract Number from that one
```
str1 = "3158 reviews"
print (re.findall('\d+', str1 ))
```
Output is `['4', '3']`
I want to get `3158` only, as an Integer preferably, not as List. | This code works fine. There is definitely some other problem:
```
>>> str1 = "3158 reviews"
>>> print (re.findall('\d+', str1 ))
['3158']
``` |
Extract Number from String - Python | 26,825,729 | 4 | 2014-11-09T06:39:50Z | 26,825,833 | 15 | 2014-11-09T06:57:16Z | [
"python",
"string",
"python-3.x"
] | I am new to `Python` and I have a String, I want to extract Number from that one
```
str1 = "3158 reviews"
print (re.findall('\d+', str1 ))
```
Output is `['4', '3']`
I want to get `3158` only, as an Integer preferably, not as List. | You can `filter` the string for **digits** using the `str.isdigit` method:
```
>>> int(filter(str.isdigit, str1))
3158
``` |
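Note that on Python 3 (which the question is tagged with) `filter` returns an iterator, so calling `int()` on it directly fails; the characters need to be joined first:

```python
str1 = "3158 reviews"

# filter() yields the digit characters; join them before converting.
number = int(''.join(filter(str.isdigit, str1)))
print(number)  # 3158
```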
'collectstatic' command fails when WhiteNoise is enabled | 26,829,435 | 12 | 2014-11-09T14:50:16Z | 26,908,206 | 10 | 2014-11-13T11:47:21Z | [
"python",
"django",
"python-3.x",
"static-files"
] | I'm trying to serve static files through [WhiteNoise](https://github.com/evansd/whitenoise/blob/master/docs/django.rst) as per [Heroku](https://devcenter.heroku.com/articles/django-assets)'s recommendation. When I run `collectstatic` in my development environment, this happens:
```
Post-processing 'css/iconic/open-ico... | The problem here is that `css/iconic/open-iconic-bootstrap.css` is referencing a file, `open-iconic.eot`, which doesn't exist in the expected location.
When you run `collectstatic` with that storage backend Django attempts to rewrite all the URLs in your CSS files so they reference the files by their new names e.g, `c... |
How can I set Cython compiler flags when using pyximport? | 26,833,947 | 4 | 2014-11-09T22:14:59Z | 26,834,595 | 7 | 2014-11-09T23:33:29Z | [
"python",
"cython"
] | This question ([How does one overwrite the default compile flags for Cython when building with distutils?](http://stackoverflow.com/questions/8236648/how-does-one-overwrite-the-default-compile-flags-for-cython-when-building-with-d/16402557#16402557)) describes how to set default Cython flags when using distutils.
But ... | You should use a `.pyxbld` file, see for example [this question](http://stackoverflow.com/questions/7620003/how-do-you-tell-pyximport-to-use-the-cython-cplus-option).
For a file named `foo.pyx`, you would make a `foo.pyxbld` file. The following would give extra optimization args:
```
def make_ext(modname, pyxfilename)... |
Python multiple inheritance questions | 26,834,201 | 4 | 2014-11-09T22:45:51Z | 26,834,268 | 9 | 2014-11-09T22:53:09Z | [
"python",
"inheritance",
"multiple-inheritance"
] | Sorry if this question has been asked before, I could not find the answer while searching other questions.
I'm new to Python and I'm having issues with multiple inheritance. Suppose I have 2 classes, B and C, which inherit from the same class A, which are defined as follows:
```
class B(A):
def foo():
...... | Resisting the temptation to say "avoid this situation in the first place", one (not necessarily elegant) solution could be to wrap the methods explicitly:
```
class A: pass
class B( A ):
def foo( self ): print( 'B.foo')
def bar( self ): print( 'B.bar')
class C( A ):
def foo( self ): print( 'C.foo')
d... |
Pandas Replace NaN with blank/empty string | 26,837,998 | 16 | 2014-11-10T06:29:26Z | 26,838,140 | 13 | 2014-11-10T06:40:47Z | [
"python",
"pandas",
null
] | I have a Pandas Dataframe as shown below:
```
1 2 3
0 a NaN read
1 b l unread
2 c NaN read
```
I want to remove the NaN values with an empty string so that it looks like so:
```
1 2 3
0 a "" read
1 b l unread
2 c "" read
``` | ```
import numpy as np
df1 = df.replace(np.nan,' ', regex=True)
```
This might help. It will replace all NaNs with a blank space. |
Pandas Replace NaN with blank/empty string | 26,837,998 | 16 | 2014-11-10T06:29:26Z | 28,390,992 | 40 | 2015-02-08T05:44:58Z | [
"python",
"pandas",
null
] | I have a Pandas Dataframe as shown below:
```
1 2 3
0 a NaN read
1 b l unread
2 c NaN read
```
I want to remove the NaN values with an empty string so that it looks like so:
```
1 2 3
0 a "" read
1 b l unread
2 c "" read
``` | Slightly shorter is:
```
df = df.fillna('')
```
This will fill na's (e.g. NaN's) with ''.
Edit:
If you want to fill a single column, you can use:
```
df.column1 = df.column1.fillna('')
``` |
How to find all unused methods of a class in PyCharm? | 26,839,721 | 6 | 2014-11-10T08:39:58Z | 28,640,287 | 7 | 2015-02-21T00:01:23Z | [
"python",
"intellij-idea",
"pycharm"
] | I have a class named `Article` in my project. I want to find all its methods that are unused in the project. For a particular method I can press `Alt+F7` and see where it's used, and if it's not used anywhere, I can delete it safely. Is it possible to automate the process and find all methods of the class that are unus... | PyCharm doesn't offer this feature since «it's not possible to reliably determine that a method is unused, because there are simply too many ways to call it dynamically.»[ref](http://forum.jetbrains.com/thread/PyCharm-1212)
But there's another way: vulture can find most of the dead code in a project ([ref](https://pypi.p... |
DockerDaemonConnectionError when setting Google Cloud Managed VM in Ubuntu | 26,842,682 | 7 | 2014-11-10T11:23:51Z | 26,899,849 | 7 | 2014-11-13T01:31:27Z | [
"python",
"google-compute-engine",
"google-cloud-platform"
] | I'm trying to install Google Cloud Managed VM in Ubuntu according to this manuals: [[1]](https://cloud.google.com/appengine/docs/python/managed-vms/), [[2]](https://cloud.google.com/appengine/docs/python/managed-vms/sdk)
I've installed Docker following the [Docker installation guide](https://docs.docker.com/installati... | I finally got `gcloud preview app setup-managed-vms` to work on ubuntu. Here's what I had to do:
1. get docker 1.3.0, not 1.3.1. `sudo apt-get install docker.io` installed an old version of docker on my machine, so I had to remove that first. But `curl -sSL https://get.docker.com/ubuntu/ | sudo sh` installs version 1...
Python imports relative path | 26,849,832 | 8 | 2014-11-10T17:51:30Z | 26,850,537 | 9 | 2014-11-10T18:34:24Z | [
"python",
"import",
"path",
"relative"
] | I've got a project where I would like to use some python classes located in other directories.
Example structure:
```
/dir
+../subdirA
+../subdirB
+../mydir
```
The absolute path varies, because this project is run on different machines.
When my python file with *MySampleClass* located in */mydir* is executed, h... | You will need an `__init__.py` in the mydir directory (and it can be empty), then as long as dir is in the sys path, assuming your MySampleClass is in myfile.py and myfile.py is in mydir
```
from mydir.myfile import MySampleClass
```
If you want to import top level functions from a file called util.py that reside in ... |
Opening a SSL socket connection in Python | 26,851,034 | 16 | 2014-11-10T19:04:38Z | 26,851,670 | 35 | 2014-11-10T19:40:39Z | [
"python",
"sockets",
"ssl"
] | I'm trying to establish a secure socket connection in Python, and i'm having a hard time with the SSL bit of it. I've found some code examples of how to establish a connection with SSL, but they all involve key files. The server i'm trying to connect with doesn't need to receive any keys or certificates. My question is... | Ok, I figured out what was wrong. It was kind of foolish of me. I had `two` problems with my code. My first mistake was when specifying the `ssl_version` I put in `TLSv1` when it should have been `ssl.PROTOCOL_TLSv1`. The second mistake was that I wasn't referencing the wrapped socket, instead I was calling the origina... |
Javascript to Django views.py? | 26,855,631 | 3 | 2014-11-11T00:41:38Z | 26,858,442 | 9 | 2014-11-11T05:52:06Z | [
"javascript",
"python",
"django"
] | This may sound simple, but how do I send the data from a Javascript array in my index.html template to my views.py?
When the user clicks a "Recommend" button, my code calls a function that accesses my database and prints a name on the template.
```
def index(request):
if(request.GET.get('Recommend')):
sql... | Alright, so for sending data from the client (JavaScript) to the backend (your Django app) you need to employ something called Ajax, it stands for Asynchronous JavaScript and XML.
Basically what it does is allow you to communicate with your backend services without having to reload the page, which, you w... |
Python+OpenCV 3 - cant use SIFT | 26,855,753 | 5 | 2014-11-11T00:55:25Z | 26,859,318 | 7 | 2014-11-11T06:58:30Z | [
"python",
"opencv",
"sift"
] | I compiled OpenCV 3 & opencv\_contrib from latest source code. Installed it into site-packages folder for Python 2.7. I can follow all of the tutorials at <http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html> except the ones involving SIFT.
Here is the error I get:
```
Traceback (mos... | as of 3.0, SIFT, SURF, BRIEF and FREAK were moved to a seperate [opencv\_contrib repo](https://github.com/Itseez/opencv_contrib).
you will have to download that, add it to your main cmake settings (please see the README there), and rebuild the main opencv repo. after 'make install' your python should have a new cv2.py... |
How to Reduce the time taken to load a pickle file in python | 26,860,051 | 8 | 2014-11-11T07:51:14Z | 26,860,404 | 10 | 2014-11-11T08:16:33Z | [
"python",
"performance",
"pickle"
] | I have created a **dictionary** in python and dumped into pickle. Its size went to 300MB.
Now, I want to **load** the same **pickle** whose data is saved in the form of dictionary.
```
output = open('myfile.pkl', 'rb')
mydict = pickle.load(output)
```
Loading the pickle is taking around **15 seconds**. I want to ... | Try using the [`json` library](https://docs.python.org/library/json.html) instead of `pickle`. This should be an option in your case because you're dealing with a dictionary, which is a relatively simple object.
According to [this website](http://kovshenin.com/2010/pickle-vs-json-which-is-faster/),
> JSON is 25 times ... |
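The trade-off described above can be sanity-checked with a minimal round-trip sketch. The small dictionary here is a hypothetical stand-in for the 300MB one in the question; note also that dumping with `pickle.HIGHEST_PROTOCOL` typically speeds up both dump and load compared to the default protocol:

```python
import json
import pickle

# Hypothetical stand-in for the question's 300 MB dictionary.
mydict = {"alpha": 1, "beta": [2, 3], "gamma": {"nested": True}}

# json is only an option because the values are plain dicts/lists/ints;
# the highest pickle protocol is usually much faster than the default one.
pickled = pickle.dumps(mydict, protocol=pickle.HIGHEST_PROTOCOL)
jsoned = json.dumps(mydict)

# Both formats round-trip to an equal dictionary.
from_pickle = pickle.loads(pickled)
from_json = json.loads(jsoned)
```

Which one loads faster for a real 300MB file depends on the data, so it is worth timing both on the actual dictionary.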
operational error: database is locked | 26,862,809 | 5 | 2014-11-11T10:34:17Z | 26,864,360 | 12 | 2014-11-11T11:57:35Z | [
"python",
"flask"
] | So I know this problem is not new in flask, and people have already asked it before. However I am still facing a problem while executing my database commands in bash as I am new to python.
This is what i did
```
import sqlite3
conn = sqlite3.connect('/home/pjbardolia/mysite/tweet_count.db')
c = conn.cursor()
c.execute... | This is what this error means:
> SQLite is meant to be a lightweight database, and thus can't support a
> high level of concurrency. OperationalError: database is locked errors
> indicate that your application is experiencing more concurrency than
> sqlite can handle in default configuration. This error means that one... |
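A minimal sketch of the usual fix: make sure every connection commits and closes promptly, so no connection holds SQLite's write lock longer than necessary. An in-memory database stands in here for the `tweet_count.db` file in the question:

```python
import sqlite3

# ":memory:" keeps the sketch self-contained; the pattern is identical for a
# file-backed database: commit and close as soon as the work is done.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE tweet_count (tag TEXT, n INTEGER)")
c.execute("INSERT INTO tweet_count VALUES (?, ?)", ("python", 42))
conn.commit()  # releases the write lock so other connections can proceed

rows = c.execute("SELECT tag, n FROM tweet_count").fetchall()
conn.close()
```

If multiple processes must write concurrently, passing a `timeout` to `sqlite3.connect` also helps, since it makes writers wait for the lock instead of failing immediately.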
mysql-python install fatal error | 26,866,147 | 20 | 2014-11-11T13:34:46Z | 31,077,052 | 37 | 2015-06-26T15:31:51Z | [
"python",
"mysql",
"mysql-python"
] | I am trying to pip install the mysql-python connector but it keeps erroring on me. It works fine on my mac and another windows machine but not this one. I have downloaded visual studio c++ and tried it as 32 bit and 64. Does anyone have an idea of how to get around this?
```
_mysql.c(42) : fatal error C1083: Cannot open incl... | for 64-bit windows
* **install using wheel**
```
pip install wheel
```
* **download from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python>**
```
pip install MySQL_python-1.2.5-cp27-none-win_amd64.whl
``` |
Deploying a local django app using openshift | 26,871,381 | 16 | 2014-11-11T18:01:23Z | 26,874,375 | 34 | 2014-11-11T20:54:33Z | [
"python",
"django",
"git",
"openshift"
] | I've built a webapp using django. In order to host it I'm trying to use openshift but am having difficulty in getting anything working. There seems to be a lack of step-by-step guides for this. So far I have git working fine, the app works on the local dev environment and I've successfully created an app on openshift.
Follo... | **Edit**: *Remember this is a platform-dependent answer and since the OpenShift platform serving Django may change, this answer could become invalid. As of Apr 1 2016, this answer remains valid at its whole extent.*
Many times this happened to me and, since I had to mount at least 5 applications, I had to create my ow... |
How can I easily determine if a Boto 3 S3 bucket resource exists? | 26,871,884 | 5 | 2014-11-11T18:29:16Z | 26,871,885 | 9 | 2014-11-11T18:29:16Z | [
"python",
"amazon-web-services",
"boto3"
] | For example, I have this code:
```
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')
# Does it exist???
``` | At the time of this writing there is no high-level way to quickly check whether a bucket exists and you have access to it, but you can make a low-level call to the HeadBucket operation. This is the least expensive way to do this check:
```
from botocore.client import ClientError
try:
s3.meta.client.head_bucket(B... |
How can I easily determine if a Boto 3 S3 bucket resource exists? | 26,871,884 | 5 | 2014-11-11T18:29:16Z | 26,876,807 | 8 | 2014-11-11T23:54:12Z | [
"python",
"amazon-web-services",
"boto3"
] | For example, I have this code:
```
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')
# Does it exist???
``` | ```
>>> import boto3
>>> s3 = boto3.resource('s3')
>>> s3.Bucket('Hello') in s3.buckets.all()
False
>>> s3.Bucket('some-docs') in s3.buckets.all()
True
>>>
``` |
How can I get my contour plot superimposed on a basemap | 26,872,337 | 5 | 2014-11-11T18:54:41Z | 26,885,815 | 8 | 2014-11-12T11:25:10Z | [
"python",
"matplotlib",
"pandas",
"gis",
"matplotlib-basemap"
] | This is a question I asked several months ago and am still struggling to come to a solution. My code gives me a basemap and a contour plot side by side (but printing to file only gives the contour plot), but I want them superimposed. The best solution would the one here <https://gist.github.com/oblakeobjet/7546272> but... | You're almost there, but Basemap can be temperamental, and you have to manage the z-order of plots / map details. Also, you have to transform your lon / lat coordinates to *map projection coordinates* before you plot them using basemap.
Here's a complete solution, which gives the following output. I've changed some co... |
Why does Python provide locking mechanisms if it's subject to a GIL? | 26,873,512 | 7 | 2014-11-11T20:04:04Z | 26,873,766 | 8 | 2014-11-11T20:18:37Z | [
"python",
"multithreading",
"locks",
"gil",
"python-multiprocessing"
] | I'm aware that Python threads can only execute bytecode one at a time, so why would the [threading](https://docs.python.org/2/library/threading.html) library provide locks? I'm assuming race conditions can't occur if only one thread is executing at a time.
The library provides locks, conditions, and semaphores. Is the... | The GIL synchronizes bytecode operations. Only one bytecode can execute at once. But if you have an operation that requires more than one bytecode, you could switch threads between the bytecodes. If you need the operation to be atomic, then you need synchronization above and beyond the GIL.
For example, incrementing ... |
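The incrementing example can be sketched concretely: `counter += 1` is a read-modify-write spanning several bytecodes, so the GIL alone does not make it atomic, while a `threading.Lock` does:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, two threads could both read the old value
        # between the LOAD and STORE bytecodes and lose an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock the final count is deterministically 4 × 10000; without it, updates can be lost depending on where the interpreter switches threads.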
Why does Python have a maximum recursion depth? | 26,873,627 | 6 | 2014-11-11T20:09:52Z | 26,873,666 | 11 | 2014-11-11T20:12:02Z | [
"python",
"recursion",
"stack"
] | Python has a maximum recursion depth, but no maximum iteration depth. Why is recursion restricted? Wouldn't it be more natural to treat recursion like iteration, and not restrict the number of recursive calls?
Let me just say that the source of this issue came from trying to implement a stream (see [this question](htt... | This is not unique to Python, and has to do with each call taking space on the [*call stack*](http://en.wikipedia.org/wiki/Call_stack), and the size of the stack being limited.
Iteration alone does not consume stack space and is therefore not subject to this limit.
Not every recursive call needs to consume stack spac... |
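The limit described above is observable directly. A small sketch using `sys.getrecursionlimit` / `sys.setrecursionlimit` (the limit is lowered temporarily so the demo is quick and safe, then restored):

```python
import sys

def depth(n):
    # Each call consumes a stack frame, so n frames are needed to reach n.
    if n == 0:
        return 0
    return 1 + depth(n - 1)

old_limit = sys.getrecursionlimit()  # typically 1000 by default
sys.setrecursionlimit(200)
try:
    depth(500)        # needs ~500 frames: more than the cap allows
    overflowed = False
except RecursionError:
    overflowed = True
finally:
    sys.setrecursionlimit(old_limit)
```

Raising the limit is possible, but setting it higher than the underlying C stack can support risks crashing the interpreter instead of raising a clean `RecursionError`.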
Why does Python have a maximum recursion depth? | 26,873,627 | 6 | 2014-11-11T20:09:52Z | 26,873,813 | 9 | 2014-11-11T20:21:13Z | [
"python",
"recursion",
"stack"
] | Python has a maximum recursion depth, but no maximum iteration depth. Why is recursion restricted? Wouldn't it be more natural to treat recursion like iteration, and not restrict the number of recursive calls?
Let me just say that the source of this issue came from trying to implement a stream (see [this question](htt... | There are actually a few issues here.
First, as [NPE's answer](http://stackoverflow.com/a/26873666/908494) nicely explains, Python doesn't eliminate tail calls, so many functions that would allow unlimited recursion in, say, Scheme are limited in Python.
Second, again as explained by NPE, calls that can't be elimi... |
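Since CPython does not eliminate tail calls, a function that is tail-recursive in form still hits the recursion limit; the standard workaround is to convert it to a loop by hand. A sketch using factorial as the example:

```python
def fact_recursive(n, acc=1):
    # Tail-recursive in form, but CPython still consumes one stack frame
    # per call, so large n hits the recursion limit.
    if n <= 1:
        return acc
    return fact_recursive(n - 1, acc * n)

def fact_iterative(n):
    # The manual conversion: loop in a single frame instead of recursing.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```

The iterative form computes `fact_iterative(5000)` without issue, while the recursive form would exceed the default limit well before that.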
How to find the number of nested lists in a list? | 26,877,186 | 13 | 2014-11-12T00:31:05Z | 26,877,236 | 15 | 2014-11-12T00:36:52Z | [
"python",
"list",
"recursion"
] | The function takes a list and returns an int depending on how many lists are in the list not including the list itself. (For the sake of simplicity we can assume everything is either an integer or a list.)
For example:
```
x=[1,2,[[[]]],[[]],3,4,[1,2,3,4,[[]] ] ]
count_list(x) # would return 8
```
I think using rec... | This seems to do the job:
```
def count_list(l):
count = 0
for e in l:
if isinstance(e, list):
count = count + 1 + count_list(e)
return count
``` |
How to find the number of nested lists in a list? | 26,877,186 | 13 | 2014-11-12T00:31:05Z | 26,877,275 | 20 | 2014-11-12T00:41:10Z | [
"python",
"list",
"recursion"
] | The function takes a list and returns an int depending on how many lists are in the list not including the list itself. (For the sake of simplicity we can assume everything is either an integer or a list.)
For example:
```
x=[1,2,[[[]]],[[]],3,4,[1,2,3,4,[[]] ] ]
count_list(x) # would return 8
```
I think using rec... | You can do it with a recursive function:
```
def count(l):
return sum(1+count(i) for i in l if isinstance(i,list))
```
Demo:
```
>>> x=[1,2,[[[]]],[[]],3,4,[1,2,3,4,[[]] ] ]
>>> count(x)
8
``` |
plyr or dplyr in Python | 26,878,476 | 9 | 2014-11-12T02:55:11Z | 29,585,283 | 14 | 2015-04-12T02:26:51Z | [
"python",
"pandas",
"plyr",
"dplyr"
] | This is more of a conceptual question; I do not have a specific problem
I am learning python for data analysis, but I am very familiar with R - one of the great things about R is plyr (and of course ggplot2) and even better dplyr. Pandas of course has split-apply as well; however, in R I can do things like (in dplyr, a ... | I'm also a big fan of dplyr for R and am working to improve my knowledge of Pandas. Since you don't have a specific problem, I'd suggest checking out the post below that breaks down the entire introductory dplyr vignette and shows how all of it can be done with Pandas.
For example, the author demonstrates chaining wit... |
plyr or dplyr in Python | 26,878,476 | 9 | 2014-11-12T02:55:11Z | 32,000,194 | 10 | 2015-08-13T23:43:18Z | [
"python",
"pandas",
"plyr",
"dplyr"
] | This is more of a conceptual question; I do not have a specific problem
I am learning python for data analysis, but I am very familiar with R - one of the great things about R is plyr (and of course ggplot2) and even better dplyr. Pandas of course has split-apply as well; however, in R I can do things like (in dplyr, a ... | One could simply use dplyr from Python.
There is an interface to `dplyr` in rpy2 (introduced with rpy2-2.7.0) that lets you write things like:
```
dataf = (DataFrame(mtcars).
filter('gear>3').
mutate(powertoweight='hp*36/wt').
group_by('gear').
summarize(mean_ptw='mean(powertoweigh... |
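For readers without R available, the same filter → mutate → group_by → summarize chain can be sketched with only the Python standard library; the three rows below are hypothetical stand-ins for a few mtcars columns:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical rows standing in for mtcars' gear, hp, and wt columns.
cars = [
    {"gear": 4, "hp": 110, "wt": 2.6},
    {"gear": 4, "hp": 93,  "wt": 2.3},
    {"gear": 5, "hp": 175, "wt": 2.8},
]

# filter(gear > 3), then mutate(powertoweight = hp * 36 / wt)
filtered = [dict(row, powertoweight=row["hp"] * 36 / row["wt"])
            for row in cars if row["gear"] > 3]

# group_by(gear), then summarize(mean_ptw = mean(powertoweight))
filtered.sort(key=itemgetter("gear"))  # groupby requires sorted input
summary = {}
for gear, grp in groupby(filtered, key=itemgetter("gear")):
    grp = list(grp)
    summary[gear] = sum(r["powertoweight"] for r in grp) / len(grp)
```

This is only a conceptual analogue; with pandas, the same chain is a one-liner via `assign`, `query`, and `groupby`.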
Writing then reading in-memory bytes (BytesIO) gives a blank result | 26,879,981 | 10 | 2014-11-12T05:36:10Z | 26,880,042 | 19 | 2014-11-12T05:40:40Z | [
"python",
"byte",
"gzip",
"bytesio"
] | I wanted to try out the python BytesIO class.
As an experiment I tried writing to a zip file in memory, and then reading the bytes back out of that zip file. So instead of passing in a file-object to `gzip`, I pass in a `BytesIO` object. Here is the entire script:
```
from io import BytesIO
import gzip
# write bytes... | You need to [`seek`](https://docs.python.org/3/library/io.html#io.IOBase.seek) back to the beginning of the file after writing the initial in-memory file...
```
myio.seek(0)
``` |
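A complete round-trip sketch showing where the `seek(0)` belongs (the payload is arbitrary example data):

```python
import gzip
from io import BytesIO

payload = b"hello, bytesio"

# Write gzip-compressed bytes into an in-memory buffer.
myio = BytesIO()
with gzip.GzipFile(fileobj=myio, mode="wb") as f:
    f.write(payload)

# Without this seek, reading starts at the end of the buffer and yields b"".
myio.seek(0)

with gzip.GzipFile(fileobj=myio, mode="rb") as f:
    restored = f.read()
```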
pandas create new column based on values from other columns | 26,886,653 | 22 | 2014-11-12T12:08:12Z | 26,887,820 | 51 | 2014-11-12T13:11:09Z | [
"python",
"numpy",
"pandas"
] | I've tried different methods from other questions but still can't seem to find the right answer for my problem. The critical piece of this is that if the person is counted as Hispanic they can't be counted as anything else. Even if they have a "1" in another ethnicity column they still are counted as Hispanic not a two... | OK, two steps to this - first is to write a function that does the translation you want - I've put an example together based on your pseudo-code:
```
def label_race (row):
if row['eri_hispanic'] == 1 :
return 'Hispanic'
if row['eri_afr_amer'] + row['eri_asian'] + row['eri_hawaiian'] + row['eri_nat_amer'] +... |
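The answer's mechanism is to pass a row-wise function to pandas (`df.apply(label_race, axis=1)`). Since the precedence rule is the interesting part, here is a pandas-free sketch of it on plain dictionaries; the column names come from the answer, while the non-Hispanic branches are illustrative assumptions because the original code is truncated:

```python
def label_race(row):
    # Hispanic takes precedence over every other ethnicity flag.
    if row["eri_hispanic"] == 1:
        return "Hispanic"
    # Hypothetical continuation: more than one remaining flag -> "Two Or More".
    flags = ("eri_afr_amer", "eri_asian", "eri_hawaiian",
             "eri_nat_amer", "eri_white")
    if sum(row[c] for c in flags) > 1:
        return "Two Or More"
    if row["eri_white"] == 1:
        return "White"
    return "Other"

rows = [
    {"eri_hispanic": 1, "eri_afr_amer": 1, "eri_asian": 0,
     "eri_hawaiian": 0, "eri_nat_amer": 0, "eri_white": 0},
    {"eri_hispanic": 0, "eri_afr_amer": 0, "eri_asian": 1,
     "eri_hawaiian": 0, "eri_nat_amer": 0, "eri_white": 1},
]
labels = [label_race(r) for r in rows]
```

The first row is labeled "Hispanic" even though another flag is set, which is exactly the precedence behavior the question asks for.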