title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
How does \v differ from \x0b or \x0c? | 26,184,100 | 18 | 2014-10-03T17:50:16Z | 26,184,126 | 29 | 2014-10-03T17:52:35Z | [
"python",
"string",
"escaping",
"whitespace"
] | Typing `string.whitespace` gives you a string containing all whitespace characters defined by Python's `string` module:
```
'\t\n\x0b\x0c\r '
```
Both `\x0b` and `\x0c` seem to give a vertical tab.
```
>>> print 'first\x0bsecond'
first
second
```
`\v` gives the same effect. How are these three different? Why d... | `\v` *is* `\x0b`:
```
>>> '\v'
'\x0b'
```
but the string literal representation in Python uses the `\x0b` notation instead.
The Python string literal representation only ever uses `\n`, `\r` and `\t`; everything else that is not a printable ASCII character is represented using the `\xhh` notation.
`\x0c... |
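The equivalences the answer describes are easy to verify directly; a minimal sketch (Python 3 interpreter, though the same holds in Python 2):

```python
# '\v' and '\f' are just alternate spellings of the same code points
assert '\v' == '\x0b'   # vertical tab (VT)
assert '\f' == '\x0c'   # form feed (FF)

# repr() only uses \n, \r and \t by name; other escapes fall back to \xhh
assert repr('\v') == "'\\x0b'"
assert repr('\t') == "'\\t'"
```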
You are trying to add a non-nullable field 'new_field' to userprofile without a default | 26,185,687 | 22 | 2014-10-03T19:42:08Z | 26,185,765 | 18 | 2014-10-03T19:47:39Z | [
"python",
"django"
] | I know that from Django 1.7 I don't need to use South or any other migration system, so I am just using the simple command `python manage.py makemigrations`
However, all I get is this error:
```
You are trying to add a non-nullable field 'new_field' to userprofile without a default;
we can't do that (the database needs s... | You need to provide a default value:
```
new_field = models.CharField(max_length=140, default='SOME STRING')
``` |
You are trying to add a non-nullable field 'new_field' to userprofile without a default | 26,185,687 | 22 | 2014-10-03T19:42:08Z | 27,036,431 | 9 | 2014-11-20T09:50:52Z | [
"python",
"django"
] | I know that from Django 1.7 I don't need to use South or any other migration system, so I am just using the simple command `python manage.py makemigrations`
However, all I get is this error:
```
You are trying to add a non-nullable field 'new_field' to userprofile without a default;
we can't do that (the database needs s... | One option is to declare a default value for 'new\_field':
```
new_field = models.CharField(max_length=140, default='DEFAULT VALUE')
```
another option is to declare 'new\_field' as a nullable field:
```
new_field = models.CharField(max_length=140, null=True)
```
If you decide to accept 'new\_field' as a nullable f... |
You are trying to add a non-nullable field 'new_field' to userprofile without a default | 26,185,687 | 22 | 2014-10-03T19:42:08Z | 30,456,675 | 20 | 2015-05-26T11:01:21Z | [
"python",
"django"
] | I know that from Django 1.7 I don't need to use South or any other migration system, so I am just using the simple command `python manage.py makemigrations`
However, all I get is this error:
```
You are trying to add a non-nullable field 'new_field' to userprofile without a default;
we can't do that (the database needs s... | **If** you are in an early development cycle and **don't care** about your current **database data**, you can just remove the database and then migrate. But first you need to clean out the migrations dir:
```
rm your_app/migrations/*
rm db.sqlite3
python manage.py makemigrations
python manage.py migrate
``` |
Pandas Dataframe CSV export, how to prevent additional double-quote characters | 26,186,033 | 4 | 2014-10-03T20:07:28Z | 26,186,178 | 7 | 2014-10-03T20:17:59Z | [
"python",
"csv",
"pandas"
] | I am using Pandas to process and output data for a table which is published in Wordpress
I am adding HTML code to format color one column
Starting with a sample Dataframe:
```
import numpy as np
import pandas as pd
df = pd.DataFrame({
'A': ['group1', 'group2', 'group3'],
'B': ['foo', ... | ```
df.to_csv('test.csv', index=False, quoting=csv.QUOTE_NONE)
```
References:
* <http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html>
* <https://docs.python.org/2/library/csv.html#csv.QUOTE_NONE>
Sample Program:
```
import numpy as np
import pandas as pd
import csv
df = pd.DataFrame(... |
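pandas passes these quoting constants straight through to the standard `csv` module, so the effect of `QUOTE_NONE` can be sketched with the stdlib alone (made-up two-field row; note that once quoting is disabled, an `escapechar` is needed whenever the data contains the delimiter):

```python
import csv
import io

row = ["group1", "foo,bar"]

# default behaviour (QUOTE_MINIMAL): the field with a comma gets quoted
minimal = io.StringIO()
csv.writer(minimal).writerow(row)
assert minimal.getvalue().strip() == 'group1,"foo,bar"'

# QUOTE_NONE: no quotes added; the embedded comma is escaped instead
none = io.StringIO()
csv.writer(none, quoting=csv.QUOTE_NONE, escapechar='\\').writerow(row)
assert none.getvalue().strip() == 'group1,foo\\,bar'
```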
Parallelize apply after pandas groupby | 26,187,759 | 12 | 2014-10-03T22:43:28Z | 27,026,479 | 7 | 2014-11-19T20:35:42Z | [
"python",
"pandas",
"parallel-processing",
"rosetta"
] | I have used rosetta.parallel.pandas\_easy to parallelize apply after group by, for example:
```
from rosetta.parallel.pandas_easy import groupby_to_series_to_frame
df = pd.DataFrame({'a': [6, 2, 2], 'b': [4, 5, 6]},index= ['g1', 'g1', 'g2'])
groupby_to_series_to_frame(df, np.mean, n_jobs=8, use_apply=True, by=df.index... | I have a hack I use for getting parallelization in Pandas. I break my dataframe into chunks, put each chunk into the element of a list, and then use ipython's parallel bits to do a parallel apply on the list of dataframes. Then I put the list back together using pandas `concat` function.
This is not generally applicab... |
Parallelize apply after pandas groupby | 26,187,759 | 12 | 2014-10-03T22:43:28Z | 27,027,632 | 25 | 2014-11-19T21:46:17Z | [
"python",
"pandas",
"parallel-processing",
"rosetta"
] | I have used rosetta.parallel.pandas\_easy to parallelize apply after group by, for example:
```
from rosetta.parallel.pandas_easy import groupby_to_series_to_frame
df = pd.DataFrame({'a': [6, 2, 2], 'b': [4, 5, 6]},index= ['g1', 'g1', 'g2'])
groupby_to_series_to_frame(df, np.mean, n_jobs=8, use_apply=True, by=df.index... | This seems to work, although it really should be built into pandas
```
import pandas as pd
from joblib import Parallel, delayed
import multiprocessing
def tmpFunc(df):
df['c'] = df.a + df.b
return df
def applyParallel(dfGrouped, func):
retLst = Parallel(n_jobs=multiprocessing.cpu_count())(delayed(func)(... |
Parallelize apply after pandas groupby | 26,187,759 | 12 | 2014-10-03T22:43:28Z | 29,281,494 | 9 | 2015-03-26T14:47:24Z | [
"python",
"pandas",
"parallel-processing",
"rosetta"
] | I have used rosetta.parallel.pandas\_easy to parallelize apply after group by, for example:
```
from rosetta.parallel.pandas_easy import groupby_to_series_to_frame
df = pd.DataFrame({'a': [6, 2, 2], 'b': [4, 5, 6]},index= ['g1', 'g1', 'g2'])
groupby_to_series_to_frame(df, np.mean, n_jobs=8, use_apply=True, by=df.index... | Ivan's answer is great, but it looks like it can be slightly simplified, also removing the need to depend on joblib:
```
from multiprocessing import Pool, cpu_count
def applyParallel(dfGrouped, func):
with Pool(cpu_count()) as p:
ret_list = p.map(func, [group for name, group in dfGrouped])
return pand... |
Cython: are typed memoryviews the modern way to type numpy arrays? | 26,189,570 | 12 | 2014-10-04T04:19:30Z | 26,192,382 | 26 | 2014-10-04T11:30:27Z | [
"python",
"numpy",
"cython"
] | Let's say I'd like to pass a numpy array to a `cdef` function:
```
cdef double mysum(double[:] arr):
cdef int n = len(arr)
cdef double result = 0
for i in range(n):
result = result + arr[i]
return result
```
Is this the modern way to handle typing numpy arrays? Compare with this question: [c... | I will quote from [the docs](http://docs.cython.org/src/userguide/memoryviews.html):
> Memoryviews are similar to the current NumPy array buffer support (`np.ndarray[np.float64_t, ndim=2]`), but they have more features and cleaner syntax.
This indicates that the developers of Cython consider memory views to b... |
Why does a class' body get executed at definition time? | 26,193,653 | 7 | 2014-10-04T14:13:28Z | 26,194,738 | 10 | 2014-10-04T16:24:26Z | [
"python",
"class",
"python-2.7",
"python-internals"
] | In contrast to functions, a class' body is executed at definition time:
```
class A(object):
print 'hello'
```
Out:
```
hello
```
Why is it the case? Is it related to `@classmethod` / `@staticmethod` methods and class attributes? | **Everything** is executed at the module level when Python first imports a module. Function bodies (and generator expression bodies) are the *exception* here, not the rule. Python executes everything to create the *objects* contained in a module; like everything in Python, classes are objects, and so are functions.
Th... |
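That execution order is easy to observe (Python 3 syntax here, unlike the question's Python 2 print): statements in the class body run once, top to bottom, when the `class` statement is reached, and the names they bind become class attributes.

```python
executed = []

class A:
    executed.append("class body ran")   # runs at definition time
    x = 1 + 1                           # becomes a class attribute

assert executed == ["class body ran"]   # ran before any instance exists
assert A.x == 2
```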
Numpy: Rearrange array based upon index array | 26,194,389 | 9 | 2014-10-04T15:43:28Z | 26,194,423 | 10 | 2014-10-04T15:46:58Z | [
"python",
"arrays",
"numpy"
] | I'm looking for a one line solution that would help me do the following.
Suppose I have
```
array = np.array([10, 20, 30, 40, 50])
```
I'd like to rearrange it based upon an input ordering. If there were a numpy function called `arrange`, it would do the following:
```
newarray = np.arrange(array, [1, 0, 3, 4, 2])
... | You can simply use your "index" list directly, as, well, an index array:
```
>>> arr = np.array([10, 20, 30, 40, 50])
>>> idx = [1, 0, 3, 4, 2]
>>> arr[idx]
array([20, 10, 40, 50, 30])
```
It tends to be much faster if `idx` is already an `ndarray` and not a `list`, even though it'll work either way:
```
>>> %timeit... |
numpy column arrays and strange results | 26,195,390 | 2 | 2014-10-04T17:38:31Z | 26,195,660 | 9 | 2014-10-04T18:08:28Z | [
"python",
"arrays",
"numpy"
] | I am trying to write a function whose arguments are arrays with different shapes. I am having some trouble understanding column arrays and making my function work for all shapes of arrays; here are the problems I found:
**Transposing:**
If the argument array `A` is not a vector then I can transpose it nicely usi... | If you're used to Matlab, Numpy's way of dealing with "column" and "row" vectors is a little strange. The thing to realize is that a 1-d array is *neither* a column nor a row vector. To be a column or row vector, an array has to be a *2-d array with one dimension set to one*. You can tell the difference between a 1-d a... |
sklearn: How to speed up a vectorizer (eg Tfidfvectorizer) | 26,195,699 | 3 | 2014-10-04T18:12:34Z | 26,212,970 | 7 | 2014-10-06T09:04:19Z | [
"python",
"scikit-learn",
"nltk"
] | After thoroughly profiling my program, I have been able to pinpoint that it is being slowed down by the vectorizer.
I am working on text data, and two lines of simple tfidf unigram vectorization are taking up 99.2% of the total time the code takes to execute.
Here is a runnable example (this will download a 3mb traini... | Unsurprisingly, it's NLTK that is slow:
```
>>> tfidf = StemmedTfidfVectorizer(min_df=1, stop_words='english', analyzer='word', ngram_range=(1,1))
>>> %timeit tfidf.fit_transform(X_train)
1 loops, best of 3: 4.89 s per loop
>>> tfidf = TfidfVectorizer(min_df=1, stop_words='english', analyzer='word', ngram_range=(1,1))... |
How to replace different blocks of text in all combinations using awk? | 26,196,691 | 2 | 2014-10-04T20:05:46Z | 26,196,758 | 7 | 2014-10-04T20:13:45Z | [
"python",
"regex",
"bash",
"perl",
"awk"
] | I'm trying to replace blocks of lines like this pattern:
* A block of lines is formed by the lines below which have a minor number.
* When a line has the "=", then this block of lines could replace the block named after the "="
Let's see an example, this input:
```
01 hello
02 stack
02 overflow
04 h... | Here is a python script to read the cobol input file and print out all the possible combinations of defined and redefined variables:
```
#!/usr/bin/python
"""Read cobol file and print all possible redefines."""
import sys
from itertools import product
def readfile(fname):
"""Read cobol file & return a master list... |
GDB pretty printing ImportError: No module named 'printers' | 26,205,564 | 6 | 2014-10-05T17:58:31Z | 26,205,929 | 9 | 2014-10-05T18:37:02Z | [
"python",
"c++",
"stl",
"gdb",
"pretty-print"
] | I'm trying to add [pretty printing](https://sourceware.org/gdb/wiki/STLSupport) for STL in my GDB on Ubuntu 14.04. Some details on the tools:
OS: Ubuntu 14.04
gdb version: 7.7
python version: 2.7.6
python3 version: 3.4.0
But after I set it up exactly as the instructions said, I still get the following errors:
```... | I just tried something myself, and luckily, now it's working. At least it can print out the map and vector content as expected. Here is what I did:
Since it's complaining that it can't find the `printer.py` module, I thought I should probably tell the Python interpreter where this file is located. So I first added this... |
Calculate weighted average using a pandas/dataframe | 26,205,922 | 11 | 2014-10-05T18:36:05Z | 26,206,622 | 8 | 2014-10-05T19:56:52Z | [
"python",
"numpy",
"pandas"
] | I have the following table. I want to calculate a weighted average grouped by each date based on the formula below. I can do this using some standard conventional code, but assuming that this data is in a pandas dataframe, is there any easier way to achieve this rather than through iteration?
```
Date ID w... | I think I would do this with two groupbys.
First to calculate the "weighted average":
```
In [11]: g = df.groupby('Date')
In [12]: df.value / g.value.transform("sum") * df.wt
Out[12]:
0 0.125000
1 0.250000
2 0.416667
3 0.277778
4 0.444444
dtype: float64
```
If you set this as a column, you can groupb... |
Custom user in django 1.7 raises ValueError | 26,207,022 | 2 | 2014-10-05T20:42:38Z | 26,207,074 | 17 | 2014-10-05T20:49:48Z | [
"python",
"django",
"django-1.7",
"django-custom-user"
] | Even this simple example throws a `ValueError: Dependency on app with no migrations: myApp` during `python manage.py syncdb`
**myApp/models.py**
```
from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
pass
```
**settings.py**
```
AUTH_USER_MODEL = 'myApp.User'
```
---
Running `./man... | I guess I was looking in the wrong place for an answer:
I solved this by running: `./manage.py makemigrations myApp`
```
(env)Bens-MacBook-Pro:social_auth bdhammel$ ./manage.py makemigrations myApp
Migrations for 'myApp':
0001_initial.py:
- Create model User
(env)Bens-MacBook-Pro:social_auth bdhammel$ python ma... |
Getting Django 1.7 to work on Google App Engine | 26,207,326 | 6 | 2014-10-05T21:18:39Z | 26,212,173 | 16 | 2014-10-06T08:10:58Z | [
"python",
"django",
"google-app-engine",
"google-cloud-sql"
] | Can anyone help to point us to instructions on how to get Django >1.5 working on Google App Engine? I have seen a number of people claim they have Django 1.6 working. We'd like to get 1.6 or 1.7 running. I have searched here for instructions on how to set this up. No luck so far.
Update:
In our development machine we ... | > You can use any pure Python third party libraries in your Google App Engine application. In order to use a third party library, simply include the files in your application's directory, and they will be uploaded with your application when you deploy it to our system. You can import the files as you would any other Py... |
What's the difference between '_io' and 'io'? | 26,208,863 | 2 | 2014-10-06T01:07:38Z | 26,208,904 | 7 | 2014-10-06T01:15:49Z | [
"python",
"python-3.x"
] | I tried the code below. The `f` is of type `_io.TextIOWrapper`, but I cannot find any help info about this type, while there does exist another similar type, `io.TextIOWrapper`.
```
>>> f=open("c:\setup.log","r")
>>> type(f)
<class '_io.TextIOWrapper'>
>>> help(_io.TextIOWrapper)
Traceback (most recent call last):
F... | The `_io` module provides the C code that the `io` module uses internally. The source for it can be found [here](https://hg.python.org/cpython/file/8da1aa71cd73/Modules/_io). You can actually import both `io` and `_io` separately:
```
>>> import _io
>>> import io
>>> _io
<module 'io' (built-in)> # The fact that this ... |
Swapping maximum and minimum values in a list | 26,209,142 | 4 | 2014-10-06T01:57:53Z | 26,209,162 | 9 | 2014-10-06T02:01:45Z | [
"python"
] | Given a list (for instance, [1,1,2,1,2,2,3]) which is not sorted highest to lowest, and contains multiples of all numbers, I need to swap, in place, the maximums with the minimums, the second maxes with the second mins, etc. So, our example list would become [3,3,2,3,2,2,1].
Also, just to clarify, it's not just the ma... | This is one way to do it, possible because Python is such an expressive language:
```
>>> a = [1,1,2,1,2,2,3]
>>> d = dict(zip(sorted(set(a)), sorted(set(a), reverse=True)))
>>> [d[x] for x in a]
[3, 3, 2, 3, 2, 2, 1]
``` |
How to use ModelMultipleChoiceFilter? | 26,210,217 | 4 | 2014-10-06T04:56:55Z | 32,369,999 | 8 | 2015-09-03T07:55:13Z | [
"python",
"django",
"django-rest-framework",
"django-filter"
] | I have been trying to get a ModelMultipleChoiceFilter to work for hours and have read both the DRF and Django Filters documentation.
I want to be able to filter a set of Websites based on the tags that have been assigned to them via a ManyToManyField. For example I want to be able to get a list of websites that have b... | I stumbled across this question while trying to solve a nearly identical problem to yourself, and while I could have just written a custom filter, your question got me intrigued and I had to dig deeper!
It turns out that a `ModelMultipleChoiceFilter` only makes one change over a normal `Filter`, as seen in the `django... |
combine python coverage files? | 26,214,055 | 4 | 2014-10-06T10:10:15Z | 26,219,253 | 8 | 2014-10-06T15:09:50Z | [
"python",
"unit-testing",
"code-coverage",
"coverage.py",
"python-coverage"
] | I'm wondering if it's possible to combine `coverage.xml` files into 1 file to see global report in HTML output.
I've got my `unit/functional tests` running as 1 command and `integration tests` as the second command. That means my coverage for `unit/functional tests` is overridden by the `integration tests` run.
That would be great... | You can't combine .xml files, but you can combine the raw data files. Your workflow would look like this:
```
$ COVERAGE_FILE=.coverage_func coverage run the_functional_tests.py
$ COVERAGE_FILE=.coverage_inte coverage run the_integration_tests.py
$ coverage combine
$ coverage xml
``` |
Filter based on another RDD in Spark | 26,214,112 | 9 | 2014-10-06T10:13:17Z | 26,216,315 | 7 | 2014-10-06T12:33:16Z | [
"python",
"scala",
"apache-spark"
] | I would like to keep only the employees that have a department ID referenced in the second table.
```
Employee table
LastName DepartmentID
Rafferty 31
Jones 33
Heisenberg 33
Robinson 34
Smith 34
Department table
DepartmentID
31
33
```
I have tried the following code which does not work:
```
e... | I finally implemented a solution using a join. I had to add a 0 value to the department to avoid an exception from Spark:
```
employee = [['Raffery',31], ['Jones',33], ['Heisenberg',33], ['Robinson',34], ['Smith',34]]
department = [31,33]
# invert id and name to get id as the key
employee = sc.parallelize(employee).ma... |
Filter based on another RDD in Spark | 26,214,112 | 9 | 2014-10-06T10:13:17Z | 26,259,750 | 12 | 2014-10-08T14:42:19Z | [
"python",
"scala",
"apache-spark"
] | I would like to keep only the employees that have a department ID referenced in the second table.
```
Employee table
LastName DepartmentID
Rafferty 31
Jones 33
Heisenberg 33
Robinson 34
Smith 34
Department table
DepartmentID
31
33
```
I have tried the following code which does not work:
```
e... | In this case, what you would like to achieve is to filter at each partition with the data contained in the department table:
This would be the basic solution:
```
val dept = deptRdd.collect.toSet
val employeesWithValidDeptRdd = employeesRdd.filter{case (employee, d) => dept.contains(d)}
```
If your department data is... |
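Outside Spark, the collect-the-small-table idea reduces to building a set and filtering with it; a plain-Python sketch of the same logic, using the question's data:

```python
employees = [("Rafferty", 31), ("Jones", 33), ("Heisenberg", 33),
             ("Robinson", 34), ("Smith", 34)]
departments = {31, 33}   # the small table, collected locally into a set

# keep only employees whose department appears in the set
kept = [(name, dept) for name, dept in employees if dept in departments]
assert kept == [("Rafferty", 31), ("Jones", 33), ("Heisenberg", 33)]
```

In Spark the set would be broadcast so each partition filters locally, exactly as the answer describes.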
why python regex is so slow? | 26,214,328 | 5 | 2014-10-06T10:28:31Z | 26,214,625 | 8 | 2014-10-06T10:48:28Z | [
"python",
"regex"
] | After long debugging I found out why my application using Python regexps is slow. Here is something I find surprising:
```
import datetime
import re
pattern = re.compile('(.*)sol(.*)')
lst = ["ciao mandi "*10000 + "sol " + "ciao mandi "*10000,
"ciao mandi "*1000 + "sal " + "ciao mandi "*1000]
for s in lst:
pr... | The Thompson NFA approach changes regular expressions from default greedy to default non-greedy. Normal regular expression engines can do the same; simply change `.*` to `.*?`. You should not use greedy expressions when non-greedy will do.
Someone already built an NFA regular expression parser for Python: <https://git... |
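The greedy/non-greedy swap the answer suggests does not change what a unique match captures, only how much backtracking the engine does along the way; a quick check using the question's pattern (shortened input so it runs fast):

```python
import re

text = "ciao mandi " * 100 + "sol " + "ciao mandi " * 100

greedy = re.compile(r'(.*)sol(.*)')    # the question's pattern
lazy = re.compile(r'(.*?)sol(.*)')     # non-greedy variant

# "sol" occurs exactly once, so both variants capture the same prefix
assert greedy.match(text).group(1) == lazy.match(text).group(1)
```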
How to install pygments on ubuntu? | 26,215,738 | 4 | 2014-10-06T11:56:40Z | 26,215,857 | 7 | 2014-10-06T12:04:17Z | [
"python",
"django",
"django-rest-framework"
] | I'm following Django-rest-framework.org tutorial and this is the models.py's code as below.
```
from django.db import models
from pygments.lexers import get_all_lexers
from pygments.styles import get_all_styles
LEXERS = [item for item in get_all_lexers() if item[1]]
LANGUAGE_CHOICES = sorted([(item[1][0], item[0]) fo... | Most basically open a terminal with `Ctrl`-`Alt`-`t` and type `sudo apt-get install python-pygments`. That will work but there is a better way, which I'll explain.
When you're developing a web app you will eventually want to deploy it. You'll want the environment on which you're developing to be as similar to the one ... |
venv doesn't create activate script python3 | 26,215,790 | 7 | 2014-10-06T12:00:20Z | 26,314,477 | 14 | 2014-10-11T12:07:57Z | [
"python",
"ubuntu",
"python-3.x",
"virtualenv",
"python-venv"
] | When trying to create a virtualenv using venv with Python 3 on Ubuntu, it isn't creating an activate script. It continually exits with error 1.
Following docs and other posts on SO such as <http://stackoverflow.com/a/19848770>
I have tried creating it 2 different ways.
```
sayth@sayth-TravelMate-5740G:~/scripts$ ... | Looks like you are using `Ubuntu 14.04`. It was shipped with a [broken](https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1290847) [`pyvenv`](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=732703). There is a simple work around to create venv using `Python 3`
**1. Create venv without pip**
```
python3 -m ve... |
How to do encapsulation in Python? | 26,216,563 | 2 | 2014-10-06T12:48:05Z | 26,216,917 | 16 | 2014-10-06T13:08:14Z | [
"python",
"encapsulation"
] | What's wrong with this, from objective and functional standpoints?
```
import sys
class EncapsulationClass(object):
def __init__(self):
self.privates = ["__dict__", "privates", "protected", "a"]
self.protected = ["b"]
print self.privates
self.a = 1
self.b = 2
self.c = 3
pass
def _... | Python has encapsulation - you are using it in your class.
What it doesn't have is access control such as private and protected attributes. However, in Python, there is an attribute naming convention to denote private attributes by prefixing the attribute with one or two underscores, e.g.:
```
self._a
self.__a
```
A ... |
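The double-underscore form additionally triggers name mangling, which is what gives it its mild "private" flavour:

```python
class C:
    def __init__(self):
        self._a = 1     # single underscore: convention only
        self.__b = 2    # double underscore: mangled to _C__b

c = C()
assert c._a == 1              # nothing prevents outside access
assert c._C__b == 2           # the mangled name is still reachable
assert not hasattr(c, "__b")  # but the plain name no longer exists
```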
How to get the name of a submitted form in Flask? | 26,217,779 | 5 | 2014-10-06T13:51:54Z | 26,217,842 | 7 | 2014-10-06T13:54:35Z | [
"python",
"forms",
"post",
"flask"
] | I'm building a website using Flask, and on one page I've got two forms. If there's a POST, I need to decide which form is being posted. I can of course deduct it from the fields that are present in `request.form`, but I would rather make it explicit by getting the name (defined by `<form name="my_form">`) of the form t... | There is no 'name of the form'. That information is not sent by the browser; the `name` attribute on `<form>` tags is meant to be used solely on the browser side (and deprecated to boot, use `id` instead).
You could *add* that information by using a hidden field, but the most common way to distinguish between forms po... |
Matplotlib histogram with collection bin for high values | 26,218,704 | 5 | 2014-10-06T14:42:22Z | 30,305,331 | 7 | 2015-05-18T14:04:21Z | [
"python",
"matplotlib",
"histogram",
"bins"
] | I have an array with values, and I want to create a histogram of it. I am mainly interested in the low end numbers, and want to collect every number above 300 in one bin. This bin should have the same width as all other (equally wide) bins. How can I do this?
Note: this question is related to this question: [Defining ... | Numpy has a handy function for dealing with this: [`np.clip`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.clip.html). Basically, it does Artem's "dirty hack" inline. You can leave the values as they are, but in the `hist` call, just wrap the array in an `np.clip` call, like so
```
plt.hist(np.clip(values... |
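A minimal sketch of the clipping step itself (assuming NumPy; everything above 300 lands in the top bin because it is replaced by 300 before binning):

```python
import numpy as np

values = np.array([10, 250, 299, 400, 1000])
clipped = np.clip(values, None, 300)   # upper bound only, no lower bound

assert clipped.tolist() == [10, 250, 299, 300, 300]
```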
ProgrammingError: relation "django_session" does not exist error after installing Psycopg2 | 26,220,689 | 15 | 2014-10-06T16:32:18Z | 26,220,719 | 22 | 2014-10-06T16:34:25Z | [
"python",
"django",
"psycopg2",
"django-1.7"
] | I started to develop a Django-based web application. Everything was fine until I installed `Psycopg2` for my database, which I created in `PostgreSQL`. Now when I'm trying to open any page in my site, it throws a `ProgrammingError: relation "django_session" does not exist` error.
```
Request Method: GET
Request URL: ... | If you change the database, you should create database for the new database.
Use [`manage.py migrate`](https://docs.djangoproject.com/en/1.7/ref/django-admin/#migrate-app-label-migrationname) command. (or [`manage.py syncdb`](https://docs.djangoproject.com/en/1.7/ref/django-admin/#syncdb) if you used it for old databa... |
Dealing with the class imbalance in binary classification | 26,221,312 | 8 | 2014-10-06T17:14:25Z | 26,244,744 | 15 | 2014-10-07T20:31:22Z | [
"python",
"machine-learning",
"classification"
] | Here's a brief description of my problem:
1. I am working on a *supervised learning* task to train a *binary* classifier.
2. I have a dataset with a large class *imbalance*: 8 negative instances for every positive one.
3. I use the *f-measure*, i.e. the harmonic mean between precision and sensitivity (recall), to as... | Both weighting (cost-sensitive) and thresholding are valid forms of cost-sensitive learning. In the briefest terms, you can think of the two as follows:
# Weighting
Essentially one is asserting that the "cost" of misclassifying the rare class is worse than misclassifying the common class. This is **applied at the... |
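The thresholding half of the answer can be sketched with made-up probabilities (not from the answer): instead of cutting at 0.5, lower the decision threshold so the rare positive class is predicted more often.

```python
probs = [0.15, 0.40, 0.70, 0.90]   # hypothetical P(positive) per sample

default = [int(p >= 0.5) for p in probs]
lowered = [int(p >= 0.3) for p in probs]   # favour the rare positive class

assert default == [0, 0, 1, 1]
assert lowered == [0, 1, 1, 1]
```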
ndb Models are not saved in memcache when using MapReduce | 26,223,098 | 2 | 2014-10-06T19:08:18Z | 29,980,942 | 8 | 2015-05-01T02:44:34Z | [
"python",
"google-app-engine",
"mapreduce",
"memcached",
"app-engine-ndb"
] | I've created two MapReduce Pipelines for uploading CSVs files to create Categories and Products in bulk. Each product is gets tied to a Category through a KeyProperty. The Category and Product models are built on ndb.Model, so based on the documentation, I would think they'd be automatically cached in Memcache when ret... | MapReduce intentionally disables memcache for NDB.
See [mapreduce/util.py](http://googleappengine.googlecode.com/svn/trunk/python/google/appengine/ext/mapreduce/util.py) ln 373, `_set_ndb_cache_policy()` (as of 2015-05-01):
```
def _set_ndb_cache_policy():
"""Tell NDB to never cache anything in memcache or in-proce... |
Do I need to explicitly pass multiprocessing.Queue instance variables to a child Process executing on an instance method? | 26,225,108 | 8 | 2014-10-06T21:24:46Z | 26,241,406 | 7 | 2014-10-07T17:06:01Z | [
"python",
"multiprocessing",
"python-multiprocessing"
] | I have a few basic questions when it comes to using Python's `multiprocessing` module:
```
class Someparallelworkerclass(object) :
def __init__(self):
self.num_workers = 4
self.work_queue = multiprocessing.JoinableQueue()
self.result_queue = multiprocessing.JoinableQueue()
def someparalle... | It's actually not necessary to include the queues in the `args` argument in this case, no matter what platform you're using. The reason is that even though it doesn't look like you're explicitly passing the two `JoinableQueue` instances to the child, you actually are - via `self`. Because `self` *is* explicitly being p... |
Is there a way to get the run count of a for loop not using range? | 26,225,823 | 3 | 2014-10-06T22:25:30Z | 26,225,833 | 10 | 2014-10-06T22:26:26Z | [
"python"
] | If I have a for loop using a range like this:
```
for x in range(10):
```
then in order to get the count, it's just x. But say I have a for loop using a list:
```
layer = [somedata,someotherdata...etc]
for row in layer:
print #the number the loop is on
```
Is there a way to do this besides specifying an intege... | You can use `enumerate`. This will give you a count of every iteration and the value you're iterating.
Note: like `range` you can specify at what index to begin counting.
```
for count, row in enumerate(layer):
print count
``` |
Pandas concat gives error ValueError: Plan shapes are not aligned | 26,226,343 | 11 | 2014-10-06T23:21:31Z | 27,412,913 | 15 | 2014-12-10T23:40:06Z | [
"python",
"pandas",
"concat"
] | I am quite new to pandas. I am attempting to concat a set of dataframes and I am getting this error:
```
ValueError: Plan shapes are not aligned
```
My understanding of concat is that it will join where columns are the same, but for those that it can't find it will fill with NA. This doesn't seem to be the case here.
Heres... | In case it helps, I have also hit this error when I tried to concatenate two data frames (and as of the time of writing this is the only related hit I can find on google other than the source code).
I don't know whether this answer would have solved the OP's problem (since they didn't post enough information), but for... |
Pip build option to use multicore | 26,228,136 | 25 | 2014-10-07T03:29:46Z | 32,598,533 | 14 | 2015-09-16T01:58:45Z | [
"python",
"install",
"pip"
] | I found that pip only uses a single core when it compiles packages. Since some python packages take some time to build using pip, I'd like to utilize multiple cores on the machine. With a Makefile, I can do that with the following command:
```
make -j4
```
How can I achieve same thing for pip? | Use: **--install-option="--jobs=6"**.
```
pip3 install --install-option="--jobs=6" PyXXX
```
I had the same need: use pip install to speed up the compile process. My target pkg is PySide. At first I used `pip3 install pyside`; it took me nearly 30 minutes (AMD 1055T 6-cores, 10G RAM), and only one core took 100% loa... |
python if statement evaluation with multiple values | 26,228,747 | 2 | 2014-10-07T04:50:58Z | 26,228,765 | 10 | 2014-10-07T04:53:23Z | [
"python"
] | I'm not exactly sure why but when I execute this section of code nothing happens.
```
while (True) :
choice = str(input("Do you want to draw a spirograph? (Y/N) "))
if choice == 'n' or 'N' :
break
elif choice == 'y' or 'Y' :
<CODE>
else :
print("Please enter a valid comm... | It won't work because the `'N'` literal always evaluates to `True` within your `if` statement.
Your `if` condition currently stands as `if choice == 'n' or 'N' :`, which is equivalent to `if (choice == 'n') or ('N')`, which will always evaluate to `True` irrespective of the value of variable `choice`, since the litera... |
Scapy installation fails on osx with dnet import error | 26,229,057 | 18 | 2014-10-07T05:24:01Z | 26,247,500 | 14 | 2014-10-08T00:42:43Z | [
"python",
"osx",
"installation",
"importerror",
"scapy"
] | Having trouble installing Scapy and its required dependencies. I have spent some time Googling for a solution but all 'solutions' seem to affect older versions of Python, or simply do not work.
Script:
```
#!/usr/bin/python
import threading
import Queue
import time
from scapy.all import *
class WorkerThread(thread... | **EDIT** - [This answer below](http://stackoverflow.com/a/28385092/3903832) states that all mentioned issues were fixed, and provides a much simpler installation method. However, its comments suggest that although it seems to work on *OS X 10.10 Yosemite* and *OS X 10.11 El Capitan*, it might fail for certain other ver... |
Scapy installation fails on osx with dnet import error | 26,229,057 | 18 | 2014-10-07T05:24:01Z | 28,385,092 | 13 | 2015-02-07T17:23:44Z | [
"python",
"osx",
"installation",
"importerror",
"scapy"
] | Having trouble installing Scapy and its required dependencies. I have spent some time Googling for a solution but all 'solutions' seem to affect older versions of Python, or simply do not work.
Script:
```
#!/usr/bin/python
import threading
import Queue
import time
from scapy.all import *
class WorkerThread(thread... | All issues mentioned above seem all fixed. I'm running OS X Yosemite. I got a working scapy by simply the following three commands.
```
brew install --with-python libdnet
pip install pcapy
pip install scapy
``` |
How should I indent ternary conditional operator in python so that it complies with PEP8? | 26,233,481 | 6 | 2014-10-07T10:04:46Z | 26,233,610 | 7 | 2014-10-07T10:11:46Z | [
"python",
"formatting",
"pep8",
"ternary"
] | PEP8 doesn't say anything about ternary operators, if I am not mistaken.
So what do you suggest, how should I write long lines with ternary conditional operators?
```
some_variable = some_very_long_value \
if very_long_condition_holds \
else very_long_condition_doesnt_hold
```
or
```
Neither. For any long line, it's usually better to use parentheses to allow line breaks. Opinions differ on whether you should do this:
```
some_variable = (some_very_long_value
if very_long_condition_holds
else very_long_condition_doesnt_hold)
```
or this:
```
some_variable = (
some... |
How to run bash command inside python script? | 26,236,126 | 11 | 2014-10-07T12:33:55Z | 26,236,170 | 18 | 2014-10-07T12:35:57Z | [
"python",
"bash"
] | In a bash script I am trying to run python and bash command both.
Somewhere in it I want to execute some bash command inside a python loop.
```
#!/bin/bash
python << END
for i in range(1000):
    #execute some bash command such as echoing i
END
```
how can I do this? | The simplest way, not recommendable:
```
import os
# ...
os.system(commandString)
```
Better use [subprocess](https://docs.python.org/3/library/subprocess.html#subprocess.call), e.g.:
```
import subprocess
# ...
subprocess.call("echo {}".format(i), shell=True)
```
Note that shell escaping is your job with these functions... |
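As a sketch of the point about shell escaping (assuming a POSIX system with an `echo` binary): passing the arguments as a list and skipping the shell avoids the escaping problem entirely, since each argument goes straight to the program:

```python
import subprocess

# List form: no shell is involved, so there is nothing to escape.
raw = subprocess.check_output(["echo", "hello world"])
output = raw.decode().strip()
```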
How to run bash command inside python script? | 26,236,126 | 11 | 2014-10-07T12:33:55Z | 26,236,441 | 8 | 2014-10-07T12:50:43Z | [
"python",
"bash"
] | In a bash script I am trying to run python and bash command both.
Somewhere in it I want to execute some bash command inside a python loop.
```
#!/bin/bash
python << END
for i in range(1000):
    #execute some bash command such as echoing i
END
```
how can I do this? | Look in to the [subprocess](https://docs.python.org/3/library/subprocess.html) module. There is the Popen method and some wrapper functions like `call`.
* If you need to check the output (retrieve the result string):
```
output = subprocess.check_output(args ....)
```
* If you want to wait for execution to end ... |
Python does not create log file | 26,237,870 | 5 | 2014-10-07T14:02:09Z | 26,241,374 | 9 | 2014-10-07T17:03:41Z | [
"python",
"logging"
] | I am trying to implement some logging for recording messages. I am getting some weird behavior so I tried to find a minimal example, which I found [here](https://docs.python.org/2/howto/logging.html#logging-to-a-file). When I just copy the easy example described there into my interpreter the file is not created as you ... | The reason for your unexpected result is that you are using something on top of Python (looks like IPython) which configures the root logger itself. As per [the documentation for basicConfig()](https://docs.python.org/2/library/logging.html#logging.basicConfig),
> This function does nothing if the root logger already ... |
Counting total number of tasks executed in a multiprocessing.Pool during execution | 26,238,691 | 6 | 2014-10-07T14:42:00Z | 26,239,072 | 7 | 2014-10-07T15:00:36Z | [
"python",
"parallel-processing",
"multiprocessing"
] | I'd like to give an indication of the overall progress so far. I'm farming work out and would like to know the current progress. So if I send `100` jobs to `10` processors, how can I show what the current number of jobs that have returned is? I can get the ids, but how do I count up the number of complet... | If you use `pool.map_async` you can pull this information out of the [`MapResult`](https://docs.python.org/2.7/library/multiprocessing.html#multiprocessing.pool.AsyncResult) instance that gets returned. For example:
```
import multiprocessing
import time
def worker(i):
time.sleep(i)
return i
if __name__ == ... |
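The answer's example is truncated above; as an alternative sketch (not the original code), completed jobs can also be counted with an `apply_async` callback. `count_completed` is a hypothetical helper name:

```python
import multiprocessing

def worker(i):
    return i * i

def count_completed(n_jobs, processes=2):
    # The callback runs in the parent process as each job returns, so the
    # length of `completed` is the live progress count.
    completed = []
    pool = multiprocessing.Pool(processes)
    for i in range(n_jobs):
        pool.apply_async(worker, (i,), callback=completed.append)
    pool.close()
    pool.join()
    return len(completed), sorted(completed)

if __name__ == '__main__':
    done, results = count_completed(10)
```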
Django and Middleware which uses request.user is always Anonymous | 26,240,832 | 10 | 2014-10-07T16:30:51Z | 26,246,463 | 7 | 2014-10-07T22:40:02Z | [
"python",
"django",
"django-rest-framework"
] | I'm trying to make middleware which alters some fields for the user based on subdomain, etc...
The only problem is the request.user always comes in as AnonymousUser within the middleware, but is then the correct user within the views. I've left the default authentication and session middleware django uses within the s... | Hey guys I've solved this problem by getting DRF token from the requests and loading request.user to the user associated to that model.
I had the default django authentication and session middleware, but it seems DRF was using its token auth after middleware to resolve the user (All requests were CORS requests, this ... |
Create celery tasks then run synchronously | 26,241,381 | 9 | 2014-10-07T17:04:30Z | 28,450,072 | 7 | 2015-02-11T08:52:57Z | [
"python",
"django",
"celery",
"django-celery"
] | My app gathers a bunch of phone numbers on a page. Once the user hits the submit button I create a celery task to call each number and give a reminder message then redirect them to a page where they can see the live updates about the call. I am using web sockets to live update the status of each call and need the tasks... | If you look at the [celery DOCS on tasks](http://docs.celeryproject.org/en/latest/reference/celery.app.task.html) you see that to call a task synchronosuly, you use the apply() method as opposed to the apply\_async() method.
So in your case you could use:
```
reminder.apply(args=[number])
```
The DOCS also note tha... |
How to analyze all duplicate entries in this Pandas DataFrame? | 26,244,309 | 12 | 2014-10-07T20:04:08Z | 26,244,925 | 20 | 2014-10-07T20:42:19Z | [
"python",
"pandas",
"dataframe"
] | I'd like to be able to compute descriptive statistics on data in a Pandas DataFrame, but I only care about duplicated entries. For example, let's say I have the DataFrame created by:
```
import pandas as pd
data={'key1':[1,2,3,1,2,3,2,2],'key2':[2,2,1,2,2,4,2,2],'data':[5,6,2,6,1,6,2,8]}
frame=pd.DataFrame(data,column... | **EDIT for *Pandas 0.17* or later:**
As the `take_last` argument of the `duplicated()` method was [deprecated](http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#whatsnew-0170-deprecations) in favour of the new `keep` argument since *Pandas 0.17*, please refer to [this answer](http://stackoverflow.com/a/3338115... |
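Restating the Pandas 0.17+ approach as a runnable sketch using the question's sample data: `keep=False` flags every member of each duplicated `(key1, key2)` group, so descriptive statistics can then be computed on the duplicates only:

```python
import pandas as pd

data = {'key1': [1, 2, 3, 1, 2, 3, 2, 2],
        'key2': [2, 2, 1, 2, 2, 4, 2, 2],
        'data': [5, 6, 2, 6, 1, 6, 2, 8]}
frame = pd.DataFrame(data, columns=['key1', 'key2', 'data'])

# keep=False marks all rows whose (key1, key2) pair occurs more than once.
dups = frame[frame.duplicated(subset=['key1', 'key2'], keep=False)]
stats = dups['data'].describe()
```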
NumPy: Return 0 with divide by zero | 26,248,654 | 10 | 2014-10-08T03:13:05Z | 26,248,892 | 7 | 2014-10-08T03:43:38Z | [
"python",
"arrays",
"numpy",
"error-handling",
"divide-by-zero"
] | I'm trying to perform an element wise divide in python, but if a zero is encountered, I need the quotient to just be zero.
For example:
```
array1 = np.array([0, 1, 2])
array2 = np.array([0, 1, 1])
array1 / array2 # should be np.array([0, 1, 2])
```
I could always just use a for-loop through my data, but to really ... | Try doing it in two steps. Division first, then replace.
```
with numpy.errstate(divide='ignore'):
result = numerator / denominator
result[denominator == 0] = 0
```
The `numpy.errstate` line is optional, and just prevents numpy from telling you about the "error" of dividing by zero, since you're already inten... |
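The two-step recipe above as a complete snippet, using the question's arrays. `invalid='ignore'` is added here because `0/0` on floats raises an "invalid" (not "divide") warning:

```python
import numpy as np

array1 = np.array([0, 1, 2], dtype=float)
array2 = np.array([0, 1, 1], dtype=float)

# Divide first (warnings silenced), then overwrite the entries where the
# denominator was zero.
with np.errstate(divide='ignore', invalid='ignore'):
    result = array1 / array2
result[array2 == 0] = 0
```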
NumPy: Return 0 with divide by zero | 26,248,654 | 10 | 2014-10-08T03:13:05Z | 32,106,804 | 15 | 2015-08-19T22:41:36Z | [
"python",
"arrays",
"numpy",
"error-handling",
"divide-by-zero"
] | I'm trying to perform an element wise divide in python, but if a zero is encountered, I need the quotient to just be zero.
For example:
```
array1 = np.array([0, 1, 2])
array2 = np.array([0, 1, 1])
array1 / array2 # should be np.array([0, 1, 2])
```
I could always just use a for-loop through my data, but to really ... | Building on the other answers, and improving on:
* `0/0` handling by adding `invalid='ignore'` to [`numpy.errstate()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.errstate.html)
* introducing [`numpy.nan_to_num()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.nan_to_num.html) to convert `np.... |
NumPy: Return 0 with divide by zero | 26,248,654 | 10 | 2014-10-08T03:13:05Z | 35,696,047 | 10 | 2016-02-29T09:34:14Z | [
"python",
"arrays",
"numpy",
"error-handling",
"divide-by-zero"
] | I'm trying to perform an element wise divide in python, but if a zero is encountered, I need the quotient to just be zero.
For example:
```
array1 = np.array([0, 1, 2])
array2 = np.array([0, 1, 1])
array1 / array2 # should be np.array([0, 1, 2])
```
I could always just use a for-loop through my data, but to really ... | Building on @Franck Dernoncourt's answer, fixing -1 / 0:
```
def div0( a, b ):
""" ignore / 0, div0( [-1, 0, 1], 0 ) -> [0, 0, 0] """
with np.errstate(divide='ignore', invalid='ignore'):
c = np.true_divide( a, b )
c[ ~ np.isfinite( c )] = 0 # -inf inf NaN
return c
div0( [-1, 0, 1], 0 )
ar... |
Can I use multiprocessing.Pool in a method of a class? | 26,249,442 | 3 | 2014-10-08T04:57:34Z | 26,249,586 | 9 | 2014-10-08T05:09:36Z | [
"python",
"python-3.x",
"multiprocessing",
"python-multiprocessing"
] | I am trying to use `multiprocessing` in my code for better performance.
However, I got an error as follows:
```
Traceback (most recent call last):
File "D:\EpubBuilder\TinyEpub.py", line 49, in <module>
e.epub2txt()
File "D:\EpubBuilder\TinyEpub.py", line 43, in epub2txt
tempread = self.get_text()
  File "... | The issue is that you've got an unpicklable instance variable (`namelist`) in the `Book` instance. Because you're calling `pool.map` on an instance method, and you're running on Windows, the entire instance needs to be picklable in order for it to be passed to the child process. `Book.namelist` is an open file object (`... |
How to use wxPython for Python 3? | 26,251,030 | 6 | 2014-10-08T07:08:24Z | 26,252,046 | 7 | 2014-10-08T08:09:34Z | [
"python",
"osx",
"python-3.x",
"wxpython"
] | I installed `wxPython 3.0.1.1`, but I'm unable to `import wx` using `Python 3.4.1`. I am getting the following error:
```
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 00:54:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import wx... | You have two different pythons installed on your machine (3.4.1 and 2.7.5). Do not expect to be able to use one package installed in one python (wxPython 3.0.1.1 at python 2.7.5) automatically to be available in another python.
Additionally wxPython (classic) does not work for Python 3. You need [*wxPython Phoenix*](h... |
Pythonic way to write long import statements | 26,253,545 | 2 | 2014-10-08T09:29:20Z | 26,253,564 | 7 | 2014-10-08T09:30:14Z | [
"python",
"django",
"python-import"
] | What is the pythonic way to import from Models (or Forms or views) in Django?
To be frank, I bluntly do this:
```
from myapp.models import foo, bar, foobar, barfoo, foofoo, barbar, barfoobar, thelistgoeson, and, on, andon...
```
It is far longer than the maximum of 79 characters - but what is the better way to do... | Use parentheses to group your imports together:
```
from myapp.models import (foo, bar, foobar, barfoo, foofoo,
barbar, barfoobar, thelistgoeson, and, on, and, so, on)
```
This is in accordance with [PEP-328 Rationale for parentheses](http://legacy.python.org/dev/peps/pep-0328/#rationale-for-parentheses):
> Curr... |
Django 1.7 makemigrations - ValueError: Cannot serialize function: lambda | 26,256,450 | 4 | 2014-10-08T12:02:49Z | 27,250,585 | 8 | 2014-12-02T13:23:51Z | [
"python",
"django"
] | I switched to Django 1.7. When I try makemigrations for my application, it crashes. The crash report is:
```
Migrations for 'roadmaps':
0001_initial.py:
- Create model DataQualityIssue
- Create model MonthlyChange
- Create model Product
- Create model ProductGroup
- Create model RecomendedStack
-... | We had this issue with using lambda in the custom field definition.
It is then hard to spot, as it is not listed in the traceback and the error is not raised on the particular model which uses such a custom field.
Our way to fix:
* check all your custom fields (even in the 3rd party libraries)
* change the lambda to callab... |
Two different submit buttons in same form in Django | 26,256,469 | 4 | 2014-10-08T12:04:01Z | 26,256,936 | 7 | 2014-10-08T12:29:54Z | [
"python",
"django",
"forms",
"django-models",
"django-forms"
] | I have an UpdateView in Django.
I have just a normal submit button. When the object is updated correctly it redirects to an object list via `success_url`.
Can I make two different submit buttons: One button which submits and redirects to objects list page (ListView) and another button which submits and redirects to t... | Since you're submitting to the same place, and only want to change the redirect destination after save, this is simple. Submit buttons are just like any other input controls in that they have a name and a value, and you receive these in the POST data. So, in your template you can have:
```
<input type="submit" name="l... |
The view didn't return an HttpResponse object. It returned None instead | 26,258,905 | 5 | 2014-10-08T14:04:27Z | 26,258,906 | 20 | 2014-10-08T14:04:27Z | [
"python",
"django",
"django-views"
] | I have the following simple view. Why is it resulting in this error?
`The view auth_lifecycle.views.user_profile didn't return an HttpResponse object. It returned None instead.`
```
"""Renders web pages for the user-authentication-lifecycle project."""
from django.shortcuts import render
from django.tem... | Because the view must *return* `render`, not just call it. Change the last line to
```
return render(request, 'auth_lifecycle/user_profile.html',
context_instance=RequestContext(request))
``` |
How can I randomly choose a maths operator and ask recurring maths questions with it? | 26,260,950 | 3 | 2014-10-08T15:39:04Z | 26,261,125 | 16 | 2014-10-08T15:47:40Z | [
"python",
"math",
"random"
] | I have a simple maths task I'm having problems executing, involving the random import.
The idea is that there is a quiz of 10 randomly generated questions. I've got the numbers ranging from (0,12) using the random.randint function; that works fine. It's the next bit of choosing a random operator I'm having problems with... | How about you make a dictionary that maps the operator's character (e.g. '+') to the operator (e.g. `operator.add`)? Then sample that, format your string, and perform the operation.
```
import random
import operator
```
Generating a random mathematical expression
```
def randomCalc():
ops = {'+':operator.add,
... |
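The dictionary-sampling idea above is cut off; it might be completed along these lines. `random_calc` and its return shape are assumptions of this sketch, not the original answer's code:

```python
import operator
import random

def random_calc(rng=random):
    # Map each symbol to its operator function, then sample one pair.
    ops = {'+': operator.add,
           '-': operator.sub,
           '*': operator.mul}
    num1 = rng.randint(0, 12)
    num2 = rng.randint(0, 12)
    symbol, func = rng.choice(sorted(ops.items()))
    question = '{} {} {}'.format(num1, symbol, num2)
    return question, func(num1, num2)
```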
Python Add to dictionary loop | 26,263,682 | 2 | 2014-10-08T18:15:28Z | 26,263,716 | 7 | 2014-10-08T18:17:17Z | [
"python",
"dictionary"
] | This function is supposed to add a name and number to the dictionary 'phoneBook' when I run the loop, but for some reason I can't get it to work. Any ideas on why not? Thanks a lot!
```
phoneBook = dict()
def addNumber(name, number):
for i in phoneBook:
if i == name:
print 'err... | You don't really need the loop at all, you can just use the `in` keyword to check the name against the existing keys in the dictionary.
```
phoneBook = dict()
def addNumber(name, number):
if name in phoneBook:
print 'error'
else:
phoneBook[name] = number
``` |
Time zone field in isoformat | 26,264,897 | 6 | 2014-10-08T19:28:55Z | 26,266,741 | 9 | 2014-10-08T21:25:23Z | [
"python",
"datetime"
] | I have a timestamp that is supposed to be in EST:
```
2014-10-06T18:06:40-04:56
```
I understand this first part: `2014-10-06T18:06:40`, but not `-04:56`.
What does `-04:56` mean here?
Here is how I got that timestamp:
```
import datetime
start_time = datetime.datetime(year = 2014,
... | The problem is that [`pytz`](http://pytz.sourceforge.net/):
> … differs from the documented Python API for tzinfo implementations; if you want to create local wallclock times you need to use the `localize()` method documented in this document …
Further down, it says:
> Unfortunately using the tzinfo argument of... |
Easiest way to create a color gradient on excel using python/pandas? | 26,265,403 | 7 | 2014-10-08T20:00:14Z | 26,268,303 | 8 | 2014-10-09T00:02:12Z | [
"python",
"excel",
"pandas"
] | So I have data that I am outputting to an excel file using pandas' ExcelWriter. After the entire data is outputted to the Excel file, what is the easiest way to apply conditional formatting to it programmatically using Python?
I want to be able to do the equivalent (through Python) of selecting (in Excel) all the fill... | Here is an example of how to apply a conditional format to the XlsxWriter Excel file created by Pandas:
```
import pandas as pd
# Some sample data to plot.
list_data = [30, 40, 50, 40, 20, 10, 5]
# Create a Pandas dataframe from the data.
df = pd.DataFrame(list_data)
# Create a Pandas Excel writer using XlsxWriter ... |
How to count the Nan values in the column in Panda Data frame | 26,266,362 | 53 | 2014-10-08T21:00:19Z | 26,266,439 | 18 | 2014-10-08T21:05:52Z | [
"python",
"pandas"
] | I have data in which I want to find the number of NaN values, so that if it is less than some threshold, I will drop the column. I looked, but wasn't able to find any function for this. There is count\_values(), but it would be slow for me, because most of the values are distinct and I want the count of NaN only. | You could subtract the total length from the [count](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html) of non-nan values:
```
count_nan = len(df) - df.count()
```
You should time it on your data. For a small Series I got a 3x speed-up in comparison with the `isnull` solution. |
How to count the Nan values in the column in Panda Data frame | 26,266,362 | 53 | 2014-10-08T21:00:19Z | 26,266,451 | 97 | 2014-10-08T21:06:28Z | [
"python",
"pandas"
] | I have data in which I want to find the number of NaN values, so that if it is less than some threshold, I will drop the column. I looked, but wasn't able to find any function for this. There is count\_values(), but it would be slow for me, because most of the values are distinct and I want the count of NaN only. | You can use the `isnull()` method and then sum to count the nan values. For one column:
```
In [1]: s = pd.Series([1,2,3, np.nan, np.nan])
In [4]: s.isnull().sum()
Out[4]: 2
```
For several columns, it also works:
```
In [5]: df = pd.DataFrame({'a':[1,2,np.nan], 'b':[np.nan,1,np.nan]})
In [6]: df.isnull().sum()
Ou... |
How to count the Nan values in the column in Panda Data frame | 26,266,362 | 53 | 2014-10-08T21:00:19Z | 26,272,425 | 8 | 2014-10-09T07:14:27Z | [
"python",
"pandas"
] | I have data in which I want to find the number of NaN values, so that if it is less than some threshold, I will drop the column. I looked, but wasn't able to find any function for this. There is count\_values(), but it would be slow for me, because most of the values are distinct and I want the count of NaN only. | Since pandas 0.14.1 my suggestion [here](https://github.com/pydata/pandas/issues/5569) to have a keyword argument in the value\_counts method has been implemented:
```
import pandas as pd
df = pd.DataFrame({'a':[1,2,np.nan], 'b':[np.nan,1,np.nan]})
for col in df:
print df[col].value_counts(dropna=False)
2 1
... |
TypeError: list indices must be integers, not dict | 26,266,425 | 4 | 2014-10-08T21:04:58Z | 26,266,465 | 9 | 2014-10-08T21:07:32Z | [
"python"
] | My json file looks like this and I'm trying to access the element `syslog` in a for loop.
```
{
"cleanup":{
"folderpath":"/home/FBML7HR/logs",
"logfilename":""
},
"preparation":{
"configuration":{
"src_configfile":"src.cfg",
"dest_configfile":"/var/home/FBML7HR/etc/vxn.cfg"
},
"ex... | You are looping over the *values* in the list referenced by `data['execution']`, *not* indices.
Just use those values (dictionaries) **directly**:
```
for i in data['execution']:
cmd = i['test_case']['scriptname']
```
You probably want to give that a more meaningful loop name:
```
for entry in data['execution']... |
how to use python2.7 pip instead of default pip | 26,266,437 | 5 | 2014-10-08T21:05:41Z | 26,267,333 | 20 | 2014-10-08T22:12:30Z | [
"python",
"linux",
"django",
"centos",
"pip"
] | I just installed python 2.7 and also pip to the 2.7 site package.
When I get the version with:
```
pip -V
```
It shows:
```
pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6)
```
How do I use the 2.7 version of pip located at:
```
/usr/local/lib/python2.7/site-packages
``` | There should be a binary called "pip2.7" installed at some location included within your $PATH variable.
You can find that out by typing
```
which pip2.7
```
This should print something like '/usr/local/bin/pip2.7' to your stdout. If it does not print anything like this, it is not installed. In that case, install it... |
WebdriverWait failing even though element is present | 26,267,565 | 2 | 2014-10-08T22:37:53Z | 26,267,615 | 7 | 2014-10-08T22:43:08Z | [
"python",
"html",
"selenium",
"xpath",
"selenium-webdriver"
] | Here is my code:
```
def CheckQueue(driver):
qdone = False
qID_xpath_start = "/html/body/div[5]/form/table/tbody[1]/tr["
qID_xpath_end = "]/td[2]/a"
qIDindex = 1
while qdone == False:
print "enter loop"
print driver.find_element_by_xpath(qID_xpath_start+str(qIDindex)+qID_xpath_end).... | The [`WebDriverWait` expression syntax](http://selenium-python.readthedocs.org/en/latest/waits.html#explicit-waits) is not correct, it should be:
```
WebDriverWait(driver, 60).until(ec.presence_of_element_located((By.XPATH, qID_xpath_start+str(qIDindex)+qID_xpath_end)))
```
Note the tuple passed into the `presence_of... |
Recursive definitions in Pandas | 26,267,809 | 8 | 2014-10-08T23:01:56Z | 26,268,338 | 10 | 2014-10-09T00:06:30Z | [
"python",
"numpy",
"pandas"
] | I have a time-series `A` holding several values. I need to obtain a series `B` that is defined algebraically as follows:
```
B[t] = a * A[t] + b * B[t-1]
```
where we can assume `B[0] = 0`, and `a` and `b` are real numbers.
Is there any way to do this type of recursive computation in Pandas? Or do I have no choice b... | As I noted in a comment, you can use [`scipy.signal.lfilter`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.lfilter.html). In this case (assuming `A` is a one-dimensional numpy array), all you need is:
```
B = lfilter([a], [1.0, -b], A)
```
Here's a complete script:
```
import numpy as np
from sci... |
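The answer's complete script is cut off above; as a dependency-free cross-check of the `lfilter` approach, the recurrence can be written as a plain loop. This sketch assumes the recursion is seeded with B[-1] = 0 (so B[0] = a*A[0]), which is what `lfilter([a], [1.0, -b], A)` computes:

```python
def recursive_filter(A, a, b):
    # B[t] = a*A[t] + b*B[t-1], seeded with B[-1] = 0 -- slow but easy to
    # verify, and a reference for the lfilter result.
    B = []
    prev = 0.0
    for x in A:
        prev = a * x + b * prev
        B.append(prev)
    return B
```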
How do you catch this exception? | 26,270,042 | 52 | 2014-10-09T03:54:44Z | 26,270,785 | 112 | 2014-10-09T05:12:33Z | [
"python",
"django",
"exception"
] | This code is in django/db/models/fields.py It creates/defines an exception?
```
class ReverseSingleRelatedObjectDescriptor(six.with_metaclass(RenameRelatedObjectDescriptorMethods)):
# This class provides the functionality that makes the related-object
# managers available as attributes on a model class, for fi... | If your related model is called Foo you can just do:
```
except Foo.DoesNotExist:
```
Django is amazing when its not terrifying. `RelatedObjectDoesNotExist` is a property that returns a type that is figured out dynamically at runtime. That type uses `self.field.rel.to.DoesNotExist` as a base class. According to Djang... |
Can an asyncio event loop run in the background without suspending the Python interpreter? | 26,270,681 | 8 | 2014-10-09T05:03:09Z | 26,270,790 | 15 | 2014-10-09T05:13:02Z | [
"python",
"concurrency",
"python-asyncio"
] | The documentation for asyncio gives two examples for how to print "Hello World" every two seconds:
<https://docs.python.org/3/library/asyncio-eventloop.html#asyncio-hello-world-callback>
<https://docs.python.org/3/library/asyncio-task.html#asyncio-hello-world-coroutine>
I can run those from the interpreter, but if I d... | You can run the event loop inside a background thread:
```
>>> import asyncio
>>>
>>> @asyncio.coroutine
... def greet_every_two_seconds():
... while True:
... print('Hello World')
... yield from asyncio.sleep(2)
...
>>> def loop_in_thread(loop):
... asyncio.set_event_loop(loop)
... loop.... |
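The same background-thread pattern, written with modern syntax as an illustrative sketch (`async def` in place of the old `@asyncio.coroutine` decorator shown above, which has since been removed from Python):

```python
import asyncio
import threading

# Run the event loop in a daemon thread; the interpreter thread stays free.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def greet():
    await asyncio.sleep(0.01)
    return 'Hello World'

# Submit work to the loop thread and wait for its result here.
future = asyncio.run_coroutine_threadsafe(greet(), loop)
message = future.result(timeout=5)

loop.call_soon_threadsafe(loop.stop)
thread.join(timeout=5)
```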
What are the Spark transformations that causes a Shuffle? | 26,273,664 | 17 | 2014-10-09T08:25:10Z | 26,281,444 | 21 | 2014-10-09T14:47:20Z | [
"python",
"scala",
"apache-spark"
] | I have trouble finding in the Spark documentation which operations cause a shuffle and which do not. In this list, which ones cause a shuffle and which ones do not?
Map and filter do not. However, I am not sure about the others.
```
map(func)
filter(func)
flatMap(func)
mapPartitions(func)
mapPartit... | It is actually extremely easy to find this out, without the documentation. For any of these functions, just create an RDD and call toDebugString; here is one example, and you can do the rest on your own.
```
scala> val a = sc.parallelize(Array(1,2,3)).distinct
scala> a.toDebugString
MappedRDD[5] at distinct at <console>:12... |
Android compile error using buildozer | 26,277,154 | 5 | 2014-10-09T11:23:12Z | 26,308,042 | 10 | 2014-10-10T21:06:58Z | [
"android",
"python",
"osx",
"kivy"
] | I am trying to compile an .apk ([this one](https://github.com/tshirtman/kivy_service_osc)) using buildozer. But I keep getting the command failed message below. I've tried using a different python path, reinstalling buildozer, using different buildozer paths, compiling from Mac HD, compiling with sudo, a different cyth... | This is an issue with buildozer failing to install the Android build-tools. Hopefully this will be fixed soon. Until then, you can work around this by manually installing build-tools (<https://github.com/kivy/buildozer/issues/146#issuecomment-57061269>):
1. Run `~/.buildozer/android/platform/android-sdk-21/tools/andro... |
What's an elegant way to catch the same exception multiple times? | 26,277,455 | 3 | 2014-10-09T11:38:07Z | 26,277,643 | 7 | 2014-10-09T11:46:35Z | [
"python"
] | I have some Python code that tries a bunch of different database queries before it concludes that the database is empty and gives up. Here is a simplified example:
```
try:
result = Object.get(name="requested_object")
except Object.DoesNotExist:
try:
result = Object.get(name="default_object")
excep... | ```
for args in [{'name':"requested_object"}, {'name':"default_object"}, {'pk':1}]:
try:
result = Object.get(**args)
except Object.DoesNotExist as e:
continue
else:
break
else:
raise e
```
It's not clear what exception you want to raise if you never find what you want, you might... |
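One wrinkle worth noting: in Python 3 the `except ... as e` name is deleted when the handler exits, so `raise e` in the `else` clause above would fail with a NameError. A Python 3-safe sketch of the same pattern, with `NotFound` and `lookup` as stand-ins for `Object.DoesNotExist` and `Object.get`:

```python
class NotFound(Exception):
    pass

def first_match(lookup, attempts):
    # Capture the exception explicitly; `e` itself is unbound once the
    # except block exits in Python 3.
    last_exc = None
    for args in attempts:
        try:
            return lookup(**args)
        except NotFound as e:
            last_exc = e
    raise last_exc
```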
Pandas to_html() truncates string contents | 26,277,757 | 17 | 2014-10-09T11:52:12Z | 26,301,947 | 26 | 2014-10-10T14:38:22Z | [
"python",
"html",
"pandas"
] | I have a Python Pandas `DataFrame` object containing textual data. My problem is, that when I use [`to_html()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_html.html) function, it truncates the strings in the output.
For example:
```
import pandas
df = pandas.DataFrame({'text': ['Lorem i... | What you are seeing is pandas truncating the output for display purposes only.
The default `max_colwidth` value is 50 which is what you are seeing.
You can set this value to whatever you desire or you can set it to -1 which effectively turns this off:
```
pd.set_option('display.max_colwidth', -1)
```
Although I wou... |
Unexpected result with += on NumPy arrays | 26,278,241 | 24 | 2014-10-09T12:17:49Z | 26,278,583 | 23 | 2014-10-09T12:35:05Z | [
"python",
"arrays",
"numpy"
] | I am creating symmetric matrices/arrays in Python with NumPy, using a standard method:
```
x = rand(500,500)
x = (x+x.T)
all(x==x.T)
> True
```
Now let's be clever:
```
x = rand(500,500)
x += x.T
all(x==x.T)
> False
```
Wait, what?
```
x==x.T
> array([[ True, True, True, ..., False, False, False],
[ True,... | The [`transpose` operation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.matrix.transpose.html) returns a *view* of the array, which means that no new array is allocated. Which, in turn, means that you are reading and modifying the array at the same time. It's hard to tell why some sizes or some areas of t... |
Unexpected result with += on NumPy arrays | 26,278,241 | 24 | 2014-10-09T12:17:49Z | 26,280,846 | 19 | 2014-10-09T14:19:43Z | [
"python",
"arrays",
"numpy"
] | I am creating symmetric matrices/arrays in Python with NumPy, using a standard method:
```
x = rand(500,500)
x = (x+x.T)
all(x==x.T)
> True
```
Now let's be clever:
```
x = rand(500,500)
x += x.T
all(x==x.T)
> False
```
Wait, what?
```
x==x.T
> array([[ True, True, True, ..., False, False, False],
[ True,... | The implementation detail mentioned by others is called **buffering**. You can read more about it in [the docs on array iteration](http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html#buffering-the-array-elements).
If you look at your failing example in a little more detail:
```
>>> a = np.random.rand(91, 91)... |
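The usual fix, shown as a short sketch: force a copy of the transpose so the right-hand side no longer aliases the array being updated in place:

```python
import numpy as np

x = np.random.rand(500, 500)
# x.T is only a view; copying it first means the in-place add no longer
# reads elements it has already overwritten.
x += x.T.copy()
is_symmetric = bool((x == x.T).all())
```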
How do I mark a Python package as Python 2 only? | 26,278,348 | 6 | 2014-10-09T12:22:49Z | 26,336,151 | 7 | 2014-10-13T08:54:24Z | [
"python",
"package"
] | I have a [Python package](https://github.com/flycheck/sphinxcontrib-emacs) that only runs on Python 2. It has the following classifiers in its setup.py:
```
setup(
# ...
classifiers=[
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Pytho... | In setup.py, add this:
```
import sys
if sys.version_info[0] != 2:
sys.stderr.write("This package only supports Python 2.\n")
sys.exit(1)
``` |
error installing psycopg2, library not found for -lssl | 26,288,042 | 10 | 2014-10-09T21:17:31Z | 39,244,687 | 34 | 2016-08-31T08:39:07Z | [
"python",
"postgresql",
"psycopg2"
] | I run "sudo pip install psycopg2" and I get a bunch of output that looks like
```
cc -DNDEBUG -g -fwrapv -Os .....
.....
cc -DNDEBUG -g -fwrapv -Os .....
.....
```
And at the end it says:
```
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: comm... | For anyone looking for a solution for this on macOS Sierra 10.12: I fixed this by installing the command line tools:
```
xcode-select --install
```
After that, `pip install psycopg2` should work.
If it doesn't, you could also try to link against brew's openssl:
```
env LDFLAGS="-I/usr/local/opt/openssl/include -L/u... |
Sorting sets (not a single set) | 26,288,405 | 3 | 2014-10-09T21:42:48Z | 26,288,422 | 7 | 2014-10-09T21:44:01Z | [
"python",
"python-2.7",
"data-structures",
"set"
] | I have a number of sets with each set representing the unique items in a single data file. Some of these sets are subsets of others, some are identical.
Is there a primitive or a module that lets me sort the sets such that I get something like
```
A <= B <= C <= D <= E
```
et cetera for the sets A, B, C, D, E, F?
I... | Just put them in a list and sort them. If a set is a *subset* of another set their `<=` relationship is True, and this extends to sorting. In other words, what you want is the *default sort order already*.
Demo:
```
>>> A = {1, 2}
>>> B = A | {3}
>>> C = B.copy()
>>> D = C | {4}
>>> A <= D
True
>>> [B, C, D, A]
[set(... |
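A small runnable check of the claim, keeping in mind that `<=` on sets is only a partial order, so sorting is meaningful when the sets form a chain, as in the question:

```python
A = {1, 2}
B = A | {3}
C = set(B)          # identical to B
D = C | {4}

# sorted() compares sets with <, i.e. proper-subset, so a chain of nested
# sets comes out in subset order.
ordered = sorted([B, D, A, C])
```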
Use numpy to multiply a matrix across an array of points? | 26,289,972 | 3 | 2014-10-10T00:14:05Z | 26,290,110 | 7 | 2014-10-10T00:32:08Z | [
"python",
"numpy"
] | I've got an array which contains a bunch of points (3D vectors, specifically):
```
pts = np.array([
[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5],
])
```
And I would like to multiply each one of those points by a transformation matrix:
```
pts[0] = np.dot(transform_matrix, pts[0])
pts[1] ... | I find it helps to write the `einsum` version first-- after you see the indices you can often recognize that there's a simpler version. For example, starting from
```
>>> pts = np.random.random((5,3))
>>> transform_matrix = np.random.random((3,3))
>>>
>>> pts_brute = pts.copy()
>>> for i in range(len(pts_brute)):
...... |
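To make the equivalence concrete, a sketch with an illustrative transform (the scaling matrix is just an example, not from the question):

```python
import numpy as np

pts = np.array([[1., 1., 1.],
                [2., 2., 2.],
                [3., 3., 3.]])
transform_matrix = np.eye(3) * 2  # illustrative transform

# einsum spelling: out[n, i] = sum_j transform_matrix[i, j] * pts[n, j]
out_einsum = np.einsum('ij,nj->ni', transform_matrix, pts)

# Once the indices are visible, the simpler equivalent is a plain matmul:
out_dot = pts.dot(transform_matrix.T)

assert np.allclose(out_einsum, out_dot)
```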
Missing Dependencies Installing NumPy 1.9 for Python 3.4.1 32-bit on Windows 7 | 26,290,259 | 2 | 2014-10-10T00:48:32Z | 26,290,335 | 7 | 2014-10-10T01:01:09Z | [
"python",
"windows",
"numpy",
"amd-processor"
] | I cannot seem to get NumPy 1.9.0 installed with Python 3.4.1 32-bit on a Windows 7 AMD 64-bit machine.
I download the package, unzip it, and run:
```
python setup.py install
```
Here are the messages I get in the command shell.
I think my problem is that I don't have an Intel machine with MLK binaries available, because I ha... | If you want to try and build NumPy on a Windows machine, good for you. However, if you want to **use** NumPy on a Windows machine, head over to Christoph Gohlke's [Python Extension Packages for Windows](http://www.lfd.uci.edu/~gohlke/pythonlibs/) repository. He has MKL-linked versions of NumPy for 32- and 64-bit Intel/... |
Fail to install lxml using pip | 26,291,396 | 6 | 2014-10-10T03:14:41Z | 27,178,619 | 10 | 2014-11-27T21:03:45Z | [
"python",
"lxml"
] | This is the command I used to install lxml:
```
sudo pip install lxml
```
And I got the following message in the Cleaning Up stage:
```
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip_build_root/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__... | Just install these packages:
```
sudo apt-get install libxml2-dev libxslt-dev python-dev python-setuptools
```
Then try again:
```
pip install lxml
``` |
Changing the color of matplotlib's violin plots | 26,291,479 | 4 | 2014-10-10T03:26:04Z | 26,291,582 | 12 | 2014-10-10T03:42:19Z | [
"python",
"matplotlib"
] | Is there a way to change the color of the `violin` plots in matplotlib?
The default color is this "brownish" color, which is not too bad, but I'd like to color e.g., the first 3 violins differently to highlight them. I don't find any parameter in the [documentation](http://matplotlib.org/api/pyplot_api.html#matplotlib... | [`matplotlib.pyplot.violinplot()`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.violinplot) says it returns:
> A dictionary mapping each component of the violinplot to a list of the corresponding collection instances created. The dictionary has the following keys:
>
> * `bodies`: A list of the `matplotl... |
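Building on that return value, a sketch that recolors the first three violins (the sample data is made up; the `Agg` backend is used so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

data = [np.random.normal(0, s, 100) for s in (1, 2, 3, 4)]
parts = plt.violinplot(data)

# `bodies` is a list of PolyCollections, one per violin
for body in parts['bodies'][:3]:
    body.set_facecolor('red')
    body.set_edgecolor('black')
    body.set_alpha(0.7)
```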
Pyplot: using percentage on x axis | 26,294,360 | 6 | 2014-10-10T07:46:59Z | 26,294,785 | 13 | 2014-10-10T08:14:30Z | [
"python",
"matplotlib"
] | I have a line chart based on a simple list of numbers. By default the x-axis is just an increment of 1 for each value plotted. I would like it to be a percentage instead but can't figure out how. So instead of having an x-axis from 0 to 5, it would go from 0% to 100% (but keeping reasonably spaced tick marks. Code bel... | The code below will give you a simplified x-axis which is percentage based; it assumes that each of your values is spaced equally between 0% and 100%.
It creates a `perc` array which holds evenly-spaced percentages that can be used to plot with. It then adjusts the formatting for the x-axis so it includes a percentag... |
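A minimal sketch of that idea, using `matplotlib.ticker.FuncFormatter` to append a percent sign to evenly spaced ticks (the data values are illustrative):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import numpy as np

values = [3, 5, 2, 8, 6]                  # illustrative data
perc = np.linspace(0, 100, len(values))   # evenly spaced 0%..100% positions

fig, ax = plt.subplots()
ax.plot(perc, values)
# Render each x tick as e.g. "50%"
ax.xaxis.set_major_formatter(FuncFormatter(lambda x, pos: '{:.0f}%'.format(x)))
```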
Read multiple times lines of the same file Python | 26,294,912 | 2 | 2014-10-10T08:21:37Z | 26,294,982 | 8 | 2014-10-10T08:26:00Z | [
"python",
"file",
"for-loop"
] | I'm trying to read lines of some files multiple times in Python.
I'm using this basic way :
```
with open(name, 'r+') as file:
for line in file:
# Do Something with line
```
And that's working fine, but if I want to iterate a second time each lines while I'm still with my file op... | Use [file.seek()](https://docs.python.org/2/library/stdtypes.html#file.seek) to jump to a specific position in a file. However, think about whether it is really necessary to go through the file again. Maybe there is a better option.
```
with open(name, 'r+') as file:
for line in file:
# Do Something with l... |
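A self-contained sketch of the `seek(0)` rewind (the throwaway file exists only for the demo):

```python
import os
import tempfile

# Create a small throwaway file to iterate over
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:
    f.write('a\nb\nc\n')

with open(path) as f:
    first_pass = list(f)    # first iteration consumes the file
    f.seek(0)               # rewind to the beginning
    second_pass = list(f)   # the same lines can be read again

os.remove(path)
assert first_pass == second_pass == ['a\n', 'b\n', 'c\n']
```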
How does everything is an object even work? | 26,294,953 | 7 | 2014-10-10T08:24:23Z | 26,295,381 | 10 | 2014-10-10T08:47:31Z | [
"python",
"oop"
] | I understand the principal theory behind *Everything is an Object* but I really don't understand how it is implemented under the hood.
## Functions
So: `foo(4)` is the same as `foo.__call__(4)`. But what is stopping me from doing `foo.__call__.__call__(4)`?
`foo` is a function and `foo.__call__...` are all method wr... | I think what you're getting confused with is that although all of Python's variables might be objects and all the properties of those variables might be objects, there is a limit. I mean, for a normal class the structure usually goes:
```
myclass -> classobj -> type
```
which you can see if you try this in the consol... |
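To see concretely that each `__call__` wrapper is itself just another callable object delegating to the same function, a quick sketch:

```python
def foo(x):
    return x * 2

# Every level of the chain is a new method-wrapper object,
# but calling any of them ends up invoking the same function.
assert foo(4) == foo.__call__(4) == foo.__call__.__call__(4) == 8
```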
Linking Django and Postgresql with Docker | 26,295,061 | 8 | 2014-10-10T08:30:02Z | 26,296,462 | 8 | 2014-10-10T09:45:18Z | [
"python",
"django",
"postgresql",
"docker"
] | I have two Docker containers. The first one is Postgresql container, which I run using the following command.
```
sudo docker run -v /home/mpmsp/project/ezdict/postgresql/data:/var/lib/postgresql/data -p 127.0.0.1:5432:5432 -name my-postgres -d postgres
```
It is based on [official image](https://registry.hub.docker... | The *Dockerfile* for your Django image should not expose port `5432` as no Postgresql server will be running in any container created from that image:
```
FROM python:3-onbuild
EXPOSE 8000
CMD ["/bin/bash"]
```
Then as you are running the Django container linking it with
`--link my-postgres:my-postgres`
your settin... |
ValueError: total size of new array must be unchanged | 26,295,491 | 3 | 2014-10-10T08:52:43Z | 32,071,181 | 7 | 2015-08-18T11:29:37Z | [
"python",
"opencv",
"image-processing"
] | I am trying to execute the code from this [URL](http://stackoverflow.com/questions/13379909/compare-similarity-of-images-using-opencv-with-python). However, I started getting this error:
```
des = np.array(des,np.float32).reshape((1,128))
ValueError: total size of new array must be unchanged
```
I have not made any m... | I had the same issue. In my case, the length of the data had changed. The product of the `reshape` arguments must equal the length of the array you are reshaping.
In your case:
```
des = np.array(des,np.float32).reshape(1, len(des))
``` |
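A sketch of that constraint (the 64-element descriptor here is hypothetical, just to show the size mismatch):

```python
import numpy as np

des = np.arange(64, dtype=np.float32)  # hypothetical descriptor of length 64

# reshape((1, 128)) fails because 1 * 128 != 64:
try:
    des.reshape((1, 128))
except ValueError:
    pass  # "cannot reshape array of size 64 into shape (1,128)"

# Deriving the shape from the data itself always matches:
reshaped = des.reshape((1, len(des)))
assert reshaped.shape == (1, 64)
```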
Django testing model with ImageField | 26,298,821 | 11 | 2014-10-10T11:53:30Z | 26,307,916 | 12 | 2014-10-10T20:56:27Z | [
"python",
"django",
"testing",
"mocking"
] | I need to test the Photo model of my Django application. How can I mock the ImageField with a test image file?
**tests.py**
```
class PhotoTestCase(TestCase):
def test_add_photo(self):
newPhoto = Photo()
newPhoto.image = # ??????
newPhoto.save()
self.assertEqual(Photo.objects.coun... | For future users, I've solved the problem.
You can mock an ImageField with an SimpleUploadedFile instance.
**test.py**
```
from django.core.files.uploadedfile import SimpleUploadedFile
newPhoto.image = SimpleUploadedFile(name='test_image.jpg', content=open(image_path, 'rb').read(), content_type='image/jpeg')
``` |
pip broken after upgrading | 26,302,805 | 10 | 2014-10-10T15:23:55Z | 26,302,888 | 30 | 2014-10-10T15:27:50Z | [
"python",
"bash",
"pip",
"easy-install"
] | I did pip install -U easyinstall, and then pip install -U pip to upgrade my pip. However, I get this error now when trying to use pip:
```
root@d8fb98fc3a66:/# which pip
/usr/local/bin/pip
root@d8fb98fc3a66:/# pip
bash: /usr/bin/pip: No such file or directory
```
This is on an Ubuntu 12.04 in a Docker image. | One possible reason is bash's cache of remembered command locations.
You can clear the cached locations by issuing the following command:
```
hash -r
```
SIDENOTE: Instead of `which`, using `type` command, you can see the hashed location:
```
$ type pip
pip is /usr/local/bin/pip
$ pip -V
pip 1.5.6 from /usr/local/lib/python2.7/dist-packages (python ... |
How to ensure that a python function generates its output based only on its input? | 26,303,021 | 7 | 2014-10-10T15:35:28Z | 26,351,899 | 8 | 2014-10-14T02:51:34Z | [
"python",
"database",
"functional-programming",
"filesystems",
"httpwebrequest"
] | To generate an output, a function usually uses only the values of its arguments. However, there are also cases in which a function, to generate its output, reads something from a file system or from a database or from the web. I would like to have a simple and reliable way to ensure that something like that does not happen.
... | The answer to this is *no*. What you are looking for is a function that tests for `functional purity`. But, as demonstrated in this code, there's no way to guarantee that no side effects are actually being called.
```
class Foo(object):
def __init__(self, x):
self.x = x
def __add__(self, y):
pr... |
Python Mixed Integer Linear Programming | 26,305,704 | 17 | 2014-10-10T18:22:08Z | 26,314,315 | 22 | 2014-10-11T11:50:32Z | [
"python",
"linear-programming",
"glpk",
"integer-programming"
] | Are there any Mixed Integer Linear Programming(MILP) solver for Python?
Can GLPK python solve MILP problem? I read that it can solve Mixed integer problem.
I am very new to linear programming problems, so I am rather confused and can't really differentiate if Mixed Integer Programming is different from Mixed Integer L... | [**Pulp**](https://pythonhosted.org/PuLP/) is a Python modeling interface that hooks up to solvers like the open source [**CBC**](https://projects.coin-or.org/Cbc), [**CPLEX**](http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/), [**Gurobi**](http://www.gurobi.com/), [**XPRESS-MP**](http://www.fico.c... |
Why is the dict literal syntax preferred over the dict constructor? | 26,309,291 | 16 | 2014-10-10T23:09:31Z | 26,309,314 | 20 | 2014-10-10T23:12:37Z | [
"python",
"dictionary",
"object-literal"
] | Why is the Python dict constructor slower than using the literal syntax?
After hot debate with my colleague, I did some comparison and got the following statistics:
```
python2.7 -m timeit "d = dict(x=1, y=2, z=3)"
1000000 loops, best of 3: 0.47 usec per loop
python2.7 -m timeit "d = {'x': 1, 'y': 2, 'z': 3}"
100000... | The constructor is slower because it creates the object by calling the `dict()` function, whereas the compiler turns the dict literal into [`BUILD_MAP`](https://docs.python.org/2/library/dis.html#opcode-BUILD_MAP) bytecode, saving the function call. |
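You can check this yourself with the `dis` module; a sketch (the exact opcode names vary across Python versions, e.g. `BUILD_CONST_KEY_MAP` vs `BUILD_MAP`, `CALL_FUNCTION_KW` vs `CALL`):

```python
import dis

# The literal compiles straight to a map-building opcode:
dis.dis(compile("{'x': 1, 'y': 2}", '<literal>', 'eval'))

# dict(x=1, y=2) instead loads the name `dict` and emits a call opcode,
# paying a global name lookup and a function call at runtime:
dis.dis(compile("dict(x=1, y=2)", '<call>', 'eval'))
```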
python: Faster local maximum in 2-d matrix | 26,309,635 | 2 | 2014-10-10T23:56:51Z | 26,309,701 | 8 | 2014-10-11T00:05:42Z | [
"python",
"numpy",
"matrix",
"scipy"
] | Given: R is an mxn float matrix
Output: O is an mxn matrix where O[i,j] = R[i,j] if (i,j) is a local max and O[i,j] = 0 otherwise. Local maximum is defined as the maximum element in a 3x3 block centered at i,j.
What's a faster way to do this operation on python using numpy and scipy.
```
m,n = R.shape
for i in range... | You can use [`scipy.ndimage.maximum_filter`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.maximum_filter.html):
```
In [28]: from scipy.ndimage import maximum_filter
```
Here's a sample `R`:
```
In [29]: R
Out[29]:
array([[3, 3, 0, 0, 3],
[0, 0, 2, 1, 3],
[0, 1, 1, 1, 2],
... |
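Combining the filter with the zeroing step from the question gives a sketch like this (assumes SciPy is installed; `maximum_filter`'s default `reflect` boundary mode handles the edge cells):

```python
import numpy as np
from scipy.ndimage import maximum_filter

R = np.random.rand(6, 8)  # illustrative float matrix

# R[i, j] is a local max iff it equals the max of its 3x3 neighborhood
is_local_max = maximum_filter(R, size=3) == R
O = np.where(is_local_max, R, 0)
```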
NLP project on Comment Summarization | 26,312,002 | 3 | 2014-10-11T06:59:47Z | 26,323,566 | 7 | 2014-10-12T09:13:05Z | [
"python",
"machine-learning",
"nlp",
"nltk",
"summarization"
] | I am planning to do my final year project on **Natural Language Processing** (using NLTK) and my area of interest is **Comment Summarization from Social media websites** such as Facebook. For example, I am trying to do something like this:
Random Facebook comments in a picture :
1. Wow! Beautiful.
2. Looking really b... | Topic model clustering is what you are looking for.
A search on Google Scholar for "topic model clustering" will give you lots of references on topic model clustering.
To understand them, you need to be familiar with approaches for the following tasks, apart from basics of Machine Learning in general.
1. Clustering:... |
OperationalError, no such column. Django | 26,312,219 | 17 | 2014-10-11T07:27:33Z | 26,353,756 | 20 | 2014-10-14T06:09:18Z | [
"python",
"django",
"django-rest-framework"
] | I am very new to django and was able to finish the tutorial on djangoproject.com without any errors. I am now going through the Django REST framework tutorial found at <http://www.django-rest-framework.org/>
I am almost finished with it and just added authentication. Now I am getting :
```
OperationalError at /snippet... | As you went through the tutorial you must have come across the section on migration, as this was one of the major changes in django 1.7
Prior to django 1.7, the syncdb command never made any change that had a chance to destroy data currently in the database. This meant that if you did syncdb for a model, then added a ... |
Find if 24 hrs have passed between datetimes - Python | 26,313,520 | 6 | 2014-10-11T10:11:09Z | 26,313,848 | 9 | 2014-10-11T10:53:05Z | [
"python",
"datetime",
"timezone"
] | I have the following method:
```
# last_updated is a datetime() object, representing the last time this program ran
def time_diff(last_updated):
day_period = last_updated.replace(day=last_updated.day+1, hour=1,
minute=0, second=0,
microsecond=... | If `last_updated` is a naive datetime object representing the time in UTC:
```
from datetime import datetime, timedelta
if (datetime.utcnow() - last_updated) > timedelta(1):
# more than 24 hours passed
```
If `last_updated` is the local time (naive (timezone-unaware) datetime object):
```
import time
DAY = 86... |
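Pulling the UTC variant together into a runnable sketch (the function name is illustrative; `timedelta(1)` and `timedelta(hours=24)` are equivalent):

```python
from datetime import datetime, timedelta

def updated_over_24h_ago(last_updated):
    """Assumes last_updated is a naive datetime expressed in UTC."""
    return datetime.utcnow() - last_updated > timedelta(hours=24)

assert updated_over_24h_ago(datetime.utcnow() - timedelta(hours=25))
assert not updated_over_24h_ago(datetime.utcnow() - timedelta(hours=1))
```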
How to launch and configure an EMR cluster using boto | 26,314,316 | 6 | 2014-10-11T11:50:34Z | 27,768,332 | 15 | 2015-01-04T17:34:31Z | [
"python",
"amazon-web-services",
"boto",
"amazon-emr"
] | I'm trying to launch a cluster and run a job all using boto.
I find lot's of examples of creating job\_flows. But I can't for the life of me, find an example that shows:
1. How to define the cluster to be used (by clusted\_id)
2. How to configure an launch a cluster (for example, If I want to use spot instances for so... | Boto and the underlying EMR API is currently mixing the terms *cluster* and *job flow*, and job flow is being [deprecated](http://docs.aws.amazon.com/ElasticMapReduce/latest/API/API_DescribeJobFlows.html). I consider them synonyms.
You create a new cluster by calling the `boto.emr.connection.run_jobflow()` function. I... |
In heroku python tutorial, virtualenv issues installing wsgiref (ez_setup syntax error?) | 26,315,455 | 3 | 2014-10-11T14:01:28Z | 26,321,863 | 7 | 2014-10-12T04:26:58Z | [
"python",
"heroku",
"virtualenv",
"wsgi"
] | I'm going through the Heroku tutorial "Getting Started with Python." I'm at the step where I want to build my environment locally with virtualenv so I can run the test app locally. The requirements.txt includes
```
wsgiref==0.1.2
```
and upon getting to that step virutalenv outputs the following error message
```
Do... | Heroku supports Python versions [2.4.4 through 3.4.1](https://github.com/heroku/heroku-buildpack-python/tree/master/builds/runtimes).
The tutorial you are using is designed for Python 2.7.8, which is why your application is not working locally. If you were to push the application up to Heroku, it would work properly.
... |
Remote debuging on Pycharm is concatenating local and remote path | 26,316,212 | 4 | 2014-10-11T15:27:30Z | 26,349,764 | 9 | 2014-10-13T22:18:08Z | [
"python",
"raspberry-pi",
"pycharm",
"remote-debugging"
] | I have a problem that I haven't been able to find an answer to.
I'm trying to debug a Python program running on a Raspberry Pi. The source code is located on my PC.
I set up an FTPS deployment and remote interpreter over ssh.
When I just run the app on the remote Raspberry Pi there are no problems.
The problem appe... | From [Run/Debug Configuration: Python](http://www.jetbrains.com/pycharm/webhelp/run-debug-configuration-python.html) section of PyCharm 3.4.0 Web Help:
> Path mappings
> This field appears, if a remote interpreter has been
> selected in the field Python interpreter. Click the browse button
> to define the required... |