title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
AttributeError: can't set attribute in python | 22,562,425 | 12 | 2014-03-21T15:06:27Z | 22,562,689 | 11 | 2014-03-21T15:18:34Z | [
"python",
"attributes"
] | Here is my code
```
N = namedtuple("N", ['ind', 'set', 'v'])
def solve():
items=[]
stack=[]
R = set(range(0,8))
for i in range(0,8):
items.append(N(i,R,8))
stack.append(N(0,R-set(range(0,1)),i))
while(len(stack)>0):
node = stack.pop()
print node
print i... | `namedtuple`s are immutable, just like standard tuples. You have two choices:
1. Use a different data structure, e.g. a class (or just a dictionary); or
2. Instead of updating the structure, replace it.
The former would look like:
```
class N(object):
def __init__(self, ind, set, v):
self.ind = ind
... |
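The second option the answer describes — replacing the tuple rather than mutating it — can be sketched with `namedtuple`'s built-in `_replace()` helper (field names borrowed from the question):

```python
from collections import namedtuple

N = namedtuple("N", ["ind", "set", "v"])
node = N(ind=0, set=frozenset({1, 2}), v=8)

# namedtuples are immutable, so build a modified copy instead of mutating
updated = node._replace(v=9)
print(node.v, updated.v)
```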
Pandas dataset into an array for modelling in Scikit-Learn | 22,562,540 | 8 | 2014-03-21T14:31:31Z | 22,562,541 | 7 | 2014-03-21T14:44:25Z | [
"python",
"pandas",
"scikit-learn"
] | Can we run scikit-learn models on Pandas DataFrames or do we need to convert DataFrames into NumPy arrays? | You can use `pandas.DataFrame` with `sklearn`, for example:
```
import pandas as pd
from sklearn.cluster import KMeans
data = [(0.2, 10),
(0.3, 12),
(0.24, 14),
(0.8, 30),
(0.9, 32),
(0.85, 33.3),
(0.91, 31),
(0.1, 15),
(-0.23, 45)]
p_df = pd.DataFrame(... |
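The reason this works is that scikit-learn estimators effectively coerce their input to a NumPy array. A minimal sketch of that coercion, without requiring scikit-learn itself (the column names are my own):

```python
import numpy as np
import pandas as pd

data = [(0.2, 10), (0.3, 12), (0.24, 14), (0.8, 30)]
p_df = pd.DataFrame(data, columns=["ratio", "count"])

# estimators do roughly this to whatever X you pass in,
# so a DataFrame works wherever a 2-D array does
X = np.asarray(p_df)
print(X.shape)
```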
Why can two functions with the same `id` have different attributes? | 22,566,983 | 16 | 2014-03-21T18:40:11Z | 22,567,094 | 18 | 2014-03-21T18:45:49Z | [
"python",
"function",
"dynamic",
"methods",
"types"
] | Why can two functions with the same `id` value have differing attributes like `__doc__` or `__name__`?
Here's a toy example:
```
some_dict = {}
for i in range(2):
def fun(self, *args):
print i
fun.__doc__ = "I am function {}".format(i)
fun.__name__ = "function_{}".format(i)
some_dict["function... | You are comparing **methods**, and method objects are created anew each time you access one on an instance or class (via the [descriptor protocol](http://docs.python.org/2/howto/descriptor.html)).
Once you tested their `id()` you discard the method again (there are no references to it), so Python is free to reuse the ... |
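A small demonstration of the point (written in Python 3 syntax, though the question is Python 2):

```python
class C:
    def m(self):
        return 42

obj = C()
a = obj.m  # a fresh bound-method object is created on each attribute access
b = obj.m  # ...so these are two distinct wrappers around the same function
print(a is b, a.__func__ is b.__func__)
```

`id(obj.m) == id(obj.m)` can nonetheless be True in CPython, because the first wrapper is garbage-collected before the second is created and its address may be reused.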
python requests file upload | 22,567,306 | 3 | 2014-03-21T18:57:13Z | 22,567,429 | 21 | 2014-03-21T19:04:01Z | [
"python",
"file-upload",
"python-requests"
] | I'm performing a simple task of uploading a file using Python requests library. I searched Stack Overflow and no one seemed to have the same problem, namely, that the file is not received by the server:
```
import requests
url='http://nesssi.cacr.caltech.edu/cgi-bin/getmulticonedb_release2.cgi/post'
files={'files': op... | If `upload_file` is meant to be the file, use:
```
files = {'upload_file': open('file.txt','rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
```
and `requests` will send a multi-part form POST body with the `upload_file` field set to the contents of th... |
Is it possible to delete an instance of a class that automatically removes it from all lists in which it is an element? | 22,568,690 | 4 | 2014-03-21T20:16:05Z | 22,568,845 | 7 | 2014-03-21T20:25:11Z | [
"python",
"list",
"class",
"instance",
"del"
] | I have an instance of a `class` which appears as an element in multiple `list`s. I want to delete the instance and simultaneously remove it from every `list` in which it is an element. Is this possible? | One answer to this is to always allow the objects that you are putting into the lists to manage list membership. For example, rather than saying
```
listA.append(objectA)
```
you would use
```
objectA.addToList(listA)
```
This would allow you to internally store a list of all lists that contain `objectA`. Then, when... |
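A minimal sketch of the idea — the method names here are my own assumptions, not the original answer's full code:

```python
class ListMember:
    """Object that tracks every list it joins, so it can leave them all."""
    def __init__(self):
        self._lists = []

    def add_to_list(self, lst):
        lst.append(self)
        self._lists.append(lst)

    def remove_from_all(self):
        for lst in self._lists:
            while self in lst:
                lst.remove(self)
        self._lists = []

obj = ListMember()
list_a, list_b = [], []
obj.add_to_list(list_a)
obj.add_to_list(list_b)
obj.remove_from_all()
print(list_a, list_b)
```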
The number 0.6 is not converting to a fraction correctly | 22,569,181 | 2 | 2014-03-21T20:43:58Z | 22,569,240 | 9 | 2014-03-21T20:47:16Z | [
"python",
"fractions"
] | I am trying to create a program that factors quadratics. I was mostly successful, but I am having trouble with the quadratic `5x^2 -13x + 6` in which one of the roots is 0.6, or 3/5. I want to write it as a fraction, but it is not working correctly. It is giving me the following:
`5(x - 2)(x - 5404319552844595/9007199... | ```
>>> from fractions import Fraction
>>> Fraction(0.6)
Fraction(5404319552844595, 9007199254740992)
>>> Fraction("0.6")
Fraction(3, 5)
```
`0.6` can't be represented exactly as a binary `float`. See [Floating Point Arithmetic: Issues and Limitations](http://docs.python.org/3/tutorial/floatingpoint.html). This is not... |
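When you are starting from a float rather than a string, `Fraction.limit_denominator()` is a practical way to recover the intended ratio:

```python
from fractions import Fraction

# constructing from the float keeps the binary-representation error...
exact_binary = Fraction(0.6)
# ...while limiting the denominator snaps to the nearest simple fraction
recovered = Fraction(0.6).limit_denominator(1000)
print(exact_binary, recovered)
```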
Python dictionary "plus-equal" behavior | 22,569,981 | 2 | 2014-03-21T21:37:38Z | 22,570,023 | 7 | 2014-03-21T21:40:58Z | [
"python",
"dictionary",
"operators",
"magic-methods",
"mutators"
] | I'm trying to understand the exact mechanism behind updating a python dictionary using `d[key] += diff`. I have some helper classes to trace magic method invocations:
```
class sdict(dict):
def __setitem__(self, *args, **kargs):
print "sdict.__setitem__"
return super(sdict, self).__setitem__(*args,... | In the first example, you didn't apply the `+=` operator to the dictionary. You applied it to the value stored in the `d['a']` key, and that's a different object altogether.
In other words, Python will retrieve `d['m']` (a `__getitem__` call), apply the `+=` operator to that, then set the result of that expression bac... |
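The call sequence can be reproduced with a small tracing subclass (Python 3 syntax; the question's helper is Python 2):

```python
calls = []

class TracingDict(dict):
    def __getitem__(self, key):
        calls.append("__getitem__")
        return super().__getitem__(key)

    def __setitem__(self, key, value):
        calls.append("__setitem__")
        super().__setitem__(key, value)

d = TracingDict(m=1)
d["m"] += 1  # one __getitem__, += on the int value, then one __setitem__
print(calls)
```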
How to store argparse values in variables? | 22,570,407 | 3 | 2014-03-21T22:09:02Z | 22,570,801 | 8 | 2014-03-21T22:42:12Z | [
"python",
"argparse"
] | I am trying to add command line options to my script, using the following code:
```
import argparse
parser = argparse.ArgumentParser('My program')
parser.add_argument('-x', '--one')
parser.add_argument('-y', '--two')
parser.add_argument('-z', '--three')
args = vars(parser.parse_args())
foo = args['one']
bar = args[... | That will work, but you can simplify it a bit like this:
```
args = parser.parse_args()
foo = args.one
bar = args.two
cheese = args.three
``` |
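A runnable version of the same idea, passing an explicit argument list to `parse_args()` so it can be exercised without command-line input:

```python
import argparse

parser = argparse.ArgumentParser("My program")
parser.add_argument("-x", "--one")
parser.add_argument("-y", "--two")
parser.add_argument("-z", "--three")

# an explicit argv list instead of reading sys.argv
args = parser.parse_args(["-x", "1", "-y", "2", "-z", "3"])
foo, bar, cheese = args.one, args.two, args.three
print(foo, bar, cheese)
```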
Installing Python Requests | 22,570,929 | 5 | 2014-03-21T22:54:22Z | 22,571,009 | 7 | 2014-03-21T23:00:52Z | [
"python",
"python-requests"
] | So I'm trying to download requests using pip and am getting the error below. I've checked the error log but it's largely incomprehensible to me.
Any suggestions? I'm getting a similar issue when trying to use pip for beautifulsoup4.
```
~ ❯ pip install requests
Downloading/unpacking requests
Downloading requests-... | You are trying to install the package in '/Library/Python/2.7/site-packages/requests' but it requires root permissions to do so. This should do the trick:
```
$ sudo pip install requests
``` |
Can Django manage.py custom commands return a value? How, or why not? | 22,571,320 | 9 | 2014-03-21T23:31:03Z | 22,571,528 | 7 | 2014-03-21T23:51:21Z | [
"python",
"django",
"django-commands"
] | Following the documentation:
<https://docs.djangoproject.com/en/dev/howto/custom-management-commands/>
I created my own custom command (called something else but example shown below):
```
from django.core.management.base import BaseCommand, CommandError
from polls.models import Poll
class Command(BaseCommand):
a... | If you want to get the output of `call_command()`, you need to capture stdout. Here's how you can do it:
```
out = StringIO()
call_command('call_custom_command', stdout=out)
value = out.getvalue()
print value
```
This technique is actually used in [django tests](https://github.com/django/django/blob/master/tests/use... |
Debugging the error "gcc: error: x86_64-linux-gnu-gcc: No such file or directory" | 22,571,848 | 15 | 2014-03-22T00:30:58Z | 22,696,574 | 33 | 2014-03-27T18:49:49Z | [
"python",
"gcc",
"makefile",
"autotools"
] | I'm trying to build:
<https://github.com/kanzure/nanoengineer>
But it looks like it errors out on:
```
gcc -DHAVE_CONFIG_H -I. -I../.. -I/usr/include/python2.7 -std=c99 x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -MT libstructcompare_a-structcompare.o -... | After a fair amount of work, I was able to get it to build on Ubuntu 12.04 x86 and Debian 7.4 x86\_64. I wrote up a guide below. Can you please try following it to see if it resolves the issue?
If not please let me know where you get stuck.
**Install Common Dependencies**
```
sudo apt-get install build-essential aut... |
Debugging the error "gcc: error: x86_64-linux-gnu-gcc: No such file or directory" | 22,571,848 | 15 | 2014-03-22T00:30:58Z | 25,249,486 | 9 | 2014-08-11T17:55:43Z | [
"python",
"gcc",
"makefile",
"autotools"
] | I'm trying to build:
<https://github.com/kanzure/nanoengineer>
But it looks like it errors out on:
```
gcc -DHAVE_CONFIG_H -I. -I../.. -I/usr/include/python2.7 -std=c99 x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -MT libstructcompare_a-structcompare.o -... | ```
apt-get install python-dev
```
...solved the problem for me. |
why zip object vanished? | 22,572,653 | 3 | 2014-03-22T02:22:39Z | 22,572,679 | 7 | 2014-03-22T02:26:20Z | [
"python",
"python-3.x"
] | Please see the code. Why does `list(w)` display properly, while `h` displays nothing?
```
>>> x=[1,2,3]
>>> y=[4,5,6]
>>> w=zip(x,y)
>>> list(w)
[(1, 4), (2, 5), (3, 6)]
>>> h=list(w)
>>> h
[]
``` | *In Python 3*, [`zip`](http://docs.python.org/3/library/functions.html#zip) returns an [iterator](http://docs.python.org/3/library/stdtypes.html#iterator-types)1.
> Make an *iterator* that aggregates elements from each of the iterables.
An iterator remembers how far it has been iterated; at the `h=list(w)` line, the... |
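The exhaustion is easy to reproduce:

```python
w = zip([1, 2, 3], [4, 5, 6])
first = list(w)   # consumes the iterator completely
second = list(w)  # nothing left to yield
print(first, second)
```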
Why is numpy.dtype('float64') special? | 22,575,050 | 8 | 2014-03-22T07:59:29Z | 22,575,236 | 7 | 2014-03-22T08:25:19Z | [
"python",
"numpy"
] | Can someone explain the logic behind the output of the following script?
```
import numpy
if(numpy.dtype(numpy.float64) == None):
print "Surprise!!!!"
```
Thanks :) | Looks like an unfortunate accident: someone decided that `dtype(None)` would "default" to float (though `dtype()` is an error). Then someone else wrote `dtype.__eq__` such that it converts its second argument to a dtype before comparing. So `dtype(float) == None` is `dtype(float) == dtype(None)` which is true.
You can... |
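The `dtype(None)` "default" the answer mentions can be checked directly:

```python
import numpy as np

# dtype(None) silently defaults to float64, which is why
# dtype(float) == None ends up comparing equal
print(np.dtype(None))
```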
Is `type` really a function, or not? | 22,575,391 | 3 | 2014-03-22T08:44:57Z | 22,575,427 | 10 | 2014-03-22T08:48:49Z | [
"python",
"function",
"object",
"python-3.x",
"types"
] | First, I'm sorry if I'm asking something dumb, because I'm new to Python...
I was reading <http://docs.python.org/3.1/reference/datamodel.html#objects-values-and-types> and saw that phrase:
> The type() function returns an object's type (which is an object itself)
Of course, I decided to check this:
```
>>> def ... | Yes, `type` is a function, but it is implemented in C.
It also **has** to be its own type, otherwise you could not do:
```
>>> def foo(): pass
...
>>> type(foo)
<type 'function'>
>>> type(type)
<type 'type'>
>>> isinstance(type(foo), type)
True
```
e.g. you could not test if a type is a type, if `type`'s type was ... |
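The self-referential nature of `type` can be verified directly:

```python
def foo():
    pass

# type is its own type: the metaclass of type is type itself
print(type(type) is type, isinstance(type(foo), type))
```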
Theano: Get matrix dimension and value of matrix (SharedVariable) | 22,579,246 | 6 | 2014-03-22T15:00:41Z | 22,612,374 | 13 | 2014-03-24T14:33:38Z | [
"python",
"theano"
] | I would like to know how to retrieve the dimension of a SharedVariable from theano.
This here e.g. does not work:
```
from theano import *
from numpy import *
import numpy as np
w = shared( np.asarray(zeros((1000,1000)), np.float32) )
print np.asarray(w).shape
print np.asmatrix(w).shape
```
and only returns
```
... | You can get the value of a shared variable like this:
```
w.get_value()
```
Then this would work:
```
w.get_value().shape
```
But this will copy the shared variable content. To remove the copy you can use the borrow parameter like this:
```
w.get_value(borrow=True).shape
```
But if the shared variable is on the G... |
Python: finding the intersection point of two gaussian curves | 22,579,434 | 4 | 2014-03-22T15:15:16Z | 22,579,904 | 9 | 2014-03-22T15:55:10Z | [
"python",
"gaussian"
] | I have two gaussian plots:
```
x = np.linspace(-5,9,10000)
plot1=plt.plot(x,mlab.normpdf(x,2.5,1))
plot2=plt.plot(x,mlab.normpdf(x,5,1))
```
and I want to find the point at where the two curves intersect. Is there a way of doing this? In particular I want to find the value of the x-coordinate where they meet. | You want to find the x's such that both gaussian functions have the same height.(i.e intersect)
You can do so by equating two gaussian functions and solve for x. In the end you will get a quadratic equation with coefficients relating to the gaussian means and variances. Here is the final result:
```
import numpy as n... |
python-numpy: Apply a function to each row of a ndarray | 22,581,763 | 8 | 2014-03-22T18:29:34Z | 22,581,833 | 14 | 2014-03-22T18:36:26Z | [
"python",
"arrays",
"numpy",
"vectorization"
] | I have this function to calculate squared Mahalanobis distance of vector x to mean:
```
def mahalanobis_sqdist(x, mean, Sigma):
'''
Calculates squared Mahalanobis Distance of vector x
to distibutions' mean
'''
Sigma_inv = np.linalg.inv(Sigma)
xdiff = x - mean
sqmdist = np.dot(np.dot(xdiff, Sig... | To apply a function to each row of an array, you could use:
```
np.apply_along_axis(mahalanobis_sqdist, 1, d1, mean1, Sig1)
```
In this case, however, there is a better way. You don't have to apply a function to each row. Instead, you can apply NumPy operations to the entire `d1` array to calculate the same result. [... |
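A sketch of the vectorized alternative the answer hints at — computing all row distances with one quadratic-form expression instead of a per-row function call (zero mean and identity covariance used purely as a check):

```python
import numpy as np

def mahalanobis_sqdist_all(X, mean, Sigma):
    """Squared Mahalanobis distance of every row of X at once."""
    Sigma_inv = np.linalg.inv(Sigma)
    diff = X - mean
    # einsum evaluates diff_i @ Sigma_inv @ diff_i for each row i
    return np.einsum("ij,jk,ik->i", diff, Sigma_inv, diff)

X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
dists = mahalanobis_sqdist_all(X, np.zeros(2), np.eye(2))
print(dists)
```

With an identity covariance the result is just the squared norm of each row, which makes the output easy to verify by hand.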
Why does classifier.predict() method expects the number of features in the test data to be the same as in training data? | 22,581,838 | 2 | 2014-03-22T18:36:52Z | 22,591,911 | 9 | 2014-03-23T14:04:37Z | [
"python",
"machine-learning",
"scikit-learn",
"svm"
] | I am trying to build a simple SVM document classifier using scikit-learn and I am using the following code :
```
import os
import numpy as np
import scipy.sparse as sp
from sklearn.metrics import accuracy_score
from sklearn import svm
from sklearn.metrics import classification_report
from sklearn.feature_extract... | To ensure that you have the same feature representation, you should not fit\_transform your test data, but only transform it.
```
x_train=vectorizer.fit_transform(f1)
x_test=vectorizer.transform(data2)
```
A similar transformation into homogeneous features should be applied to your labels. |
Django ImageField default | 22,581,877 | 9 | 2014-03-22T18:40:06Z | 22,582,430 | 10 | 2014-03-22T19:27:08Z | [
"python",
"django",
"django-models",
"imagefield",
"django-imagekit"
] | **models.py:**
```
class UserProfile(models.Model):
photo = models.ImageField(upload_to = get_upload_file_name,
storage = OverwriteStorage(),
default = os.path.join(settings.STATIC_ROOT,'images','generic_profile_photo.jpg'),
... | If you don't define the default attribute, does image uploading work successfully? When I implemented an ImageField in my own django project, I didn't use the default attribute. Instead I wrote this method to get the path to the default image:
```
def image_url(self):
"""
Returns the URL of the image associated with t... |
Python Anaconda - How to Safely Uninstall | 22,585,235 | 61 | 2014-03-22T23:51:54Z | 22,585,265 | 66 | 2014-03-22T23:55:31Z | [
"python",
"anaconda"
] | I installed Python Anaconda on Mac (OS Mavericks). I wanted to revert to the default version of Python on my Mac. What's the best way to do this? Should I delete the ~/anaconda directory? Any other changes required?
Currently when I run which python I get this path:
/Users/username/anaconda/bin/python | The anaconda installer adds a line in your `~/.bash_profile` script that prepends the anaconda bin directory to your `$PATH` environment variable. Deleting the anaconda directory should be all you need to do, but it's good housekeeping to remove this line from your setup script too. |
Python Anaconda - How to Safely Uninstall | 22,585,235 | 61 | 2014-03-22T23:51:54Z | 37,377,981 | 10 | 2016-05-22T18:25:08Z | [
"python",
"anaconda"
] | I installed Python Anaconda on Mac (OS Mavericks). I wanted to revert to the default version of Python on my Mac. What's the best way to do this? Should I delete the ~/anaconda directory? Any other changes required?
Currently when I run which python I get this path:
/Users/username/anaconda/bin/python | Removing the Anaconda directory helps, but I don't think that's a good idea as you might need to use anaconda sometimes in near future. So, as suggested by [mwaskom](http://stackoverflow.com/users/1533576/mwaskom), anaconda installer automatically adds PATH variable which points to `anaconda/bin` directory in the `~/.b... |
Using twisted on OS X Mavericks | 22,585,393 | 3 | 2014-03-23T00:12:59Z | 22,585,567 | 9 | 2014-03-23T00:36:25Z | [
"python",
"osx",
"twisted"
] | I am trying to use twisted on OS X Mavericks, but I get this error message when I try to import it.
```
christohersmbp2:~ christopherspears$ python
Python 2.7.6 (default, Mar 8 2014, 09:29:01)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin
Type "help", "copyright", "credits" or "license... | If you're using `virtualenv`, it doesn't matter whether you are using the system python or not.
Simply `pip install twisted` in your virtualenv, like:
```
$ workon MyTwistedEnv
$ pip install twisted
```
Currently, [due to a bug in Xcode that affects all projects which contain extension modules](https://twistedmatrix... |
Python: Is there an equivalent of mid, right, and left from BASIC? | 22,586,286 | 9 | 2014-03-23T02:20:52Z | 22,586,341 | 14 | 2014-03-23T02:29:52Z | [
"python",
"basic"
] | I want to do something like this:
```
>>> mystring = "foo"
>>> print(mid(mystring))
```
Help! | slices to the rescue :)
```
def left(s, amount):
return s[:amount]
def right(s, amount):
return s[-amount:]
def mid(s, offset, amount):
return s[offset:offset+amount]
``` |
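One caveat with the slice-based `right()` above: since `-0 == 0`, `right(s, 0)` returns the whole string rather than an empty one. A guarded variant (my addition, not part of the original answer):

```python
def right(s, amount):
    # s[-amount:] is the whole string when amount == 0, so guard that case
    return s[-amount:] if amount else ""

print(right("hello", 3), repr(right("hello", 0)))
```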
Best way of using google translation by Python | 22,587,001 | 6 | 2014-03-23T04:14:49Z | 26,085,878 | 9 | 2014-09-28T14:28:34Z | [
"python",
"translation"
] | I am trying to translate large number of text files from English to other several languages. And we use Python in our project, we try to use Google translation service to translate them first then we will correct the mistakes manually.
I have come up with two ways to translate:
1. Use Python Google translation API. H... | I made my own google translate function for python ;)
try it <https://github.com/mouuff/Google-Translate-API> |
How to use full_clean() for data validation before saving in Django 1.5 gracefully? | 22,587,019 | 6 | 2014-03-23T04:18:00Z | 23,684,110 | 8 | 2014-05-15T16:40:10Z | [
"python",
"django",
"validation",
"model"
] | I think Django's model validation is a little inconvenient for those models that don't use built-in ModelForm, though not knowing why.
Firstly, `full_clean()` needs called manually.
> Note that full\_clean() will not be called automatically when you call
> your model's save() method, nor as a result of ModelForm va... | Even though the idea of enforcing validation on Model level seems right, Django does not do this by default for various reasons. Except for some backward-compatibility problems, the authors probably don't want to support this because they fear this could create a false feeling of safety when in fact your data are not g... |
if python assignments don't return a value how can we do a = b = c = 42 | 22,587,239 | 4 | 2014-03-23T04:54:17Z | 22,587,255 | 8 | 2014-03-23T04:56:18Z | [
"python"
] | After looking at these two questions,
* [Why does Python assignment not return a value?](http://stackoverflow.com/questions/4869770/why-does-python-assignment-not-return-a-value)
* [Python assigning multiple variables to same list value?](http://stackoverflow.com/questions/10857654/python-assigning-multiple-variables-... | Because of [a special exception in the syntax](http://docs.python.org/3.4/reference/simple_stmts.html#assignment-statements), carved out for that exact use case. See the BNF:
```
assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)
```
Note the `(target_list "=")+`. |
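Note that chained assignment binds the *same* object to every target, which matters for mutables:

```python
a = b = c = 42          # one statement, three targets
lst1 = lst2 = []        # both names refer to the SAME list object
lst1.append(1)
print(a, b, c, lst2)
```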
Tracking white color using python opencv | 22,588,146 | 6 | 2014-03-23T07:23:39Z | 22,588,395 | 8 | 2014-03-23T08:01:09Z | [
"python",
"opencv",
"computer-vision",
"hsv",
"color-tracking"
] | I would like to track white color using webcam and python opencv. I already have the code to track blue color.
```
_, frame = cap.read()
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# define range of blue color in HSV
lower_blue = np.array([110,100,100])
upper_blue = np.array([130,255,255])
#How to define this range... | Let's take a look at HSV color space:

You need white, which is close to the center and rather high. Start with
```
sensitivity = 15
lower_white = np.array([0,0,255-sensitivity])
upper_white = np.array([255,sensitivity,255])
```
and then adjust the ... |
pandas applying regex to replace values | 22,588,316 | 11 | 2014-03-23T07:48:50Z | 22,588,340 | 8 | 2014-03-23T07:51:48Z | [
"python",
"regex",
"pandas"
] | I have read some pricing data into a pandas dataframe the values appear as:
```
$40,000*
$40000 conditions attached
```
I want to strip it down to just the numeric values.
I know I can loop through and apply regex
```
[0-9]+
```
to each field and then join the resulting list back together, but is there a non-loopy way?
... | You could remove all the non-digits using `re.sub()`:
```
value = re.sub(r"[^0-9]+", "", value)
```
[regex101 demo](http://regex101.com/r/yS7lG7) |
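Applied to the question's sample values:

```python
import re

samples = ["$40,000*", "$40000 conditions attached"]
cleaned = [re.sub(r"[^0-9]+", "", value) for value in samples]
print(cleaned)
```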
pandas applying regex to replace values | 22,588,316 | 11 | 2014-03-23T07:48:50Z | 22,591,024 | 30 | 2014-03-23T12:39:48Z | [
"python",
"regex",
"pandas"
] | I have read some pricing data into a pandas dataframe the values appear as:
```
$40,000*
$40000 conditions attached
```
I want to strip it down to just the numeric values.
I know I can loop through and apply regex
```
[0-9]+
```
to each field and then join the resulting list back together, but is there a non-loopy way?
... | You could use [`Series.str.replace`](http://pandas.pydata.org/pandas-docs/stable/basics.html#vectorized-string-methods):
```
import pandas as pd
df = pd.DataFrame(['$40,000*','$40000 conditions attached'], columns=['P'])
print(df)
# P
# 0 $40,000*
# 1 $40000 conditions ... |
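A completed sketch of the vectorized version (the explicit `regex=True` keyword is needed on modern pandas):

```python
import pandas as pd

df = pd.DataFrame(["$40,000*", "$40000 conditions attached"], columns=["P"])
df["P"] = df["P"].str.replace(r"[^0-9]+", "", regex=True)
print(df["P"].tolist())
```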
Call python code from c via cython | 22,589,868 | 8 | 2014-03-23T10:51:26Z | 22,593,900 | 7 | 2014-03-23T16:52:31Z | [
"python",
"c",
"cython"
] | So I'd like to call some python code from c via cython. I've managed to call cython code from c. And I can also call python code from cython. But when I add it all together, some things are missing.
Here is my python code (`quacker.pyx`):
```
def quack():
print "Quack!"
```
Here is my cython "bridge" (`caller.py... | If you rename the `quacker.pyx` to `quacker.py`, everything is actually correct. The only problem is that your program won't search for python modules in the current directory, resulting in the output:
```
Exception NameError: "name 'quack' is not defined" in 'caller.call_quack' ignored
```
If you add the current dir... |
pandas: multiple conditions while indexing data frame - unexpected behavior | 22,591,174 | 9 | 2014-03-23T12:52:59Z | 22,591,267 | 24 | 2014-03-23T13:02:05Z | [
"python",
"pandas",
"boolean-logic"
] | I am filtering rows in a dataframe by values in two columns.
For some reason the OR operator behaves like I would expect AND operator to behave and vice versa.
My test code:
```
import pandas as pd
df = pd.DataFrame({'a': range(5), 'b': range(5) })
# let's insert some -1 values
df['a'][1] = -1
df['b'][1] = -1
df['... | > As you can see, the AND operator drops every row in which at least one
> value equals -1. On the other hand, the OR operator requires both
> values to be equal to -1 to drop them.
That's right. Remember that you're writing the condition in terms of what you want to *keep*, not in terms of what you want to drop. For ... |
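A small reproduction of the keep-vs-drop distinction (my toy data):

```python
import pandas as pd

df = pd.DataFrame({"a": [0, -1, 2], "b": [0, -1, -1]})
keep_both_valid = df[(df["a"] != -1) & (df["b"] != -1)]  # drops any row with a -1
keep_any_valid = df[(df["a"] != -1) | (df["b"] != -1)]   # drops only all-(-1) rows
print(len(keep_both_valid), len(keep_any_valid))
```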
Save Contents of QTextEdit as *.pdf? | 22,591,865 | 3 | 2014-03-23T14:01:20Z | 22,596,108 | 7 | 2014-03-23T19:42:36Z | [
"python",
"python-2.7",
"pyqt",
"reportlab",
"qtextedit"
] | I am trying to save the contents of a Text Editor as a pdf file. The text Editor has been made using PyQt (i didn't make the text Editor), i got the code of the text editor from [here](http://thecodeinn.blogspot.in/2013/07/fully-functional-pyqt-text-editor.html). I have done some changes to the editor but that wont be ... | The source code for the Text Editor already has a `PDF` method, but it is unused, and possibly won't work properly as it stands.
A basic re-write of the method that should work on all platforms, would look like this:
```
def SavetoPDF(self):
filename = QtGui.QFileDialog.getSaveFileName(self, 'Save to PDF')
if... |
Compiling Python 3.4 is not copying pip | 22,592,686 | 29 | 2014-03-23T15:12:38Z | 22,594,608 | 40 | 2014-03-23T17:46:38Z | [
"python",
"compilation",
"pip",
"python-3.4"
] | I have compiled Python 3.4 from the sources on Linux Mint, but for some reason it is not copying `pip` to its final compiled folder (after the `make install`).
Any ideas? | Just sorted it out. Here it is how to compile python from the sources.
```
$ ./configure --prefix=/home/user/sources/compiled/python3.4_dev --with-ensurepip=install
$ make
$ make install
```
If you get "Ignoring ensurepip failure: pip 1.5.4 requires SSL/TLS" error:
```
$ sudo apt-get install libssl-dev openssl
$ ls... |
Upgrade path for re-usable apps with South AND django 1.7 migrations | 22,597,240 | 8 | 2014-03-23T21:22:52Z | 25,614,320 | 7 | 2014-09-02T00:33:35Z | [
"python",
"django",
"django-models",
"django-south"
] | Or: can Django 1.7 users still use South?
I'm the maintainer of a re-usable app. Our policy is to always support the latest two versions of Django. We have an extensive set of South migrations and, we want to support the new Django 1.7 migration system going forward.
What I'm confused with is how I can allow develope... | **South 1.0** provides the solution. It will look first in a `south_migrations/` folder and fallback to `migrations/`. So in your case of third-party libraries needing to support older and newer Djangos: move South files to `south_migration/` and create new 1.7 migrations in `migrations/`.
* South 1.0 [release notes](... |
Why do 'and' & 'or' return operands in Python? | 22,598,547 | 10 | 2014-03-23T23:31:20Z | 22,598,675 | 9 | 2014-03-23T23:43:48Z | [
"python",
"boolean",
"operands"
] | I'm going through the LPTHW and I came across something I cannot understand. When will it ever be the case that you want your boolean `and` or `or` to return something other than the boolean? The LPTHW text states that all languages like python have this behavior. Would he mean interpreted vs. compiled languages or duc... | I think you're somehow confused about what the docs says. Take a look at these two docs sections: [Truth Value Testing and Boolean Operators](http://docs.python.org/2/library/stdtypes.html?highlight=short%20circuit#truth-value-testing). To quote the last paragraph on the fist section:
> Operations and built-in functio... |
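The operand-returning behavior is what enables common Python idioms like defaulting:

```python
# `or` returns the first truthy operand (or the last one);
# `and` returns the first falsy operand (or the last one)
result_or = 0 or "" or "fallback"
result_and = 1 and "x"
name = ""
display = name or "anonymous"  # classic defaulting idiom
print(result_or, result_and, display)
```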
How do I create a bar chart in python ggplot? | 22,599,521 | 6 | 2014-03-24T01:18:56Z | 22,600,411 | 12 | 2014-03-24T03:05:58Z | [
"python",
"pandas",
"python-ggplot"
] | I'm using yhat's [ggplot library](https://github.com/yhat/ggplot/). I have the following pandas DataFrame:
```
degree observed percent observed expected percent expected
0 0 0 0.0 0 0.044551
1 1 1 0.1 1 0.138604
2 2 ... | Use `weight`, here is an example:
```
from ggplot import *
import pandas as pd
df = pd.DataFrame({"x":[1,2,3,4], "y":[1,3,4,2]})
ggplot(aes(x="x", weight="y"), df) + geom_bar()
```
the output looks like:
 |
json.loads() is returning a unicode object instead of a dictionary | 22,600,128 | 3 | 2014-03-24T02:32:54Z | 22,600,161 | 9 | 2014-03-24T02:36:19Z | [
"python",
"json",
"python-2.7",
"unicode",
"fabric"
] | I'm reading in json from a file on a remote server using fabric:
```
from StringIO import StringIO
output = StringIO()
get(file_name, output)
output = output.getvalue()
```
The value of `output` is now:
`'"{\\n \\"status\\": \\"failed\\", \\n \\"reason\\": \\"Record already
exists.\\"\\n}"'`
When I try to parse t... | The solution here would be to figure out why your file is being doubly JSON encoded in the first place, but given that data passing it through `json.loads` twice is the right approach. |
Is __init__ always required? | 22,601,253 | 4 | 2014-03-24T04:43:56Z | 22,601,327 | 7 | 2014-03-24T04:52:56Z | [
"python",
"class",
"init",
"self"
] | Okay. So I saw someone using this code, and I understand it, I so I'm going to use it.
Is it necessary to have `__init__`?
```
class A(object):
def __init__(self):
self.x = 'Hello'
def method_a(self, foo):
print self.x + ' ' + foo
a = A()
a.method_a('Sailor!')
```
Couldn't I just do somethi... | What you're asking doesn't have much to do with `__init__`. You can do what you say in your second example, but it doesn't do the same thing as the first example.
In the first example, it prints "Hello" followed by an argument that you pass to the method. You can make it print something besides "sailor" by passing som... |
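To answer the literal question: `__init__` is optional. Without it, `object.__init__` is inherited and instances simply start with no instance attributes (a sketch in Python 3 syntax):

```python
class A:
    def method_a(self, foo):
        return "Hello " + foo

a = A()  # works fine without __init__
print(a.method_a("Sailor!"))
```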
How to create a Pandas DataFrame from String | 22,604,564 | 46 | 2014-03-24T08:43:29Z | 22,605,281 | 75 | 2014-03-24T09:21:27Z | [
"python",
"csv",
"pandas"
] | In order to test some functionality I would like to create a `DataFrame` from a string. Let's say my testdata looks like:
```
TESTDATA="""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""
```
What is the simplest way to read that data into a Pandas `DataFrame`? | Simple way to do this was to use [`StringIO`](https://docs.python.org/2/library/io.html#io.StringIO) and pass that to the [`pandas.read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv) function. E.g:
```
import sys
if sys.version_info[0] < 3:
from StringIO import S... |
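A completed, Python 3 version of that snippet using the question's test data:

```python
from io import StringIO

import pandas as pd

TESTDATA = """col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""

df = pd.read_csv(StringIO(TESTDATA), sep=";")
print(df.shape)
```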
Start, End and Duration of Maximum Drawdown in Python | 22,607,324 | 5 | 2014-03-24T10:53:06Z | 22,607,546 | 16 | 2014-03-24T11:02:25Z | [
"python",
"numpy",
"time-series",
"algorithmic-trading"
] | Given a time series, I want to calculate the maximum drawdown, and I also want to locate the beginning and end points of the maximum drawdown so I can calculate the duration. I want to mark the beginning and end of the drawdown on a plot of the timeseries like this:

S... | Just find out where the running maximum minus the current value is largest:
```
n = 1000
xs = np.random.randn(n).cumsum()
i = np.argmax(np.maximum.accumulate(xs) - xs) # end of the period
j = np.argmax(xs[:i]) # start of period
plt.plot(xs)
plt.plot([i, j], [xs[i], xs[j]], 'o', color='Red', markersize=10)
``` |
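The answer's index arithmetic can be checked on a small concrete series without plotting (my toy data, not the answer's random walk):

```python
import numpy as np

xs = np.array([1.0, 3.0, 2.0, 5.0, 1.0, 4.0])
i = int(np.argmax(np.maximum.accumulate(xs) - xs))  # trough: end of drawdown
j = int(np.argmax(xs[:i]))                          # peak before the trough
print(j, i, xs[j] - xs[i])
```

Note that if `i == 0` the series never declines and `xs[:i]` is empty, so real code should guard that case.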
Should I use `app.exec()` or `app.exec_()` in my PyQt application? | 22,610,720 | 4 | 2014-03-24T13:24:11Z | 22,614,643 | 13 | 2014-03-24T16:08:20Z | [
"python",
"qt",
"python-3.x",
"pyqt",
"pyqt5"
] | I use Python 3 and PyQt5. Here's my test PyQt5 program, focus on the last 2 lines:
```
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
import sys
class window(QWidget):
def __init__(self,parent=None):
super().__init__(parent)
self.setWindowTitle('test')
self.resize(250,200)
app=QApplication(sys.... | That's because until Python 3, `exec` [was a reserved keyword](http://docs.python.org/2.7/reference/lexical_analysis.html#keywords), so the PyQt devs added underscore to it. From Python 3 onwards, `exec` is [no longer a reserved keyword](http://docs.python.org/3/reference/lexical_analysis.html#keywords) (because it is ... |
Python select ith element in OrderedDict | 22,610,896 | 2 | 2014-03-24T13:32:45Z | 22,610,981 | 7 | 2014-03-24T13:35:50Z | [
"python",
"python-3.x",
"ordereddictionary"
] | I have a snippet of code which orders a dictionary alphabetically.
Is there a way to select the ith key in the ordered dictionary and return its corresponding value? i.e.
```
import collections
initial = dict(a=1, b=2, c=2, d=1, e=3)
ordered_dict = collections.OrderedDict(sorted(initial.items(), key=lambda t: t[0]))
p... | Using [`itertools.islice`](http://docs.python.org/3/library/itertools.html#itertools.islice) is efficient here, because we don't have to create any intermediate lists, for the sake of subscripting.
```
from itertools import islice
print(next(islice(ordered_dict.items(), 2, None)))
```
If you want just the value, you ... |
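A runnable sketch of the `islice` approach, using the question's own dictionary:

```python
from collections import OrderedDict
from itertools import islice

initial = dict(a=1, b=2, c=2, d=1, e=3)
ordered_dict = OrderedDict(sorted(initial.items(), key=lambda t: t[0]))

# i-th (here: index 2) key/value pair, with no intermediate list built
key, value = next(islice(ordered_dict.items(), 2, None))
assert (key, value) == ("c", 2)

# Just the value
assert next(islice(ordered_dict.values(), 2, None)) == 2
```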
How to access the real value of a cell using the openpyxl module for python | 22,613,272 | 17 | 2014-03-24T15:11:47Z | 22,804,454 | 14 | 2014-04-02T07:31:53Z | [
"python",
"cell",
"openpyxl"
] | I am having real trouble with this, since the cell.value function returns the formula used for the cell, and I need to extract the result Excel provides after operating.
Thank you.
---
Ok, I think I have found a way around it; apparently to access cell.internal value you have to use the iter\_rows() in your workshee... | From the code it looks like you're using the optimised reader: `read_only=True`. You can switch between extracting the formula and its result by using the `data_only=True` flag when opening the workbook.
`internal_value` was a private attribute that used to refer only to the (untyped) value that Excel uses, ie. number... |
How to access the real value of a cell using the openpyxl module for python | 22,613,272 | 17 | 2014-03-24T15:11:47Z | 28,100,021 | 28 | 2015-01-22T22:43:59Z | [
"python",
"cell",
"openpyxl"
] | I am having real trouble with this, since the cell.value function returns the formula used for the cell, and I need to extract the result Excel provides after operating.
Thank you.
---
Ok, I think I have found a way around it; apparently to access cell.internal value you have to use the iter\_rows() in your workshee... | Like Charlie Clark already suggested, you can set `data_only` to `True` when you load your workbook:
```
from openpyxl import load_workbook
wb = load_workbook("file.xlsx", data_only=True)
sh = wb["Sheet_name"]
print(sh["x10"].value)
```
Good luck :) |
Python convert pairs list to dictionary | 22,614,980 | 10 | 2014-03-24T16:21:44Z | 22,614,996 | 22 | 2014-03-24T16:22:28Z | [
"python",
"list",
"dictionary"
] | I have a list of about 50 strings with an integer representing how frequently they occur in a text document. I have already formatted it like shown below, and am trying to create a dictionary of this information, with the first word being the value and the key is the number beside it.
```
string = [('limited', 1), ('a... | Like this, Python's [`dict()`](http://docs.python.org/3.4/library/functions.html#func-dict) function is perfectly designed for converting a `list` of `tuple`s, which is what you have:
```
>>> string = [('limited', 1), ('all', 16), ('concept', 1), ('secondly', 1)]
>>> my_dict = dict(string)
>>> my_dict
{'all': 16, 'sec... |
Python convert pairs list to dictionary | 22,614,980 | 10 | 2014-03-24T16:21:44Z | 22,614,997 | 9 | 2014-03-24T16:22:39Z | [
"python",
"list",
"dictionary"
] | I have a list of about 50 strings with an integer representing how frequently they occur in a text document. I have already formatted it like shown below, and am trying to create a dictionary of this information, with the first word being the value and the key is the number beside it.
```
string = [('limited', 1), ('a... | Just call [`dict()`](http://docs.python.org/2/library/stdtypes.html#dict):
```
>>> string = [('limited', 1), ('all', 16), ('concept', 1), ('secondly', 1)]
>>> dict(string)
{'limited': 1, 'all': 16, 'concept': 1, 'secondly': 1}
``` |
Use cases for property vs. descriptor vs. __getattribute__ | 22,616,559 | 6 | 2014-03-24T17:34:08Z | 22,617,259 | 7 | 2014-03-24T18:07:51Z | [
"python",
"properties",
"descriptor",
"getattribute"
] | The question refers to **which one is preferable** to be used in which use case, not about the technical background.
In python, you can control the access of attributes via a **property**, a **descriptor**, or **magic methods**. Which one is most pythonic in which use case? All of them seem to have the same effect (se... | Basically, use the simplest one you can. Roughly speaking, the order of complexity/heavy-duty-ness goes: regular attribute, `property`, `__getattr__`, `__getattribute__`/descriptor. (`__getattribute__` and custom descriptors are both things you probably won't need to do very often.) This leads to some simple rules of t... |
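A minimal sketch of the "use the simplest one you can" rule: a plain attribute upgraded to a `property` without callers changing, which covers most validation/computed-attribute cases before `__getattr__`, `__getattribute__`, or a custom descriptor is needed (the `Reading` class and its validation rule are hypothetical):

```python
class Reading(object):
    """Plain attribute upgraded to a property -- callers don't change."""

    def __init__(self, celsius=0.0):
        self.celsius = celsius          # goes through the setter below

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = float(value)

r = Reading(20)
assert r.celsius == 20.0
```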
IOError: [Errno 22] invalid mode ('wb') or filename: | 22,620,965 | 7 | 2014-03-24T21:28:52Z | 22,620,985 | 8 | 2014-03-24T21:29:53Z | [
"python",
"file-io",
"ioerror"
] | I keep getting the following error.
```
IOError: [Errno 22] invalid mode ('wb') or filename: 'C:\\Users\\Viral Patel\\Documents\\GitHub\\3DPhotovoltaics\\Data_Output\\Simulation_Data\\Raw_Data\\Raw_Simulation_Data_2014-03-24 17:21:20.545000.csv'
```
I think it is due to the timestamp at the end of the filename. Any i... | You cannot use `:` in Windows filenames, see [Naming Files, Paths, and Namespaces](http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247%28v=vs.85%29.aspx) ; it is one of the reserved characters:
> * The following reserved characters:
>
> + `<` (less than)
> + `>` (greater than)
> + `:` (colon)
> + ... |
Creating new binary columns from single string column in pandas | 22,621,716 | 7 | 2014-03-24T22:14:32Z | 22,621,979 | 8 | 2014-03-24T22:33:40Z | [
"python",
"pandas"
] | I've seen this before and simply can't remember the function.
Say I have a column "Speed" and each row has 1 of these values:
```
'Slow', 'Normal', 'Fast'
```
How do I create a new dataframe with all my rows except the column "Speed" which is now 3 columns: "Slow" "Normal" and "Fast" which has all of my rows labeled... | You can do this easily with `pd.get_dummies` ([docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html)):
```
In [37]: df = pd.DataFrame(['Slow', 'Normal', 'Fast', 'Slow'], columns=['Speed'])
In [38]: df
Out[38]:
Speed
0 Slow
1 Normal
2 Fast
3 Slow
In [39]: pd.get_dummies(df... |
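The `pd.get_dummies` output above is truncated; a dependency-free sketch of the same one-hot idea (this is an illustration of what the pandas call produces, not its implementation):

```python
rows = ["Slow", "Normal", "Fast", "Slow"]
categories = sorted(set(rows))            # ['Fast', 'Normal', 'Slow']

# One row of 0/1 indicator columns per input value
dummies = [{c: int(v == c) for c in categories} for v in rows]

assert dummies[0] == {"Fast": 0, "Normal": 0, "Slow": 1}
assert dummies[2] == {"Fast": 1, "Normal": 0, "Slow": 0}
```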
Difference between import numpy and import numpy as np | 22,622,571 | 19 | 2014-03-24T23:19:42Z | 22,622,708 | 14 | 2014-03-24T23:33:20Z | [
"python",
"numpy"
] | I understand that when possible one should use
```
import numpy as np
```
This helps keep away any conflict due to namespaces. But I have noticed that while the command below works
```
import numpy.f2py as myf2py
```
the following does not
```
import numpy as np
np.f2py #throws no module named f2py
```
Can someon... | **numpy** is the top package name, and doing `import numpy` doesn't import submodule `numpy.f2py`.
When you do `import numpy` it creates a link that points to `numpy`, but `numpy` is not further linked to `f2py`. The link is established when you do `import numpy.f2py`
In your above code:
```
import numpy as np # np i... |
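The same package-vs-submodule behavior can be seen with a stdlib package, used here as an analogue for `numpy`/`numpy.f2py` (whether `xml.dom` is already loaded can depend on what the process imported earlier, so the intermediate check is informational only):

```python
import xml   # binds the package, but does not import every submodule

try:
    xml.dom                       # typically not an attribute yet
    loaded_eagerly = True
except AttributeError:
    loaded_eagerly = False

import xml.dom                    # now the submodule is loaded and bound
assert xml.dom.__name__ == "xml.dom"
```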
Difference between import numpy and import numpy as np | 22,622,571 | 19 | 2014-03-24T23:19:42Z | 22,623,539 | 7 | 2014-03-25T00:59:23Z | [
"python",
"numpy"
] | I understand that when possible one should use
```
import numpy as np
```
This helps keep away any conflict due to namespaces. But I have noticed that while the command below works
```
import numpy.f2py as myf2py
```
the following does not
```
import numpy as np
np.f2py #throws no module named f2py
```
Can someon... | The `import as` syntax was introduced in [PEP 221](http://legacy.python.org/dev/peps/pep-0221/) and is well documented there.
When you import a module via
```
import numpy
```
the numpy package is bound to the local variable `numpy`. The `import as` syntax simply allows you to bind the import to the local variable n... |
Run code before and after each test in py.test? | 22,627,659 | 9 | 2014-03-25T07:09:51Z | 22,638,709 | 9 | 2014-03-25T15:12:39Z | [
"python",
"py.test"
] | I want to run additional setup and teardown checks before and after each test in my test suite. I've looked at fixtures but not sure on whether they are the correct approach. I need to run the setup code prior to each test and I need to run the teardown checks after each test.
My use-case is checking for code that doe... | py.test fixtures are a technically adequate method to achieve your purpose.
You just need to define a fixture like that:
```
@pytest.yield_fixture(autouse=True)
def run_around_tests():
# Code that will run before your test, for example:
files_before = # ... do something to check the existing files
# A tes... |
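The fixture above is truncated; the yield-based setup/teardown shape it relies on can be sketched with stdlib `contextlib` alone (pytest isn't needed to see the pattern, and the `events` bookkeeping here is a hypothetical stand-in for the file checks):

```python
from contextlib import contextmanager

@contextmanager
def run_around_tests():
    events = []
    events.append("setup")          # e.g. snapshot the existing files
    try:
        yield events                # the test body runs here
    finally:
        events.append("teardown")   # e.g. verify no stray files remain

with run_around_tests() as events:
    events.append("test body")

assert events == ["setup", "test body", "teardown"]
```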
How does dict.pop() detect if an optional argument has been passed? | 22,629,362 | 3 | 2014-03-25T08:47:54Z | 22,629,449 | 8 | 2014-03-25T08:52:25Z | [
"python"
] | ```
d = dict()
d.pop('hello', None) # No exception thrown
d.pop('hello', 0) # No exception thrown
d.pop('hello') # KeyError
```
I had thought that in Python we usually tested whether a default argument was passed by testing the argument with some sort of default value.
I can't think of any other 'natural' de... | Well, `dict` is implemented in C, so Python semantics don't really apply. Otherwise, I'd say look at the source in any case.
However, a good pattern for this is to use a sentinel as the default value. That will never match against anything passed in, so you can be sure that if you get that value, you only have one arg... |
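A runnable sketch of the sentinel pattern the answer describes, applied to a `dict.pop`-like helper (`pop_item` is a hypothetical name; the real `dict.pop` is implemented in C):

```python
_MISSING = object()   # unique sentinel: can never equal a caller's value

def pop_item(d, key, default=_MISSING):
    try:
        return d.pop(key)
    except KeyError:
        if default is _MISSING:   # no default was passed -> re-raise
            raise
        return default

d = {}
assert pop_item(d, "hello", None) is None
assert pop_item(d, "hello", 0) == 0

raised = False
try:
    pop_item(d, "hello")
except KeyError:
    raised = True
assert raised
```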
How to pass command line arguments to ipython | 22,631,845 | 12 | 2014-03-25T10:33:51Z | 22,632,197 | 23 | 2014-03-25T10:46:56Z | [
"python",
"ipython"
] | Is there any way to pass arguments to my Python script through the command line while using IPython? Ideally I want to call my script as:
```
ipython -i script.py --argument blah
```
and I want to be able to have `--argument` and `blah` listed in my `sys.argv`. | You can add one more `--` option before the script arguments:
```
ipython script.py -- --argument blah
```
Help of `Ipython`:
```
ipython [subcommand] [options] [-c cmd | -m mod | file] [--] [arg] ...
If invoked with no options, it executes the file and exits, passing the
remaining arguments to the script, just as if you had spec... |
How to re-use Django Admin login form from custom LoginRequired middleware | 22,632,229 | 5 | 2014-03-25T10:48:02Z | 22,642,071 | 8 | 2014-03-25T17:28:35Z | [
"python",
"django",
"django-authentication"
] | ### Background & attempted code:
I'm using Django 1.6.2 (on Python 3.3) and would like all views to effectively have the `login_required` decorator applied to them and re-use the actual admin login form. Note that the admin app will also be in use with the default urls. I have looked at [Redirect to admin for login](h... | # Not yet available
admin.site.urls has been provided with a login endpoint [only very recently](https://github.com/django/django/commit/5848bea9dc9458a9517d4c98993d742976771342), which explains the 404 you've experienced: Django does not try to resolve it until you've successfully logged in.
# Workaround
One first... |
Python: Concatenate (or clone) a numpy array N times | 22,634,265 | 6 | 2014-03-25T12:13:45Z | 22,634,481 | 7 | 2014-03-25T12:22:40Z | [
"python",
"arrays",
"numpy",
"append",
"concatenation"
] | I want to create an MxN numpy array by cloning a Mx1 ndarray N times. Is there an efficient pythonic way to do that instead of looping?
Btw the following way doesn't work for me (X is my Mx1 array) :
```
numpy.concatenate((X, numpy.tile(X,N)))
```
since it created a [M\*N,1] array instead of [M,N] | You are close, you want to use `np.tile`, but like this:
```
a = np.array([0,1,2])
np.tile(a,(3,1))
```
Result:
```
array([[0, 1, 2],
[0, 1, 2],
[0, 1, 2]])
```
If you call `np.tile(a,3)` you will get `concatenate` behavior like you were seeing
```
array([0, 1, 2, 0, 1, 2, 0, 1, 2])
```
<http://docs.scipy.o... |
can not install psycopg2 2.5+ on mac osx 10.9 | 22,637,372 | 3 | 2014-03-25T14:21:24Z | 23,091,928 | 7 | 2014-04-15T18:47:19Z | [
"python",
"osx"
] | Here is the error log:
```
building 'psycopg2._psycopg' extension
creating build/temp.macosx-10.9-intel-2.7
creating build/temp.macosx-10.9-intel-2.7/psycopg
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DM... | This one works for me:
export CFLAGS=-Qunused-arguments
export CPPFLAGS=-Qunused-arguments
[clang error: unknown argument: '-mno-fused-madd' (python package installation failure)](http://stackoverflow.com/questions/22313407/clang-error-unknown-argument-mno-fused-madd-python-package-installation-fa) |
random.seed(): What does it do? | 22,639,587 | 25 | 2014-03-25T15:46:46Z | 22,639,752 | 35 | 2014-03-25T15:52:57Z | [
"python",
"random",
"random-seed"
] | I am a bit confused on what `random.seed()` does in Python. For example, why does the below trials do what they do (consistently)?
```
>>> import random
>>> random.seed(9001)
>>> random.randint(1, 10)
1
>>> random.randint(1, 10)
3
>>> random.randint(1, 10)
6
>>> random.randint(1, 10)
6
>>> random.randint(1, 10)
7
```
... | Pseudo-random number generators work by performing some operation on a value. Generally this value is the previous number generated by the generator. However, the first time you use the generator, there is no previous value.
Seeding a pseudo-random number generator gives it its first "previous" value. Each seed value ... |
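The "seed gives the generator its first previous value" point can be demonstrated directly: reseeding with the same value replays the identical sequence (a different seed almost certainly produces a different sequence, as the last line illustrates):

```python
import random

random.seed(9001)
first = [random.randint(1, 10) for _ in range(5)]

random.seed(9001)              # same seed -> same "previous value" chain
second = [random.randint(1, 10) for _ in range(5)]

assert first == second          # identical sequences

random.seed(42)                 # different seed -> (almost surely) different chain
assert [random.randint(1, 10) for _ in range(5)] != first
```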
random.seed(): What does it do? | 22,639,587 | 25 | 2014-03-25T15:46:46Z | 31,683,870 | 13 | 2015-07-28T18:01:07Z | [
"python",
"random",
"random-seed"
] | I am a bit confused on what `random.seed()` does in Python. For example, why does the below trials do what they do (consistently)?
```
>>> import random
>>> random.seed(9001)
>>> random.randint(1, 10)
1
>>> random.randint(1, 10)
3
>>> random.randint(1, 10)
6
>>> random.randint(1, 10)
6
>>> random.randint(1, 10)
7
```
... | **The above answers don't seem to explain the use of random.seed().
Here is a simple example; hope it helps someone looking to quickly understand the use of random.seed():**
```
import random
random.seed( 3 )
print "Random number with seed 3 : ", random.random() #will generate a random number
#if you want to ... |
How to run a python program in a shell without typing "python" | 22,639,909 | 4 | 2014-03-25T15:58:30Z | 22,639,940 | 8 | 2014-03-25T16:00:21Z | [
"python"
] | I am new to Python. I wrote a program which can be executed by typing `python Filecount.py ${argument}`. But I see my teacher can run a program by typing only `Filecount.py ${argument}`. How can I achieve that? | Make it executable
```
chmod +x Filecount.py
```
and add a [hashbang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) to the top of `Filecount.py` which [lets the os know](http://linux.die.net/man/2/execve) that you want to use the [python interpreter](http://docs.python.org/2/tutorial/interpreter.html#executable-py... |
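The chmod-plus-shebang recipe can be exercised end to end from Python on a Unix-like system; the tiny script written here is hypothetical and the shebang is pointed at the running interpreter rather than a hardcoded path:

```python
import os
import stat
import subprocess
import sys
import tempfile

# Hypothetical script: the shebang line lets the OS pick the interpreter
script = "#!%s\nimport sys\nprint(len(sys.argv) - 1)\n" % sys.executable

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)   # chmod +x
out = subprocess.check_output([path, "one", "two"])     # no leading "python"
assert out.strip() == b"2"
os.remove(path)
```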
How to install PyQt4 on Windows using pip? | 22,640,640 | 22 | 2014-03-25T16:26:07Z | 22,651,895 | 25 | 2014-03-26T05:01:49Z | [
"python",
"python-3.x",
"pyqt",
"pyqt4",
"pip"
] | I'm using Python 3.4 on Windows. When I run a script, it complains
```
ImportError: No Module named 'PyQt4'
```
So I tried to install it, but `pip install PyQt4` gives
> Could not find any downloads that satisfy the requirement PyQt4
although it does show up when I run `pip search PyQt4`. I tried to `pip install py... | Here are PyQt installers from the site - [RiverBank Computing - PyQt Binary Downloads](http://www.riverbankcomputing.com/software/pyqt/download)
Here are Windows wheel packages built by Chris Golke - [Python Windows Binary packages - PyQt](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyqt4)
Since Qt is a more complicat... |
run multiple tornado processess | 22,641,015 | 4 | 2014-03-25T16:41:17Z | 26,997,641 | 11 | 2014-11-18T15:14:36Z | [
"python",
"tornado"
] | I've read various articles and tutorials on how to run N number of Tornado processes, where N=number of cores. My code was working, running on all 16 cores but I somehow managed to screw it up and I need fresh eyes on this.
```
import tornado.ioloop
import tornado.web
import tornado.httpserver
from core import settin... | This exception is raised when `tornado.web.Application` is in debug mode.
```
application = tornado.web.Application([
(r"/", hello),
],
debug=False)
```
Set debug to False to fix this problem.
You can start several processes, each listening on its own port:
```
server = tornado.httpserver.HTTPServer(application)
server.bind(1234) ... |
The `uwsgi_modifier1 30` directive is not removing the SCRIPT_NAME from PATH_INFO as documented | 22,642,124 | 6 | 2014-03-25T17:31:39Z | 22,642,691 | 9 | 2014-03-25T18:00:05Z | [
"python",
"nginx",
"wsgi",
"uwsgi"
] | This is my nginx virtual host configuration.
```
debian:~# cat /etc/nginx/sites-enabled/mybox
server {
listen 8080;
root /www;
index index.html index.htm;
server_name mybox;
location /foo {
uwsgi_pass unix:/tmp/uwsgi.sock;
include uwsgi_params;
uwsgi_param SCRIPT_NAME /foo;
... | After reading <http://gh.codehum.com/unbit/uwsgi/pull/19> I understood that using `uwsgi_modifier1 30;` is deprecated.
So this is how I solved the problem.
First of all I removed SCRIPT\_NAME handling in nginx by removing these two lines:
```
uwsgi_param SCRIPT_NAME /foo;
uwsgi_modifier1 30;
```
The resulti... |
Python: Divide each row of a DataFrame by another DataFrame vector | 22,642,162 | 10 | 2014-03-25T17:33:43Z | 22,642,484 | 7 | 2014-03-25T17:50:11Z | [
"python",
"pandas"
] | I have a DataFrame (df1) with a dimension `2000 rows x 500 columns` (excluding the index) for which I want to divide each row by another DataFrame (df2) with dimension `1 rows X 500 columns`. Both have the same column headers. I tried:
`df.divide(df2)` and
`df.divide(df2, axis='index')` and multiple other solutions an... | You can divide by the *series* i.e. the first row of df2:
```
In [11]: df = pd.DataFrame([[1., 2.], [3., 4.]], columns=['A', 'B'])
In [12]: df2 = pd.DataFrame([[5., 10.]], columns=['A', 'B'])
In [13]: df.div(df2)
Out[13]:
A B
0 0.2 0.2
1 NaN NaN
In [14]: df.div(df2.iloc[0])
Out[14]:
A B
0 0.2... |
Python: Divide each row of a DataFrame by another DataFrame vector | 22,642,162 | 10 | 2014-03-25T17:33:43Z | 22,643,040 | 11 | 2014-03-25T18:16:30Z | [
"python",
"pandas"
] | I have a DataFrame (df1) with a dimension `2000 rows x 500 columns` (excluding the index) for which I want to divide each row by another DataFrame (df2) with dimension `1 rows X 500 columns`. Both have the same column headers. I tried:
`df.divide(df2)` and
`df.divide(df2, axis='index')` and multiple other solutions an... | In `df.divide(df2, axis='index')`, you need to provide the axis/row of df2 (ex. `df2.ix[0]`).
```
import pandas as pd
data1 = {"a":[1.,3.,5.,2.],
"b":[4.,8.,3.,7.],
"c":[5.,45.,67.,34]}
data2 = {"a":[4.],
"b":[2.],
"c":[11.]}
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
d... |
Python - concatenate 2 lists | 22,642,261 | 7 | 2014-03-25T17:38:17Z | 22,642,314 | 12 | 2014-03-25T17:40:37Z | [
"python",
"list",
"python-2.7",
"concatenation"
] | Hi I am new to both Python and this forum.
My question:
I have two lists:
```
list_a = ['john','peter','paul']
list_b = [ 'walker','smith','anderson']
```
I succeeded in creating a list like this using `zip`:
```
list_c = zip(list_a, list_b)
print list_c
# [ 'john','walker','peter','smith','paul','anderson']
```
... | You are getting zipped names from both the lists, simply join each pair, like this
```
print map(" ".join, zip(list_a, list_b))
# ['john walker', 'peter smith', 'paul anderson']
``` |
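On Python 3, `map()` returns a lazy iterator; a list comprehension is an explicit, version-independent equivalent of the answer's one-liner:

```python
list_a = ['john', 'peter', 'paul']
list_b = ['walker', 'smith', 'anderson']

list_c = [" ".join(pair) for pair in zip(list_a, list_b)]
assert list_c == ['john walker', 'peter smith', 'paul anderson']
```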
Python - CSV Reader List Comprehension | 22,642,674 | 3 | 2014-03-25T17:59:08Z | 22,642,707 | 7 | 2014-03-25T18:01:07Z | [
"python",
"list",
"csv",
"list-comprehension"
] | I am trying to read the columns in a file efficiently using CSV reader. The code is:
```
import csv
csv.register_dialect('csvrd', delimiter='\t', quoting=csv.QUOTE_NONE)
with open('myfile.txt', 'rb') as f:
reader = csv.reader(f,'csvrd')
a0=[x[0] for x in reader]
a1=[x[1] for x in reader]
```
I obtain th... | You cannot loop over the `reader` more than once, not without rewinding the underlying file to the start again.
Don't do that, however; transpose the rows to columns using `zip(*reader)` instead:
```
a0, a1, a2 = zip(*reader)
```
Demo:
```
>>> import csv
>>> csv.register_dialect('csvrd', delimiter='\t', quoting=csv... |
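The demo above is truncated; a self-contained version of the single-pass `zip(*reader)` transpose, using an in-memory buffer with hypothetical tab-separated data instead of a file:

```python
import csv
import io

data = "1\ta\tx\n2\tb\ty\n3\tc\tz\n"
reader = csv.reader(io.StringIO(data), delimiter="\t")

# One pass over the reader, transposing rows into columns
a0, a1, a2 = zip(*reader)

assert a0 == ("1", "2", "3")
assert a1 == ("a", "b", "c")
assert a2 == ("x", "y", "z")
```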
Unable to generate refresh token for AdWords account using OAuth2 | 22,643,886 | 3 | 2014-03-25T18:58:27Z | 22,647,779 | 7 | 2014-03-25T22:36:53Z | [
"python",
"google-oauth",
"google-adwords",
"adwords-apiv201402"
] | I am having trouble generating a refresh token using Python for the AdWords API & need some help. Here is the situation:
* I have a client on AdWords that I want to pull reports for through the AdWords API (we have a developer token now for this). Let's say that, in AdWords, the clients account is 521-314-0974 (making... | After some work, I was able to successfully navigate through this issue. Here are the detailed steps that I took to get to the point where I could successfully pull data through the API. In my situation, I manage an AdWords MCC with multiple accounts. Thus, I went back to the beginning of many of the help manuals and d... |
windows scrapyd-deploy is not recognized | 22,646,323 | 3 | 2014-03-25T21:11:17Z | 22,662,909 | 13 | 2014-03-26T13:45:56Z | [
"python",
"python-2.7",
"scrapy",
"scrapyd"
] | I have install the scrapyd like this
```
pip install scrapyd
```
I want to use scrapyd-deploy
When I type `scrapyd`,
I get this exception in cmd:
> 'scrapyd' is not recognized as an internal or external command, operable program or batch file. | I ran into the same issue, and I also read some opinions that scrapyd isn't available / can't run on windows and nearly gave it up (didn't really need it as I intend on deploying to a linux machine, wanted scrapyd on windows for debug purposes). However, after some research I found a way. As I haven't found any clear i... |
Difference between 'and' (boolean) vs. '&' (bitwise) in python. Why difference in behavior with lists vs numpy arrays? | 22,646,463 | 44 | 2014-03-25T21:18:23Z | 22,647,006 | 40 | 2014-03-25T21:47:12Z | [
"python",
"numpy",
"bit-manipulation",
"boolean-expression",
"ampersand"
] | **What explains the difference in behavior of boolean and bitwise operations on lists vs numpy.arrays?**
I'm getting confused about the appropriate use of the '`&`' vs '`and`' in python, illustrated in the following simple examples.
```
mylist1 = [True, True, True, False, True]
mylist2 = [False, True, Fal... | `and` tests whether both expressions are logically `True` while `&` (when used with `True`/`False` values) tests if both are `True`.
In Python, empty built-in objects are typically treated as logically `False` while non-empty built-ins are logically `True`. This facilitates the common use case where you want to do som... |
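A few concrete cases that separate the two operators: `and` truth-tests and returns one of its operands, while `&` is a bitwise (or type-defined) operator that is simply undefined between plain lists:

```python
# `and` returns one of its operands after truth-testing them
assert ([] and [1]) == []          # first operand is falsy -> returned as-is
assert ([2] and [1]) == [1]        # first is truthy -> second returned

# `&` is bitwise on ints (bools are ints, so it works on them too)
assert (6 & 3) == 2                # 0b110 & 0b011 == 0b010
assert (True & False) is False

# but `&` is not defined between plain lists
try:
    [True] & [False]
    list_amp_works = True
except TypeError:
    list_amp_works = False
assert list_amp_works is False
```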
Difference between 'and' (boolean) vs. '&' (bitwise) in python. Why difference in behavior with lists vs numpy arrays? | 22,646,463 | 44 | 2014-03-25T21:18:23Z | 22,647,040 | 10 | 2014-03-25T21:49:54Z | [
"python",
"numpy",
"bit-manipulation",
"boolean-expression",
"ampersand"
] | **What explains the difference in behavior of boolean and bitwise operations on lists vs numpy.arrays?**
I'm getting confused about the appropriate use of the '`&`' vs '`and`' in python, illustrated in the following simple examples.
```
mylist1 = [True, True, True, False, True]
mylist2 = [False, True, Fal... | The short-circuiting boolean operators (`and`, `or`) can't be overriden because there is no satisfying way to do this without introducing new language features or sacrificing short circuiting. As you may or may not know, they evaluate the first operand for its truth value, and depending on that value, either evaluate a... |
Difference between 'and' (boolean) vs. '&' (bitwise) in python. Why difference in behavior with lists vs numpy arrays? | 22,646,463 | 44 | 2014-03-25T21:18:23Z | 22,653,528 | 9 | 2014-03-26T06:50:25Z | [
"python",
"numpy",
"bit-manipulation",
"boolean-expression",
"ampersand"
] | **What explains the difference in behavior of boolean and bitwise operations on lists vs numpy.arrays?**
I'm getting confused about the appropriate use of the '`&`' vs '`and`' in python, illustrated in the following simple examples.
```
mylist1 = [True, True, True, False, True]
mylist2 = [False, True, Fal... | # About `list`
First a very important point, from which everything will follow (I hope).
In ordinary Python, `list` is not special in any way (except having cute syntax for constructing, which is mostly a historical accident). Once a list `[3,2,6]` is made, it is for all intents and purposes just an ordinary Python o... |
Understanding callbacks in Scrapy | 22,648,475 | 6 | 2014-03-25T23:27:35Z | 22,651,460 | 10 | 2014-03-26T04:26:25Z | [
"python",
"callback",
"scrapy"
] | I am new to Python and Scrapy. I have not used callback functions before. However, I do now for the code below. The first request will be executed and the response of that will be sent to the callback function defined as second argument:
```
def parse_page1(self, response):
item = MyItem()
item['main_url'] = r... | Read the [docs](http://doc.scrapy.org/en/latest/topics/spiders.html):
> For spiders, the scraping cycle goes through something like this:
>
> 1. You start by generating the initial Requests to crawl the first URLs, and specify a callback function to be called with the response
> downloaded from those requests.
>
> ... |
How does HAProxy achieves its speed? | 22,648,722 | 5 | 2014-03-25T23:50:11Z | 22,657,778 | 8 | 2014-03-26T10:16:06Z | [
"python",
"twisted",
"haproxy"
] | How does `HAProxy` avoids request time overhead when doing the load balancing?
I tested `HAProxy` and for fun compared it to a simple port forwarder written in `Twisted` (`Python`). In my preliminary tests, making a `HTTP request` through an `HAProxy` load balancer adds no `overhead[1]` in request time compared to mak... | Turns out that the [`HAProxy`](http://haproxy.1wt.eu/#perf) website already covers this area (my mistake of overlooking it). The answer is basically a lot of low level optimizations. Directly copied from the HAProxy website:
HAProxy involves several techniques commonly found in Operating Systems architectures to achie... |
How to run a background procedure while constantly checking for input - threading? | 22,648,765 | 3 | 2014-03-25T23:54:00Z | 22,679,451 | 7 | 2014-03-27T06:01:02Z | [
"python",
"multithreading",
"sockets",
"concurrency",
"python-multithreading"
] | I have small server and client Python scripts where the client sends a string and the server responds with the reverse. When the client enters a quit string, the client exits and then the server exits.
I want the server's "receive, reverse, send" procedure running in the background while the program is constantly chec... | Since you want the server process to handle the client while at the same time receiving input from the server's `stdin`, you can just put the whole current server code in a `Thread`, then wait for input from `stdin`.
```
import socket
from time import sleep
import threading
def process():
sock = socket.soc... |
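The answer's code is truncated above; the thread-plus-free-main-loop shape it describes can be sketched without any sockets — `process()` here is a hypothetical stand-in for the blocking socket loop, and a queue reports its result back to the main thread:

```python
import threading
import queue

results = queue.Queue()

def process():
    # stand-in for the blocking socket-handling loop described above
    results.put("client handled")

worker = threading.Thread(target=process, daemon=True)
worker.start()          # main thread stays free, e.g. to read stdin
worker.join()
assert results.get() == "client handled"
```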
Drop rows with all zeros in pandas data frame | 22,649,693 | 16 | 2014-03-26T01:20:10Z | 22,650,075 | 21 | 2014-03-26T01:59:04Z | [
"python",
"pandas"
] | I can use `pandas` `dropna()` functionality to remove rows with some or all columns set as `NA`'s. Is there an equivalent function for dropping rows with all columns having value 0?
```
P kt b tt mky depth
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 0 0 0 0 0
5 1.1 3 4.5 2.3... | It turns out this can be nicely expressed in a vectorized fashion:
```
> df = pd.DataFrame({'a':[0,0,1,1], 'b':[0,1,0,1]})
> df = df[(df.T != 0).any()]
> df
a b
1 0 1
2 1 0
3 1 1
``` |
Drop rows with all zeros in pandas data frame | 22,649,693 | 16 | 2014-03-26T01:20:10Z | 22,650,162 | 23 | 2014-03-26T02:07:25Z | [
"python",
"pandas"
] | I can use `pandas` `dropna()` functionality to remove rows with some or all columns set as `NA`'s. Is there an equivalent function for dropping rows with all columns having value 0?
```
P kt b tt mky depth
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 0 0 0 0 0
5 1.1 3 4.5 2.3... | One-liner. No transpose needed:
```
df.loc[~(df==0).all(axis=1)]
```
And for those who like symmetry, this also works...
```
df.loc[(df!=0).any(axis=1)]
``` |
Very slow regular expression search | 22,650,098 | 10 | 2014-03-26T02:02:15Z | 22,650,620 | 12 | 2014-03-26T02:52:38Z | [
"python",
"regex",
"string",
"performance"
] | I'm not sure I completely understand what is going on with the following regular expression search:
```
>>> import re
>>> template = re.compile("(\w+)+\.")
>>> target = "a" * 30
>>> template.search(target)
```
The `search()` call takes minutes to complete and CPU usage goes to 100%. The behavior is reproducible for both 2.... | Understanding this problem requires understanding how the NFA underlying RegExp works.
Elaborating the definition of an NFA would be too heavy a mission for me; searching for NFA on Wikipedia will give you a better explanation. Here, just think of an NFA as a robot finding the patterns you give it.
Crudely implemented NFA is somewhat dumb, it just look... |
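The practical fix follows from the explanation: the nested quantifier in `(\w+)+\.` forces exponential backtracking when the trailing `\.` can never match, while the equivalent un-nested pattern fails immediately (the two patterns match the same set of strings):

```python
import re

fast = re.compile(r"\w+\.")   # same language as (\w+)+\. but no nesting

assert fast.search("a" * 30) is None         # no dot -> immediate failure
assert fast.search("aaa.").group() == "aaa."
```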
Pandas groupby cumulative sum | 22,650,833 | 8 | 2014-03-26T03:17:42Z | 22,651,188 | 18 | 2014-03-26T03:56:25Z | [
"python",
"pandas"
] | I would like to add a cumulative sum column to my Pandas dataframe so that:
```
Jack | Monday | 10
Jack | Tuesday | 20
Jack | Tuesday | 10
Jack | Wednesday | 50
Jill | Monday | 40
Jill Wednesday | 110
```
becomes:
```
Jack | Monday | 10 | 10
Jack | Tuesday | 30 | 40
Jack | Wednesday | 50 | 100
Jill | Monday | 40 | 4... | This should do it, need `groupby()` twice.
```
In [52]:
print df
name day no
0 Jack Monday 10
1 Jack Tuesday 20
2 Jack Tuesday 10
3 Jack Wednesday 50
4 Jill Monday 40
5 Jill Wednesday 110
In [53]:
print df.groupby(by=['name','day']).sum().groupby(level=[0]).cumsum()
... |
jinja2 - how to put a block in an if statement? | 22,655,031 | 16 | 2014-03-26T08:17:43Z | 22,667,404 | 22 | 2014-03-26T16:32:07Z | [
"python",
"python-2.7",
"flask",
"jinja2"
] | I am trying to use an if to determine which block should fill my {% block content %}.
I have a base.html which has a default {% block content %} and this template is extending base.html.So I tried this:
```
{% extends "base.html" %}
{% if condition == True %}
{% block content %}
<div>blah blah blah blah</div>... | You cannot make a `{% block %}` conditional; once you use the tag, the block is always going to be filled in.
Put your conditional **inside** the block instead, and use `super()` to instruct Jinja to use the original contents of the block as defined in the template:
```
{% extends "base.html" %}
{% block content %}
... |
Why only one warning in a loop? | 22,661,745 | 9 | 2014-03-26T12:57:19Z | 22,661,838 | 17 | 2014-03-26T13:00:37Z | [
"python",
"loops",
"warnings"
] | I want a warning raise for each problem detected in a loop, but the warning is only raised once, the first time. For example :
```
import warnings
for i in range(10):
print i
warnings.warn('this is a warning message')
```
I expect :
```
0
UserWarning: this is a warning message
1
UserWarning: this is a warning... | It is by design. See the docs at <http://docs.python.org/2/library/warnings.html>:
*Repetitions of a particular warning for the same source location are typically suppressed.*
You can override this behavior by adding a filter with the keyword `always`, as in:
```
import warnings
warnings.simplefilter('always', User... |
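The `always` filter restores the question's expected one-warning-per-iteration behavior; this can be verified by recording the warnings instead of printing them:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")   # disable the per-location dedup
    for i in range(10):
        warnings.warn("this is a warning message")

assert len(caught) == 10              # one warning per loop iteration
```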
Web.py / No module named 'utils'` | 22,663,983 | 9 | 2014-03-26T14:25:11Z | 22,664,643 | 7 | 2014-03-26T14:49:38Z | [
"python"
] | I'm trying to install [web.py](http://webpy.org/install), and these are the steps I took:
* Download `web.py-0.3.7` and extract it to `c://web.py-0.3.7`
* Run the following command: `C:\>python C:\web.py-0.37\setup.py install`
* and it gives me the following error: `import utils, db, net, wsgi, http, webapi, httpserver, debugerror Im... | The issue is that web.py is written for Python 2 (2.7); its implicit relative imports fail under Python 3. However, there are several options.
* Install Python 2.7 (recommend using [virtualenv](https://pypi.python.org/pypi/virtualenv))
* Check out [this](https://groups.google.com/forum/#!topic/webpy/NvDqKEEEMEI) group that is porting web.py to python 3.x
* Use [bottle.py](htt... |
Django rest framework auto-populate filed with user.id | 22,668,674 | 3 | 2014-03-26T17:28:06Z | 22,670,973 | 8 | 2014-03-26T19:16:03Z | [
"python",
"django",
"rest",
"user",
"auto-populate"
] | I can't find a way to auto-populate the owner field of my model. I am using DRF. If I use a ForeignKey, the user can choose the owner from a drop-down box, but there is no point in that. I can't make it work. views.py is not included because I think it has nothing to do with the issue.
models.py
```
class Note(mode... | Django Rest Framework provides a pre\_save() method (in generic views & mixins) which you can override.
```
class NoteSerializer(serializers.ModelSerializer):
owner = serializers.Field(source='owner.username') # Make sure owner is associated with the User model in your models.py
```
Then something like this in yo... |
AttributeError: 'Series' object has no attribute 'searchsorted' pandas | 22,669,208 | 9 | 2014-03-26T17:52:43Z | 22,669,229 | 13 | 2014-03-26T17:53:56Z | [
"python",
"pandas",
"series"
] | I am reproducing the code from the book *Python for Data Analysis*, page 38.
I write
```
prop_cumsum = df.sort_index(by='prop', ascending=False).prop.cumsum()
and prop_cumsum.searchsorted(0.5)
```
Then there is an error saying:
```
AttributeError Traceback (most recent call last)
<ipython-input-30-f2e2... | You are probably using a version that is 0.13.0 or later where Series now subclasses `NDFrame`, you have to now do this to return a numpy array:
```
prop_cumsum.values.searchsorted(0.5)
```
as searchsorted is a numpy function and not a Pandas Series function.
See the [online docs](http://pandas.pydata.org/pandas-doc... |
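A sketch of what `.values.searchsorted` does, using a plain NumPy array like the one `.values` returns. The numbers are made up; note also that later pandas versions reinstated `Series.searchsorted`:

```python
import numpy as np

# A stand-in for prop_cumsum.values: a sorted cumulative-proportion array
prop_cumsum = np.array([0.10, 0.30, 0.45, 0.70, 1.00])

# Index of the first position where 0.5 could be inserted while keeping order
idx = prop_cumsum.searchsorted(0.5)
```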
How exactly does the "reflect" mode for scipys ndimage filters work? | 22,669,252 | 7 | 2014-03-26T17:55:06Z | 22,670,830 | 19 | 2014-03-26T19:08:17Z | [
"python",
"image-processing",
"scipy",
"filtering"
] | I'm failing to understand exactly how the reflect mode handles my arrays. I have this very simple array:
```
import numpy as np
from scipy.ndimage.filters import uniform_filter
from scipy.ndimage.filters import median_filter
vector = np.array([[1.0,1.0,1.0,1.0,1.0],[2.0,2.0,2.0,2.0,2.0],[4.0,4.0,4.0,4.0,4.0],[5.0,5.0... | Suppose the data in one axis is `1 2 3 4 5 6 7 8`. The following table shows how the data is extended for each mode (assuming `cval=0`):
```
mode | Ext | Input | Ext
-----------+---------+------------------------+---------
'mirror' | 4 3 2 | 1 2 3 4 5 6 7 8 | 7 6 5... |
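The two edge behaviours can be cross-checked against `numpy.pad`, whose mode names are swapped relative to `scipy.ndimage`. A sketch on the same data as the table above:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8])

# scipy.ndimage 'reflect' repeats the edge sample: ... 2 1 | 1 2 ...
# numpy calls this behaviour 'symmetric'
reflect_like = np.pad(x, 2, mode='symmetric')

# scipy.ndimage 'mirror' does not repeat the edge: ... 3 2 | 1 2 ...
# numpy calls this behaviour 'reflect'
mirror_like = np.pad(x, 2, mode='reflect')
```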
Securely storing environment variables in GAE with app.yaml | 22,669,528 | 15 | 2014-03-26T18:08:23Z | 25,174,905 | 9 | 2014-08-07T05:37:25Z | [
"python",
"google-app-engine",
"python-2.7",
"environment-variables"
] | I need to store API keys and other sensitive information in `app.yaml` as environment variables for deployment on GAE. The issue with this is that if I push `app.yaml` to GitHub, this information becomes public (not good). I don't want to store the info in a datastore as it does not suit the project. Rather, I'd like t... | My approach is to store client secrets *only* within the App Engine app itself. The client secrets are neither in source control nor on any local computers. This has the benefit that *any* App Engine collaborator can deploy code changes without having to worry about the client secrets.
I store client secrets directly ... |
Linear fitting in python with uncertainty in both x and y coordinates | 22,670,057 | 6 | 2014-03-26T18:32:26Z | 22,670,095 | 15 | 2014-03-26T18:34:20Z | [
"python",
"linear"
] | Hi I would like to ask my fellow python users how they perform their linear fitting.
I have been searching for the last two weeks on methods/libraries to perform this task and I would like to share my experience:
If you want to perform a linear fitting based on the least-squares method you have many options. For exam... | [Orthogonal distance regression](http://docs.scipy.org/doc/scipy/reference/odr.html) in Scipy allows you to do non-linear fitting using errors in both `x` and `y`.
Shown below is a simple example based on the example given on the scipy page. It attempts to fit a quadratic function to some randomised data.
```
import ... |
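Since the answer's example is truncated, here is a separate minimal sketch of `scipy.odr` fitting a straight line with made-up uncertainties on both coordinates; the data are exact, so the fit should recover slope 2 and intercept 1:

```python
import numpy as np
from scipy import odr

def linear(beta, x):
    # beta[0] is the slope, beta[1] the intercept
    return beta[0] * x + beta[1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# sx/sy: assumed standard errors on each x and y value
data = odr.RealData(x, y, sx=np.full_like(x, 0.1), sy=np.full_like(y, 0.2))
result = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
slope, intercept = result.beta
```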
Inverse of numpy's bincount function | 22,671,192 | 2 | 2014-03-26T19:28:00Z | 22,671,394 | 10 | 2014-03-26T19:38:50Z | [
"python",
"numpy"
] | Given an array of integer counts `c`, how can I transform that into an array of integers `inds` such that `np.all(np.bincount(inds) == c)` is true?
For example:
```
>>> c = np.array([1,3,2,2])
>>> inverse_bincount(c) # <-- what I need
array([0,1,1,1,2,2,3,3])
```
Context: I'm trying to keep track of the location o... | using [`numpy.repeat`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html) :
```
np.repeat(np.arange(c.size), c)
``` |
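The one-liner can be checked against the round-trip property asked for in the question:

```python
import numpy as np

c = np.array([1, 3, 2, 2])
inds = np.repeat(np.arange(c.size), c)  # value i repeated c[i] times
```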
Syntax for a dictionary of functions? | 22,672,121 | 3 | 2014-03-26T20:17:14Z | 22,672,132 | 7 | 2014-03-26T20:17:54Z | [
"python",
"dictionary"
] | I'm trying to test the concept of using a dictionary to call functions, since python doesn't have a `case switch` and I don't want write out a slew of `if` statements. However, whenever I try to put the function in to a dict, I get the following:
```
def hello():
... print 'hello world'
...
>>> fundict = {'hello':... | You call the returned object:
```
fundict['hello']()
```
You are storing function objects correctly; what is stored is just a reference, just like the original name `hello` is a reference to the function. Simply call the reference by adding `()` (with arguments if the function takes it).
Demo:
```
>>> def hello(nam... |
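A self-contained version of the dispatch pattern (the function names are illustrative): the dict lookup returns the function object, and the trailing `()` calls it.

```python
def hello():
    return 'hello world'

def goodbye():
    return 'goodbye world'

fundict = {'hello': hello, 'goodbye': goodbye}

# Look up the function object, then call it
result = fundict['hello']()
```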
RaspiStill - Quality/Size Miss Match - File too big | 22,675,502 | 2 | 2014-03-26T23:51:02Z | 22,722,920 | 7 | 2014-03-28T20:48:22Z | [
"python",
"camera",
"jpeg",
"raspberry-pi"
] | Using RaspiStill through a Python process shell to take a picture with JPG encoding. I am getting a file that is over two megs. When I reduce the quality down to 50%, the file size of the picture only drops by about 200K, or less than a ten percent decrease. When I take that original 2 meg+ file and save it th... | This one took a lot of experimentation and an argument over at the Raspberry Pi Forum to solve.
Here is the link to the entire discussion: <http://www.raspberrypi.org/forum/viewtopic.php?f=43&t=73174&p=527300#p527300>
The stats on my final solution are as follows: I dropped the resolution down to 640x480, this produc... |
Pandas - The difference between join and merge | 22,676,081 | 18 | 2014-03-27T00:42:47Z | 22,676,213 | 15 | 2014-03-27T00:55:03Z | [
"python",
"pandas"
] | Suppose I have two DataFrames like so:
```
left = pd.DataFrame({'key1': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key2': ['foo', 'bar'], 'rval': [4, 5]})
```
I want to merge them, so I try something like this:
```
pd.merge(left, right, left_on='key1', right_on='key2')
```
And I'm happy
```
key1 ... | I always use `join` on indices:
```
import pandas as pd
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]}).set_index('key')
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]}).set_index('key')
left.join(right, lsuffix='_l', rsuffix='_r')
lval rval
key
foo 1 4
bar 2 ... |
Pandas - The difference between join and merge | 22,676,081 | 18 | 2014-03-27T00:42:47Z | 37,891,437 | 11 | 2016-06-17T22:51:58Z | [
"python",
"pandas"
] | Suppose I have two DataFrames like so:
```
left = pd.DataFrame({'key1': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key2': ['foo', 'bar'], 'rval': [4, 5]})
```
I want to merge them, so I try something like this:
```
pd.merge(left, right, left_on='key1', right_on='key2')
```
And I'm happy
```
key1 ... | `pandas.merge()` is the underlying function used for all merge/join behavior.
DataFrames provide the `pandas.DataFrame.merge()` and `pandas.DataFrame.join()` methods as a convenient way to access the capabilities of `pandas.merge()`. For example, `df1.merge(right=df2, ...)` is equivalent to `pandas.merge(left=df1, rig... |
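The equivalence can be demonstrated directly on the question's data, renaming the key columns to a shared `key` so `on=` works:

```python
import pandas as pd

left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

# The DataFrame method is a thin wrapper over the module-level function
via_method = left.merge(right, on='key')
via_function = pd.merge(left, right, on='key')
same = via_method.equals(via_function)
```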
pandas to_sql truncates my data | 22,676,170 | 2 | 2014-03-27T00:51:29Z | 22,685,355 | 8 | 2014-03-27T10:50:40Z | [
"python",
"mysql",
"sql",
"pandas"
] | I was using `df.to_sql(con=con_mysql, name='testdata', if_exists='replace', flavor='mysql')` to export a data frame into mysql. However, I discovered that columns with long string content (such as a url) are truncated to 63 characters. I received the following warning from ipython notebook when I exported:
> /usr/local/l... | If you are using pandas **0.13.1 or older**, this limit of 63 characters is indeed hardcoded, because of this line in the code: <https://github.com/pydata/pandas/blob/v0.13.1/pandas/io/sql.py#L278>
As a workaround, you could maybe monkeypatch that function `get_sqltype`:
```
from pandas.io import sql
def get_sqltype(pyt... |
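In later pandas versions the limit is gone and `to_sql` accepts a `dtype` argument instead of a monkeypatch. A sketch using an in-memory SQLite database so the example is self-contained; the table and column names are illustrative:

```python
import sqlite3
import pandas as pd

long_url = 'https://example.com/' + 'x' * 200  # well past 63 characters
df = pd.DataFrame({'url': [long_url]})

con = sqlite3.connect(':memory:')
# dtype pins the column type; SQLite TEXT is unbounded, so nothing is truncated
df.to_sql('testdata', con, if_exists='replace', index=False, dtype={'url': 'TEXT'})
roundtrip = pd.read_sql('SELECT url FROM testdata', con)['url'][0]
```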
Checking call order across multiple mocks | 22,677,280 | 11 | 2014-03-27T02:42:39Z | 22,677,452 | 12 | 2014-03-27T03:02:56Z | [
"python",
"function",
"mocking",
"python-mock"
] | I have three functions that I'm trying to test the call order of.
Let's say that in module module.py I have the following
```
# module.py
def a(*args):
# do the first thing
def b(*args):
# do a second thing
def c(*args):
# do a third thing
def main_routine():
a_args = ('a')
b_args = ('b')... | Define a `Mock` manager and attach mocks to it via [`attach_mock()`](http://www.voidspace.org.uk/python/mock/mock.html#mock.Mock.attach_mock). Then check for the `mock_calls`:
```
@patch('module.a')
@patch('module.b')
@patch('module.c')
def test_main_routine(c, b, a):
manager = Mock()
manager.attach_mock(a, 'a... |
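The mechanism can be shown without `@patch` by attaching plain mocks to a manager; the three calls below stand in for what `main_routine()` would do:

```python
from unittest.mock import Mock, call

a, b, c = Mock(), Mock(), Mock()

# Attaching the mocks routes their calls into one shared call list
manager = Mock()
manager.attach_mock(a, 'a')
manager.attach_mock(b, 'b')
manager.attach_mock(c, 'c')

# Simulate main_routine() calling the three functions in order
a('a')
b('b')
c('c')

expected = [call.a('a'), call.b('b'), call.c('c')]
in_order = manager.mock_calls == expected
```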
Python accessing the list while being sorted | 22,678,141 | 15 | 2014-03-27T04:12:35Z | 22,678,228 | 13 | 2014-03-27T04:22:52Z | [
"python",
"list",
"sorting",
"python-internals"
] | Can I access a list while it is being sorted by `list.sort()`?
```
b = ['b', 'e', 'f', 'd', 'c', 'g', 'a']
f = 'check this'
def m(i):
print i, b, f
return None
b.sort(key=m)
print b
```
this returns
```
b [] check this
e [] check this
f [] check this
d [] check this
c [] check this
g [] check this
a [] ... | Looking at the [source code](http://svn.python.org/view/python/trunk/Objects/listobject.c?revision=69227&view=markup) (of CPython, maybe different behaviour for other implementations) the strange output of your script becomes obvious:
```
/* The list is temporarily made empty, so that mutations performed
* by comparis... |
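The emptying is easy to observe from a key function. This is CPython-specific behaviour, so the snapshot contents are an implementation detail rather than a language guarantee:

```python
b = ['b', 'e', 'f', 'd', 'c', 'g', 'a']

observed = []
def key_func(item):
    observed.append(list(b))  # snapshot the list as seen mid-sort
    return item

b.sort(key=key_func)
```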
What is absolute import in python? | 22,678,919 | 4 | 2014-03-27T05:21:56Z | 22,679,558 | 8 | 2014-03-27T06:07:36Z | [
"python",
"python-2.7",
"python-3.x"
] | I am new to Python and developing a small project. I want to follow coding standards from the start. How should import statements be used properly? I am working on Python 2.7 now; if I move to 3.x, are there any conflicts with absolute imports? And what is the difference between absolute and relative imports? | The distinction between `absolute` and `relative` that's being drawn here is very similar to the way we talk about absolute and relative file paths or even URLs.
An absolute {import, path, URL} tells you **exactly** how to get the thing you are after, usually by specifying every part:
```
import os, sys
from datetime... |
Copying code into word document and keeping formatting | 22,681,832 | 8 | 2014-03-27T08:21:08Z | 22,793,098 | 18 | 2014-04-01T17:44:34Z | [
"python",
"ms-word"
] | I need to get my code (Python 2.7, written in the Python IDE) into a Word document for my dissertation, but I am struggling to find a way of copying it in and keeping the formatting; I've tried Paste Special and had no luck. The only way I've found so far is screenshotting, but with just over 1000 lines of code this is pro... | After trying every method I still had problems, then came across this! Basically, copy and paste your code and select what language it is, and it will kick out the formatted and coloured version ready to be pasted into Word :-)
[Code to Word and keep formatting](http://www.planetb.ca/2008/11/syntax-highlight-code-in-word-... |
Concatenate custom features with CountVectorizer | 22,687,365 | 10 | 2014-03-27T12:16:25Z | 22,710,579 | 10 | 2014-03-28T10:44:22Z | [
"python",
"machine-learning",
"scikit-learn"
] | I have a bunch of files with articles. For each article there should be some features, like **text length** and **text\_spam** (all are ints or floats, and in most cases they should be loaded from csv). What I want to do is combine these features with CountVectorizer and then classify those texts.
I have watche... | You're misunderstanding `FeatureUnion`. It's supposed to take two transformers, not two batches of samples.
You can force it into dealing with the vectorizers you have, but it's much easier to just throw all your features into one big bag per sample and use a single `DictVectorizer` to make vectors out of those bags.
... |
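The "one big bag per sample" idea can be sketched with the standard library alone; `text_length` and `text_spam` are the question's hypothetical features, and the resulting list of dicts is what you would hand to scikit-learn's `DictVectorizer`:

```python
from collections import Counter

def make_bag(text, text_spam):
    # token counts plus hand-made features, all in one dict per sample
    # (assumes no token literally collides with a feature name)
    bag = dict(Counter(text.split()))
    bag['text_length'] = len(text)
    bag['text_spam'] = text_spam
    return bag

bags = [make_bag('buy cheap pills pills', 0.9),
        make_bag('meeting notes attached', 0.1)]
```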