title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Homebrew broken link to Python in a virtualenv | 26,034,246 | 5 | 2014-09-25T08:48:37Z | 26,037,382 | 8 | 2014-09-25T11:23:11Z | [
"python",
"virtualenv",
"homebrew"
] | I recently ran (OS X 10.6.8) `brew update` and `brew upgrade` and am working in a virtualenv that now fails. I've tried:
```
$ brew unlink python && brew link python
Unlinking /usr/local/Cellar/python/2.7.8_1... 38 symlinks removed
Linking /usr/local/Cellar/python/2.7.8_1... 35 symlinks created
```
But in the virtuale... | I'll put my comment as an answer, for clarity for future visitors with the same problem.
The related links you gave tell you exactly what the problem is: virtualenv creates symbolic links to files, and Homebrew replaces those files with differently named files when upgrading, so you end up with broken links.
Both ... |
Is it more memory-efficient to set variables to `None` in python? | 26,046,924 | 7 | 2014-09-25T19:49:50Z | 26,047,022 | 8 | 2014-09-25T19:56:46Z | [
"python",
"memory"
] | It is a simple question, but since I didn't find any answers for it, I assume the answer would be negative. However, to make sure, I'm asking it:
Does it make the python code more efficient to set the variables to `None` after we're done with them in a function?
So as an example:
```
def foo(fname):
temp_1, temp... | It depends on what you mean by "more efficient".
Setting the variables to `None`, assuming they're the only references to their values, will allow the garbage collector to collect them. And in CPython (which uses ref counting for its garbage collector), it will even do so right away.
But on the other hand, you're als... |
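The refcounting point in the answer above can be observed directly in CPython: a `weakref.finalize` callback fires the moment the last reference is dropped. A minimal CPython-specific sketch (`Payload` is a made-up stand-in for a large temporary object):

```python
import weakref

class Payload:
    """Made-up stand-in for a large temporary object."""

obj = Payload()
freed = []
weakref.finalize(obj, freed.append, True)  # runs when the object is reclaimed

obj = None              # drop the only reference...
assert freed == [True]  # ...and CPython's refcounting frees it immediately
```

Rebinding to `None` (or using `del`) behaves the same way here; the object is reclaimed as soon as its refcount hits zero.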
What is the difference between a pandas Series and a single-column DataFrame? | 26,047,209 | 20 | 2014-09-25T20:09:21Z | 26,240,208 | 28 | 2014-10-07T15:55:09Z | [
"python",
"pandas"
] | Why does pandas make a distinction between a `Series` and a single-column `DataFrame`?
In other words: what is the reason for the existence of the `Series` class?
I'm mainly using time series with datetime index, maybe that helps to set the context. | Quoting the [Pandas docs](http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.html)
> pandas.**DataFrame**(*data=None, index=None, columns=None, dtype=None, copy=False*)
>
> Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes
> (rows and columns).... |
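One way to make the distinction concrete (a small illustrative sketch, not from the original answer): single-bracket indexing yields a 1-D `Series`, double-bracket indexing a 2-D one-column `DataFrame`:

```python
import pandas as pd

df = pd.DataFrame({"price": [1.0, 2.0, 3.0]})

s = df["price"]       # single brackets -> 1-D Series
sub = df[["price"]]   # double brackets -> 2-D DataFrame with one column

assert isinstance(s, pd.Series) and s.ndim == 1
assert isinstance(sub, pd.DataFrame) and sub.shape == (3, 1)
```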
counting the unique items in a numpy array: why is scipy.stats.itemfreq so slow? | 26,047,560 | 2 | 2014-09-25T20:30:34Z | 26,047,775 | 7 | 2014-09-25T20:44:03Z | [
"python",
"python-2.7",
"numpy",
"scipy"
] | I'm trying to count the unique values in a numpy array.
```
import numpy as np
from collections import defaultdict
import scipy.stats
import time
x = np.tile([1,2,3,4,5,6,7,8,9,10],20000)
for i in [44,22,300,403,777,1009,800]:
x[i] = 11
def getCounts(x):
    counts = defaultdict(int)
    for item in x:
c... | If you can use numpy 1.9, you can use `numpy.unique` with the argument `return_counts=True`. I.e.
```
unique_items, counts = np.unique(x, return_counts=True)
```
In fact, `itemfreq` was updated to use `np.unique`, but scipy currently supports numpy versions back to 1.5, so it doesn't use the `return_counts` argument.... |
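For illustration, a minimal run of the suggested call (the sample array here is made up):

```python
import numpy as np

x = np.array([1, 2, 2, 3, 3, 3])
unique_items, counts = np.unique(x, return_counts=True)

assert unique_items.tolist() == [1, 2, 3]
assert counts.tolist() == [1, 2, 3]   # one count per unique value
```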
Why is the mean larger than the max in this array? | 26,048,299 | 3 | 2014-09-25T21:20:32Z | 26,048,546 | 7 | 2014-09-25T21:36:56Z | [
"python",
"numpy",
"mean"
] | I have found myself with a very confusing array in Python. The following is the output from IPython when I work with it (with the pylab flag):
```
In [1]: x = np.load('x.npy')
In [2]: x.shape
Out[2]: (504000,)
In [3]: x
Out[3]:
array([ 98.20354462, 98.26583099, 98.26529694, ..., 98.20297241,
98.1987686... | Probably due to accumulated rounding error in computing mean(). float32 relative precision is ~ 1e-7, and you have 500000 elements -> ~ 5% rounding in direct computation of sum().
The algorithm for computing sum() and mean() is more sophisticated (pairwise summation) in the latest Numpy version 1.9.0:
```
>>> import ... |
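The accumulation effect can be reproduced without the original `x.npy`: naively summing ~500,000 identical float32 values left-to-right drifts upward once the accumulator gets large, while pairwise summation (numpy >= 1.9) stays accurate. A sketch with values chosen to mimic the question's array:

```python
import numpy as np

x = np.full(500_000, 98.2, dtype=np.float32)

acc = np.float32(0.0)
for v in x:             # naive left-to-right float32 accumulation
    acc = acc + v
naive_mean = acc / np.float32(x.size)

pairwise_mean = x.mean()   # uses pairwise summation in recent numpy

assert naive_mean > x.max()               # the "mean larger than max" effect
assert abs(float(pairwise_mean) - 98.2) < 0.01  # pairwise stays accurate
```

Once the running sum exceeds 2**25, the float32 spacing is 4, so each addition of 98.2 rounds up to +100 and the error grows systematically.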
line contains NULL byte error in python csv reader | 26,050,968 | 2 | 2014-09-26T02:07:58Z | 26,051,176 | 7 | 2014-09-26T02:33:47Z | [
"python",
"csv",
"null"
] | I am trying to read each line of a csv file and get a "line contains NULL byte" error.
```
reader = csv.reader(open(mycsv, 'rU'))
for line in reader:
print(line)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_csv.Error: line contains NULL byte
```
Using the below I found that I have n... | Try this!
```
import csv
def mycsv_reader(csv_reader):
    while True:
        try:
            yield next(csv_reader)
        except csv.Error:
            # handle or log the bad line as you see fit, then skip it
            continue
        except StopIteration:
            return  # required on Python 3.7+ (PEP 479)
if __name__ == '__main__':
    reader = mycsv_reader(csv.reader(open(mycsv, 'rU')))
fo... |
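An alternative to skipping whole lines is to strip the NUL bytes before the `csv` module sees them; `csv.reader` accepts any iterable of strings, so a generator works (illustrative sketch with made-up data):

```python
import csv
import io

raw = "a,b\x00,c\nd,e,f\n"   # CSV text containing a stray NUL byte
cleaned = (line.replace("\x00", "") for line in io.StringIO(raw))
rows = list(csv.reader(cleaned))

assert rows == [["a", "b", "c"], ["d", "e", "f"]]
```

With a real file, replace `io.StringIO(raw)` with the open file object.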
What is the priority of importing a name, submodule or subpackage from a package in python 2.7? | 26,051,874 | 5 | 2014-09-26T04:07:29Z | 26,068,320 | 8 | 2014-09-26T21:06:15Z | [
"python",
"python-2.7",
"import",
"python-internals"
] | In the [official Python 2 tutorial](https://docs.python.org/2/tutorial/modules.html#packages) it says:
> Note that when using **from a\_package import an\_item**, the item can be either a submodule (or subpackage) of the package, or some other name defined in the package, like a function, class or variable.
Then what... | The next sentence after the one you quote in your question [confirms](https://docs.python.org/2/tutorial/modules.html#packages) that names defined within a package ("variables", to use your wording) take precedence over submodules/packages:
> The `import` statement first tests whether the item is defined in the packag... |
Counting non zero values in each column of a dataframe in python | 26,053,849 | 5 | 2014-09-26T07:04:13Z | 34,156,147 | 10 | 2015-12-08T12:39:13Z | [
"python",
"pandas",
"dataframe"
] | I have a python-pandas-dataframe in which the first column is user\_id and the rest of the columns are tags (tag\_0 to tag\_122).
I have the data in the following format:
```
UserId   Tag_0  Tag_1
7867688  0      5
7867688  0      3
7867688  3      0
7867688  3.5    3.5
7867688  4      4
7867688  3.5    0
```
My aim is to achieve `Sum(Tag)/Count(Non... | My favorite way of getting the number of nonzeros in each row is
```
df.astype(bool).sum(axis=1)
``` |
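Note the question asks for counts per *column*, which is `axis=0`. A small sketch using the question's data shape (values made up) that also computes the Sum/Count ratio the asker wanted:

```python
import pandas as pd

df = pd.DataFrame({"Tag_0": [0, 0, 3, 3.5, 4, 3.5],
                   "Tag_1": [5, 3, 0, 3.5, 4, 0]})

nonzero = df.astype(bool).sum(axis=0)   # axis=0 -> one count per column
assert nonzero.tolist() == [4, 4]

ratio = df.sum() / nonzero              # Sum(Tag) / Count(non-zero Tag)
assert ratio["Tag_0"] == 3.5            # (3 + 3.5 + 4 + 3.5) / 4
```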
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 26,053,982 | 95 | 2014-09-26T07:12:25Z | 26,057,301 | 36 | 2014-09-26T10:19:58Z | [
"python",
"compiler-errors",
"odoo"
] | When I try to install odoo-server I get the following error. Could anyone help me resolve this?
```
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
``` | ```
$ sudo apt-get install gcc
$ sudo apt-get install python-dateutil python-docutils python-feedparser python-gdata python-jinja2 python-ldap python-libxslt1 python-lxml python-mako python-mock python-openid python-psycopg2 python-psutil python-pybabel python-pychart python-pydot python-pyparsing python-reportlab pyth... |
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 26,053,982 | 95 | 2014-09-26T07:12:25Z | 26,333,011 | 89 | 2014-10-13T04:44:52Z | [
"python",
"compiler-errors",
"odoo"
] | When I try to install odoo-server I get the following error. Could anyone help me resolve this?
```
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
``` | Try installing these packages.
```
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl li... |
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 26,053,982 | 95 | 2014-09-26T07:12:25Z | 30,279,877 | 77 | 2015-05-16T19:30:15Z | [
"python",
"compiler-errors",
"odoo"
] | When I try to install odoo-server I get the following error. Could anyone help me resolve this?
```
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
``` | > Python.h is nothing but a header file. It is used by gcc to build applications. You need to install a package called python-dev. This package includes header files, a static library and development tools for building Python modules, extending the Python interpreter or embedding Python in applications.
enter:
```
$ ... |
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 26,053,982 | 95 | 2014-09-26T07:12:25Z | 33,874,511 | 12 | 2015-11-23T15:19:25Z | [
"python",
"compiler-errors",
"odoo"
] | When I try to install odoo-server I get the following error. Could anyone help me resolve this?
```
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
``` | I encountered the same problem in college, having installed Linux Mint for the main project of my final year; the first solution below worked for me.
```
$ sudo apt-get install python3-dev
```
**Other Solutions:**
```
$ sudo apt-get install python-dev
``` |
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 26,053,982 | 95 | 2014-09-26T07:12:25Z | 35,164,888 | 21 | 2016-02-02T22:19:31Z | [
"python",
"compiler-errors",
"odoo"
] | When I try to install odoo-server I get the following error. Could anyone help me resolve this?
```
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
``` | You need to install these packages:
```
sudo apt-get install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev libsasl2-dev libffi-dev
``` |
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 26,053,982 | 95 | 2014-09-26T07:12:25Z | 35,908,925 | 16 | 2016-03-10T06:11:10Z | [
"python",
"compiler-errors",
"odoo"
] | When I try to install odoo-server I get the following error. Could anyone help me resolve this?
```
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
``` | In my case, the missing package was **libffi-dev**.
What worked:
```
sudo apt-get install libffi-dev
``` |
Correct fitting with scipy curve_fit including errors in x? | 26,058,792 | 7 | 2014-09-26T11:45:16Z | 26,085,136 | 8 | 2014-09-28T13:07:42Z | [
"python",
"numpy",
"scipy",
"curve-fitting"
] | I'm trying to fit a histogram with some data in it using `scipy.optimize.curve_fit`. If I want to add an error in `y`, I can simply do so by applying a `weight` to the fit. But how do I apply the error in `x` (i.e. the error due to binning in the case of histograms)?
My question also applies to errors in `x` when making a ... | `scipy.optimize.curve_fit` uses standard non-linear least squares optimization and therefore only minimizes the deviation in the response variables. If you want an error in the independent variable to be considered, you can try `scipy.odr`, which uses orthogonal distance regression. As its name suggests it minimiz...
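A minimal `scipy.odr` sketch along the lines the answer suggests (exact linear data, so the fit should recover the true parameters; the `sx`/`sy` uncertainties are made-up stand-ins for the binning errors):

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

def linear(beta, x):
    # model: y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                       # exact data -> expect beta ~ [2, 1]
data = RealData(x, y, sx=0.1, sy=0.1)   # assumed uncertainties in x and y
output = ODR(data, Model(linear), beta0=[1.0, 0.0]).run()

assert np.allclose(output.beta, [2.0, 1.0], atol=1e-4)
```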
Build a wheel/egg and all dependencies for a python project | 26,059,111 | 5 | 2014-09-26T12:02:46Z | 26,066,961 | 7 | 2014-09-26T19:28:39Z | [
"python",
"windows",
"egg",
"python-wheel"
] | In order to stage python project within our corporation I need to make an installable distribution.
This should include:
* An egg or whl for my project
* An egg or whl for every dependency of the project
* (optionally) produce a requirements.txt file listing all the installable components for this release
Is there a... | You will need to create a `setup.py` file for your package. Make sure you have the latest setuptools and pip installed. Then run the following:
```
python setup.py bdist_wheel
```
This will create a wheel file for your package. This assumes you don't have C/C++ headers, DLLs, etc. If you do, then you'll probably have... |
Adding modules from opencv_contrib to OpenCV | 26,059,134 | 5 | 2014-09-26T12:04:04Z | 28,156,123 | 7 | 2015-01-26T18:13:15Z | [
"python",
"opencv"
] | I'm trying to add the `xfeatures2d` module from `opencv_contrib` to an existing OpenCV/Python project.
I've downloaded the latest version of the module from [the repo](https://github.com/itseez/opencv_contrib), and built OpenCV again with the following additional params:
```
OPENCV_EXTRA_MODULES_PATH=/path/to/opencv_... | I encountered this same issue. I'm using python 2.7.6 and OpenCv 3.0 with the additional non-free modules. I do have xfeatures2d present in available modules and can import it, however it was as though xfeatures2d didn't contain SIFT or SURF. No matter how I called them it was the same Error:
> "AttributeError: 'modul... |
On Error Resume Next in Python | 26,059,424 | 4 | 2014-09-26T12:18:59Z | 26,059,587 | 9 | 2014-09-26T12:28:18Z | [
"python",
"exception",
"error-handling"
] | ### Snippet 1
```
do_magic() # Throws exception, doesn't execute do_foo and do_bar
do_foo()
do_bar()
```
### Snippet 2
```
try:
    do_magic() # Doesn't throw exception, doesn't execute do_foo and do_bar
    do_foo()
    do_bar()
except:
    pass
```
### Snippet 3
```
try: do_magic(); except: pass
try: do_foo() ... | In Python 3.4 onwards, you can use [`contextlib.suppress`](https://docs.python.org/3.4/library/contextlib.html#contextlib.suppress):
```
from contextlib import suppress
with suppress(Exception): # or, better, a more specific error (or errors)
    do_magic()
with suppress(Exception):
    do_foo()
with suppress(Excepti... |
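A self-contained version of the pattern (error types and block bodies are illustrative):

```python
from contextlib import suppress

executed = []
with suppress(ZeroDivisionError):
    1 / 0                      # raises; suppress swallows it
    executed.append("after")   # never reached, the rest of the block is skipped
with suppress(ZeroDivisionError):
    executed.append("second block still runs")

assert executed == ["second block still runs"]
```

Each `with suppress(...)` block is independent, which is what makes this a reasonable stand-in for VB's `On Error Resume Next` style.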
Python multiprocessing with pathos | 26,059,764 | 6 | 2014-09-26T12:37:22Z | 26,619,456 | 7 | 2014-10-28T22:16:08Z | [
"python",
"multiprocessing",
"pool",
"pathos"
] | I am trying to use Python's pathos to delegate computations to separate processes in order to accelerate them with a multicore processor. My code is organized like:
```
class:
    def foo(self,name):
        ...
        setattr(self,name,something)
        ...
    def boo(self):
        for name in list:
            self.foo(name)
```
A... | I'm the `pathos` author. I'm not sure what you want to do from your code above.
However, I can maybe shed some light. Here's some similar code:
```
>>> from pathos.multiprocessing import ProcessingPool
>>> class Bar:
... def foo(self, name):
... return len(str(name))
... def boo(self, things):
... for thin... |
Python multi-thread multi-interpreter C API | 26,061,298 | 8 | 2014-09-26T13:54:44Z | 26,570,708 | 7 | 2014-10-26T07:19:02Z | [
"python",
"multithreading",
"python-c-api"
] | I'm playing around with the C API for Python, but it is quite difficult to understand some corner cases. I could test it, but that seems bug-prone and time-consuming. So I came here to see if somebody has already done this.
The question is, which is the correct way to manage a multi-thread with sub-interpreters, with ... | Sub-interpreters in Python are not well documented or even well supported. The following is to the best of my understanding. It seems to work well in practice.
There are two important concepts to understand when dealing with threads and sub-interpreters in Python. First, the Python interpreter is not really multi thre... |
"relation already exists" after adding a Many2many field in odoo | 26,062,915 | 4 | 2014-09-26T15:19:10Z | 26,101,401 | 10 | 2014-09-29T13:43:52Z | [
"python",
"postgresql",
"openerp",
"relationship",
"odoo"
] | I've defined the following two odoo ORM models:
```
class Weekday(models.Model):
    _name = 'ludwik.offers.weekday'
    name = fields.Char()

class Duration(models.Model):
    _name = 'ludwik.offers.duration'
    weekday = fields.Many2many('ludwik.offers.weekday')
```
When I try to start odoo I get the following mes... | I figured this out. I have to say, I think this technically qualifies as a bug in Odoo.
**Summary**
The names of my models were too long. Every time you set a `_name` property longer than 16 characters you are setting yourself up to potentially experience this problem.
**Details**
When you create a `Many2many` rela... |
How does `numpy.einsum` work? | 26,064,594 | 3 | 2014-09-26T16:56:12Z | 26,065,148 | 9 | 2014-09-26T17:29:20Z | [
"python",
"arrays",
"numpy"
] | The correct way of writing a summation in terms of Einstein summation is a puzzle to me, so I want to try it in my code. I have succeeded in a few cases but mostly with trial and error.
Now there is a case that I cannot figure out. First, a basic question. For two matrices `A` and `B` that are `Nx1` and `1xN`, respect... | ```
In [22]: a
Out[22]: array([[1, 2]])
In [23]: b
Out[23]: array([[2, 3]])
In [24]: np.einsum('ij,ij->ij',a,b)
Out[24]: array([[2, 6]])
In [29]: a*b
Out[29]: array([[2, 6]])
```
Here the repetition of the indices in all parts, including output, is interpreted as element by element multiplication. Nothing is summed. `... |
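A few checks one can run to confirm this reading of the subscripts (the `(1, 2)` arrays from the answer, plus a made-up matrix-product case):

```python
import numpy as np

a = np.array([[1, 2]])
b = np.array([[2, 3]])

# repeated indices everywhere, including the output: elementwise product
assert np.array_equal(np.einsum('ij,ij->ij', a, b), a * b)
# dropping the output indices sums everything: 1*2 + 2*3 = 8
assert np.einsum('ij,ij->', a, b) == 8
# the classic matrix product: sum over the shared index j
A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert np.array_equal(np.einsum('ij,jk->ik', A, B), A.dot(B))
```

An index that appears in the inputs but not after `->` is summed over; one that appears in the output is kept.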
should pytest et al. go in tests_require[] or extras_require{testing[]}? | 26,064,738 | 17 | 2014-09-26T17:05:34Z | 32,513,360 | 7 | 2015-09-10T23:52:50Z | [
"python",
"setuptools",
"setup.py"
] | I am writing a python program which uses py.test for testing and now one test also depends on numpy. Where in my setup.py should I add those dependencies?
Currently the relevant part of my setup.py looks something like this:
```
[...]
'version': '0.0.1',
'install_requires': [],
'tests_require': ['pytest'],
'cmdclass'... | ## According to the docs
**[tests\_require](https://pythonhosted.org/setuptools/setuptools.html)** are additional packages that are obtained when using [*setuptools*'s *test*](https://pythonhosted.org/setuptools/setuptools.html#test-build-package-and-run-a-unittest-suite) command. They are not installed on the system.... |
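To make the distinction concrete, a hypothetical `setup.py` fragment (package and dependency names are illustrative): `extras_require` extras are installable via `pip install mypkg[testing]`, while `tests_require` is only fetched by `python setup.py test`.

```python
# hypothetical setup.py fragment -- not a complete script
setup(
    name='mypkg',
    version='0.0.1',
    install_requires=[],
    tests_require=['pytest'],                         # setup.py test only
    extras_require={'testing': ['pytest', 'numpy']},  # pip install mypkg[testing]
)
```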
What is the "&=" operator and why does Twilio use it when comparing strings? | 26,066,846 | 7 | 2014-09-26T19:20:59Z | 26,066,879 | 12 | 2014-09-26T19:23:32Z | [
"python",
"operators",
"twilio"
] | What is `&=` in python?
For example:
```
for c1, c2 in izip(string1, string2):
    result &= c1 == c2
```
I found it in the twilio python library:
<https://github.com/twilio/twilio-python/blob/master/twilio/util.py#L62>
Why don't they just compare the strings directly `return string1 == string2` and compare each ch... | See the [secure\_compare](https://github.com/twilio/twilio-python/blob/master/twilio/util.py#L62) doctring:
> Compare two strings while protecting against [Timing Attacks](http://en.wikipedia.org/wiki/Timing_attack)
By *forcing* evaluation of every character an attacker can't use the time it took to guess where the d... |
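The Twilio-style loop and the stdlib's purpose-built alternative side by side (the `naive_constant_time_eq` helper is a made-up illustration using `itertools.zip_longest` in place of Python 2's `izip`; prefer `hmac.compare_digest` in real code):

```python
import hmac
from itertools import zip_longest

def naive_constant_time_eq(s1, s2):
    # Twilio-style loop: inspect every character so timing
    # doesn't leak where the first mismatch is
    result = len(s1) == len(s2)
    for c1, c2 in zip_longest(s1, s2, fillvalue="\0"):
        result &= c1 == c2   # &= : result = result & (c1 == c2)
    return result

assert naive_constant_time_eq("abc", "abc")
assert not naive_constant_time_eq("abc", "abd")
assert hmac.compare_digest("abc", "abc")   # stdlib equivalent; prefer this
```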
How to pass model fields to a JsonResponse object | 26,067,369 | 8 | 2014-09-26T19:57:08Z | 26,067,930 | 8 | 2014-09-26T20:38:46Z | [
"python",
"json",
"django",
"httpresponse"
] | Django 1.7 introduced the [JsonResponse objects](https://docs.djangoproject.com/en/1.7/ref/request-response/#jsonresponse-objects), which I try to use to return a list of values to my ajax request.
I want to pass
```
>>> Genre.objects.values('name', 'color')
[{'color': '8a3700', 'name': 'rock'}, {'color': 'ffff00', '... | For future reference, `.values()` returns a `ValuesQuerySet` that behaves like a iterable full of dictionaries, so using the `list()` will make a new instance of a `list` with all the dictionaries in it. With that, you can create a new dict and serialize that.
```
response = JsonResponse(dict(genres=list(Genre.objects... |
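The Django-specific part aside, the serialization step is just "list of dicts to JSON"; a stdlib-only sketch in which the queryset is faked as a plain list:

```python
import json

# Made-up stand-in for list(Genre.objects.values('name', 'color'))
genres = [{"color": "8a3700", "name": "rock"},
          {"color": "ffff00", "name": "pop"}]

payload = json.dumps({"genres": genres})
assert json.loads(payload)["genres"][0]["name"] == "rock"
```

`JsonResponse(dict(genres=...))` performs this `json.dumps` step for you and sets the content type.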
Numpy.dtype has the wrong size, try recompiling | 26,067,692 | 7 | 2014-09-26T20:18:10Z | 29,321,298 | 15 | 2015-03-28T19:01:17Z | [
"python",
"numpy",
"pandas"
] | When importing pandas I would get the following error:
`Numpy.dtype has the wrong size, try recompiling`
I am running Python 2.7.5, with Pandas 0.14.1, and Numpy 1.9.0. I have tried installing older versions of both using pip, with major errors every time. I am a beginner when it comes to Python so any help here woul... | I've seen this error before and it typically does have to do with pandas referencing an old version of numpy. But reinstalling may not help if your python path is still pointing to an old version of numpy.
When you install numpy via pip, pip will tell you where it was installed. Something like
```
pip install numpy==... |
Pip does not acknowledge Cython | 26,069,080 | 6 | 2014-09-26T22:16:56Z | 26,086,375 | 14 | 2014-09-28T15:19:44Z | [
"python",
"pip",
"pytables"
] | I just installed pip and Python via home-brew on a fresh Mac OS installation.
First of all, pip is not installing dependencies at all, which forces me to re-run `pip install tables` three times; every time it reports a missing dependency, which I install before rerunning. Is this expected behavior?
Seco... | Upgrading Cython from the upstream Git repo will resolve the problem.
```
pip install --upgrade git+git://github.com/cython/cython@master
``` |
ImportError: No module named 'bottle' - PyCharm | 26,069,254 | 5 | 2014-09-26T22:35:46Z | 26,069,398 | 27 | 2014-09-26T22:51:47Z | [
"python",
"pycharm",
"bottle",
"importerror"
] | I installed bottle on python3.4 with pip install. In terminal, when I do:
```
$ python3.4
>>>import bottle # shows no import error
>>>
```
but when I do it in PyCharm, it says "import bottle ImportError: No module named 'bottle'" | in your PyCharm project:
* press `Ctrl`+`Alt`+`s` to open the settings
* on the left column, select *Project Interpreter*
* on the top right there is a list of python binaries found on your system, pick the right one
* optionally, click the `+` button to install additional python modules
* confirm
![enter image descr... |
Decoding base64 from POST to use in PIL | 26,070,547 | 6 | 2014-09-27T01:56:50Z | 26,079,673 | 13 | 2014-09-27T22:10:23Z | [
"python",
"flask",
"base64",
"python-imaging-library",
"pillow"
] | I'm making a simple API in Flask that accepts an image encoded in base64, then decodes it for further processing using Pillow.
I've looked at some examples ([1](http://stackoverflow.com/questions/19908975/loading-base64-string-into-python-image-library), [2](http://stackoverflow.com/questions/3715493/encoding-an-image... | You should try something like that:
I'll replace the image string with an example I took from Google, just for readability.
```
from PIL import Image
from io import BytesIO
import base64
data['img'] = '''R0lGODlhDwAPAKECAAAAzMzM/////wAAACwAAAAADwAPAAACIISPeQHsrZ5ModrLlN48CXF8m2iQ3YmmKqVlRtW4MLwWACH+H09wdGlta... |
Decoding base64 from POST to use in PIL | 26,070,547 | 6 | 2014-09-27T01:56:50Z | 26,085,215 | 7 | 2014-09-28T13:16:24Z | [
"python",
"flask",
"base64",
"python-imaging-library",
"pillow"
] | I'm making a simple API in Flask that accepts an image encoded in base64, then decodes it for further processing using Pillow.
I've looked at some examples ([1](http://stackoverflow.com/questions/19908975/loading-base64-string-into-python-image-library), [2](http://stackoverflow.com/questions/3715493/encoding-an-image... | There is a metadata prefix of `data:image/jpeg;base64,` being included in the `img` field. Normally this metadata is used in a CSS or HTML data URI when embedding image data into the document or stylesheet. It is there to provide the MIME type and encoding of the embedded data to the rendering browser.
You can strip o... |
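A minimal sketch of that stripping step (the `GIF89a...` bytes are a made-up placeholder, not a real image):

```python
import base64

encoded = base64.b64encode(b"GIF89a...").decode("ascii")
data_uri = "data:image/gif;base64," + encoded

# split off everything up to and including the first comma
header, _, payload = data_uri.partition(",")
assert header == "data:image/gif;base64"
assert base64.b64decode(payload) == b"GIF89a..."
```

The decoded bytes can then be handed to Pillow via `Image.open(BytesIO(...))`.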
How to fix Selenium WebDriverException: The browser appears to have exited before we could connect? | 26,070,834 | 33 | 2014-09-27T02:55:08Z | 27,937,810 | 10 | 2015-01-14T07:32:03Z | [
"python",
"selenium",
"selenium-webdriver",
"webdriver"
] | I have installed firefox and Xvfb on my centos6.4 server to use selenium webdriver.
But when I run the code, I get an error.
```
from selenium import webdriver
browser = webdriver.Firefox()
```
**Error**
```
selenium.common.exceptions.WebDriverException: Message:
'The browser appears to have exited before we coul... | Check your `DISPLAY` environment variable. Run `echo $DISPLAY` in the command line.
If nothing is printed, then you are running FireFox without any DISPLAY assigned. You should assign one! Run `export DISPLAY=:1` in the command line before running your python script.
Check this thread for more information: <http://ha... |
How to fix Selenium WebDriverException: The browser appears to have exited before we could connect? | 26,070,834 | 33 | 2014-09-27T02:55:08Z | 30,103,931 | 44 | 2015-05-07T14:24:42Z | [
"python",
"selenium",
"selenium-webdriver",
"webdriver"
] | I have installed firefox and Xvfb on my centos6.4 server to use selenium webdriver.
But when I run the code, I get an error.
```
from selenium import webdriver
browser = webdriver.Firefox()
```
**Error**
```
selenium.common.exceptions.WebDriverException: Message:
'The browser appears to have exited before we coul... | for Googlers, this answer didn't work for me, and I had to use [this answer](http://stackoverflow.com/questions/13039530/unable-to-call-firefox-from-selenium-in-python-on-aws-machine) instead. I am using AWS Ubuntu.
Basically, I needed to install Xvfb and then pyvirtualdisplay:
```
sudo apt-get install xvfb
sudo pip ... |
How to fix Selenium WebDriverException: The browser appears to have exited before we could connect? | 26,070,834 | 33 | 2014-09-27T02:55:08Z | 37,760,053 | 20 | 2016-06-11T04:53:48Z | [
"python",
"selenium",
"selenium-webdriver",
"webdriver"
] | I have installed firefox and Xvfb on my centos6.4 server to use selenium webdriver.
But when I run the code, I get an error.
```
from selenium import webdriver
browser = webdriver.Firefox()
```
**Error**
```
selenium.common.exceptions.WebDriverException: Message:
'The browser appears to have exited before we coul... | I was running into this on an (headless) Ubuntu 14.04 server with Jenkins and xvfb installed. I had installed the latest stable Firefox (47) which started a build failing that ran a python script which used the Firefox driver for selenium (version 2.53).
Apparently Firefox 47+ is not compatible with the driver used in... |
How to fix Selenium WebDriverException: The browser appears to have exited before we could connect? | 26,070,834 | 33 | 2014-09-27T02:55:08Z | 38,280,345 | 19 | 2016-07-09T09:28:51Z | [
"python",
"selenium",
"selenium-webdriver",
"webdriver"
] | I have installed firefox and Xvfb on my centos6.4 server to use selenium webdriver.
But when I run the code, I get an error.
```
from selenium import webdriver
browser = webdriver.Firefox()
```
**Error**
```
selenium.common.exceptions.WebDriverException: Message:
'The browser appears to have exited before we coul... | I too had faced same problem. I was on Firefox 47 and Selenium 2.53; I downgraded Firefox to 45. This worked.
1. Remove Firefox 47 first :
```
sudo apt-get purge firefox
```
2. Check for available versions:
```
apt-cache show firefox | grep Version
```
It will show available firefox versions li... |
Python - JSON Load from file not working | 26,072,148 | 2 | 2014-09-27T06:56:02Z | 26,072,151 | 9 | 2014-09-27T06:56:48Z | [
"python",
"json",
"dictionary"
] | So I am writing a basic multipurpose script which uses json to import a dictionary from a file but for some reason it doesn't save properly. I've looked all over and can't find anything relating to my exact problem.
Here is my code:
```
import json
dicti = json.loads(open('database.db'))
print(str(dicti))
```
But th... | You want `json.load` for loading a file. `json.loads` is for loading from a string. |
import error django corsheaders | 26,072,426 | 3 | 2014-09-27T07:34:13Z | 26,744,116 | 20 | 2014-11-04T20:10:40Z | [
"python",
"django",
"amazon-ec2",
"django-cors-headers"
] | I am a beginner in Django and am doing a project based on it. The project has two stages. I completed the first phase and uploaded the code to an Amazon EC2 instance. After completing the second phase I added some packages like `python-social-auth`, `django-cors-headers`, `django-easy-maps`, `crispyforms`, but now it is showing import e... | ```
pip install django-cors-headers
``` |
Creating a pandas dataframe from a dictionary | 26,074,447 | 9 | 2014-09-27T12:00:45Z | 26,074,669 | 15 | 2014-09-27T12:28:24Z | [
"python",
"python-2.7",
"pandas"
] | I'd like to create a `DataFrame` from a `dict` where the `dict` `keys` will be the column names and the `dict` `values` will be the rows. I'm trying to use `pandas.DataFrame.from_dict()` to convert my dictionary. Here's my code:
```
import pandas as pd
import datetime
current_time1 = datetime.datetime.now()
record_1 ... | If each dict represents a row, you could pass a *list of dicts* to `pd.DataFrame`:
```
In [37]: pd.DataFrame([record_1])
Out[37]:
Date Difficulty Player Score
0 2014-09-27 08:26:16.950192 hard John 0
``` |
TypeError: takes exactly 1 argument (2 given) | 26,074,993 | 2 | 2014-09-27T13:08:23Z | 26,075,101 | 8 | 2014-09-27T13:21:39Z | [
"python",
"tkinter",
"typeerror"
] | I'm new to programming, learnt python syntax. Stuck at my first GUI program!
Here is my code:
```
#User name
userLabel = Label(self.signView, text="User Name")
userLabel.grid(sticky = E)
self.userEntry = Entry(self.signView)
self.userEntry.grid(row=0, column=1)
self.labelUserVar = StringVar()
self.validLabel = Labe... | The function you pass to `.bind()` [takes the event as an argument](https://docs.python.org/2/library/tkinter.html#bindings-and-events), and it will be passed in regardless of whether you need it.
Change the function definition to
```
def CheckUser(self, event):
    self.labelUserVar.set("unavailable user name!")
``` |
Why am I not restricted from instantiating abstract classes in Python 3.4? | 26,077,258 | 4 | 2014-09-27T17:25:31Z | 26,077,273 | 8 | 2014-09-27T17:26:26Z | [
"python",
"abstraction",
"python-3.4"
] | I have written a Python script and just found out that Python 3.4 does not restrict an abstract class from being instantiated while Python 2.7.8 did.
Here is my abstract class in the file named `Shape.py`.
```
from abc import ABCMeta, abstractmethod
class Shape:
__metaclass__ = ABCMeta # Making the class abstra... | In Python 3, you declare a metaclass differently:
```
class Shape(metaclass=ABCMeta):
```
See the [*Customizing class creation* documentation](https://docs.python.org/3/reference/datamodel.html#metaclasses):
> The class creation process can be customised by passing the `metaclass` keyword argument in the class defin... |
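With the Python 3 spelling, instantiation is blocked again; a runnable sketch (the method name is illustrative):

```python
from abc import ABCMeta, abstractmethod

class Shape(metaclass=ABCMeta):   # Python 3 spelling; __metaclass__ = ABCMeta is ignored
    @abstractmethod
    def area(self):
        ...

raised = False
try:
    Shape()                       # TypeError: Can't instantiate abstract class
except TypeError:
    raised = True
assert raised
```

On Python 3.4+, subclassing `abc.ABC` is an equivalent, slightly shorter spelling.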
Flask, how to return a success status code for ajax call | 26,079,754 | 10 | 2014-09-27T22:23:56Z | 26,080,784 | 16 | 2014-09-28T01:29:04Z | [
"jquery",
"python",
"ajax",
"flask"
] | On the server-side, I am just printing out the json-as-dictionary to the console
```
@app.route('/',methods=['GET','POST'])
@login_required
def index():
    if request.method == "POST":
        print request.json.keys()
    return "hello world"
```
Now, whenever I make a post request via ajax, the console prints out ... | [About Responses](http://flask.pocoo.org/docs/0.10/quickstart/#about-responses) in Flask:
> # About Responses
>
> The return value from a view function is automatically converted into a response object for you. If the return value is a string it's converted into a response object with the string as response body, an... |
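Returning a `(body, status)` tuple from the view is the usual way to set an explicit status code for an ajax call; a minimal sketch assuming Flask is installed (route and payload are made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/", methods=["POST"])
def index():
    # (body, status) tuple sets the response code explicitly
    return jsonify(ok=True), 201

client = app.test_client()
resp = client.post("/")
assert resp.status_code == 201
assert resp.get_json() == {"ok": True}
```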
ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value | 26,080,303 | 3 | 2014-09-27T23:57:43Z | 26,080,380 | 9 | 2014-09-28T00:12:56Z | [
"python",
"django",
"heroku",
"django-postgresql"
] | I am in the midst of setting up my django project on heroku. I have been following the documentation, but when I `foreman start` I receive an error that I can't quite figure out. I have set up my engine files, but it doesn't seem to want to work.
**Full traceback:**
```
Traceback (most recent call last):
File "mana... | You are using the [`dj-database-url` module](https://github.com/kennethreitz/dj-database-url) to set `DATABASES['default']`. Whatever comes before the line:
```
DATABASES['default'] = dj_database_url.config()
```
is meaningless as you replace your database configuration in its entirety. The `dj_database_url.config()... |
secret key not set in flask session | 26,080,872 | 17 | 2014-09-28T01:47:54Z | 26,080,974 | 18 | 2014-09-28T02:07:59Z | [
"python",
"session",
"flask"
] | I am having 0 luck getting a session working in Flask (a Python module).
Right now I am using a flask 3rd party library [Flask-Session](http://pythonhosted.org/Flask-Session/)
When I connect to my site, I get the following error:
> RuntimeError: the session is unavailable because no secret key was
> set. Set the sec... | The exception is raised by the `NullSessionInterface` session implementation, which is the *default session type* when you use Flask-Session. That's because you don't ever actually give the `SESSION_TYPE` configuration *to Flask*; it is *not enough* to set it as a global in your module.
This default doesn't make much ... |
secret key not set in flask session | 26,080,872 | 17 | 2014-09-28T01:47:54Z | 33,898,263 | 13 | 2015-11-24T15:58:13Z | [
"python",
"session",
"flask"
] | I am having 0 luck getting a session working in Flask (a Python module).
Right now I am using a flask 3rd party library [Flask-Session](http://pythonhosted.org/Flask-Session/)
When I connect to my site, I get the following error:
> RuntimeError: the session is unavailable because no secret key was
> set. Set the sec... | Set the secret key outside of `if __name__ == '__main__':`
```
from flask import Flask, session
app = Flask(__name__)
app.secret_key = "super secret key"
@app.route("/")
...
if __name__ == '__main__':
app.debug = True
app.run()
``` |
Pandas: Subtract row mean from each element in row | 26,081,300 | 5 | 2014-09-28T03:24:57Z | 26,084,093 | 14 | 2014-09-28T10:44:55Z | [
"python",
"pandas",
"dataframe"
] | I have a dataframe with rows indexed by chemical element type and columns representing different samples. The values are floats representing the degree of presence of the row element in each sample.
I want to compute the mean of each row and subtract it from each value in that specific row to normalize the data, and m... | You could use DataFrame's `sub` method and specify that the subtraction should happen row-wise (`axis=0`) as opposed to the default column-wise behaviour:
```
df.sub(df.mean(axis=1), axis=0)
```
Here's an example:
```
>>> df = pd.DataFrame({'a': [1.5, 2.5], 'b': [0.25, 2.75], 'c': [1.25, 0.75]})
>>> df
a b ... |
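A runnable version of that example, with a sanity check that every row of the result now averages to (numerically) zero:

```python
import pandas as pd

df = pd.DataFrame({'a': [1.5, 2.5], 'b': [0.25, 2.75], 'c': [1.25, 0.75]})
demeaned = df.sub(df.mean(axis=1), axis=0)  # subtract each row's own mean, row-wise
print(demeaned)
print(demeaned.mean(axis=1))                # ~0 for every row
```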
ImproperlyConfigured: You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings | 26,082,128 | 18 | 2014-09-28T06:00:13Z | 26,089,302 | 27 | 2014-09-28T20:27:16Z | [
"python",
"django",
"heroku",
"django-settings"
] | I was trying to configure my django project to deploy to heroku. I am getting the following error and I don't really know how to fix it.
Here is the **full traceback** and error:
```
22:46:15 web.1 | Traceback (most recent call last):
22:46:15 web.1 | File "/Users/nir/nirla/venv/lib/python2.7/site-packages/gunico... | I figured that the **DJANGO\_SETTINGS\_MODULE** had to be set some way, so I looked at the [documentation](https://docs.djangoproject.com/en/1.10/topics/settings/#designating-the-settings) (link updated) and found:
```
export DJANGO_SETTINGS_MODULE=mysite.settings
```
Though that is not enough if you are running a se... |
ImproperlyConfigured: You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings | 26,082,128 | 18 | 2014-09-28T06:00:13Z | 28,297,987 | 16 | 2015-02-03T11:39:28Z | [
"python",
"django",
"heroku",
"django-settings"
] | I was trying to configure my django project to deploy to heroku. I am getting the following error and I don't really know how to fix it.
Here is the **full traceback** and error:
```
22:46:15 web.1 | Traceback (most recent call last):
22:46:15 web.1 | File "/Users/nir/nirla/venv/lib/python2.7/site-packages/gunico... | From [The Definitive Guide to Django: Web Development Done Right](https://books.google.com/books?id=Gpr7J7-FFmwC&lpg=PA42&ots=_vTIfpKGYF&pg=PA42#v=onepage&q&f=false):
> If you've used **Python** before, you may be wondering why we're
> running **python manage.py shell** instead of just **python**. Both
> commands ... |
Calculating Autocorrelation of Pandas DataFrame along each Column | 26,083,293 | 2 | 2014-09-28T09:02:32Z | 27,164,416 | 8 | 2014-11-27T06:32:31Z | [
"python",
"numpy",
"pandas"
] | I want to calculate the autocorrelation coefficients of lag length one among columns of a Pandas DataFrame. A snippet of my data is:
```
RF PC C D PN DN P
year
1890 NaN NaN N... | This is a late answer, but for future users, you can also use the pandas.Series.autocorr(), which calculates lag-N (default=1) autocorrelation on Series:
```
df['C'].autocorr(lag=1)
```
<http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.autocorr.html#pandas.Series.autocorr> |
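A toy check: a perfectly linear series correlates perfectly with itself shifted by one, so its lag-1 autocorrelation is 1 (up to floating point):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
print(s.autocorr(lag=1))  # correlation of [1,2,3,4] with [2,3,4,5]: 1 up to rounding
```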
Upload Files To Google Cloud Storage With Google App Engine (Python) | 26,083,985 | 6 | 2014-09-28T10:30:49Z | 26,095,127 | 7 | 2014-09-29T08:00:41Z | [
"python",
"google-app-engine",
"google-cloud-storage"
] | I'm trying to set up a basic python-based google app engine site that allows users to upload files to google cloud storage (mostly images)
I've been going through the documentation for the JSON API and the GCS client library overview (as well as blobstore etc) and still don't have a good handle on which is the best me... | Google Cloud Storage has two APIs -- the [XML API](https://cloud.google.com/storage/docs/xml-api-overview) and the [JSON API](https://cloud.google.com/storage/docs/json_api/). The XML API is XML based and very like the Amazon S3 API. The JSON API is similar to many other Google APIs, and it works with the standard Goog... |
Python wx import fails Mac OS X 10.9.5 using Anaconda | 26,084,926 | 2 | 2014-09-28T12:39:41Z | 26,102,341 | 8 | 2014-09-29T14:31:31Z | [
"python",
"osx",
"wxpython",
"anaconda"
] | I installed Python wx from the official website. However when I try to import wx, I got the following error message:
```
import wx Traceback (most recent call last):
File "<stdin>", line 1,
in <module> ImportError: No module named wx
```
I tried to run 32bit and 64bit version, but it still does not work. I am using t... | Anaconda is not linked to the default python folder, so whether brew nor the installer could be linked to the anaconda distribution.
```
conda install wxpython
```
does the trick |
How to convert IETF BCP 47 language identifier to ISO-639-2? | 26,085,570 | 4 | 2014-09-28T13:53:16Z | 26,085,763 | 7 | 2014-09-28T14:14:14Z | [
"python",
"ios",
"iso-639-2",
"ietf-bcp-47"
] | I am writing a server API for an iOS application. As a part of the initialization process, the app should send the phone interface language to server via an API call.
The problem is that Apple uses something called [IETF BCP 47 language identifier](http://tools.ietf.org/html/bcp47) in its [`NSLocale preferredLanguages... | BCP 47 identifiers start with a 2 letter ISO 639-1 *or* 3 letter 639-2, 639-3 or 639-5 language code; see the [RFC 5646 Syntax section](http://tools.ietf.org/html/rfc5646#section-2.1):
> ```
> Language-Tag = langtag ; normal language tags
> / privateuse ; private use tag
> ... |
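Because the primary language subtag always comes first, a simplistic way to pull it out before mapping to an ISO 639 code is to split on `-`. This is a sketch only: it ignores grandfathered and private-use tags, and the helper name is mine, not part of any library:

```python
def primary_subtag(tag):
    """Return the primary language subtag of a BCP 47 tag, e.g. 'en' from 'en-US'."""
    return tag.split('-')[0].lower()

print(primary_subtag('en-US'))    # en  (2-letter ISO 639-1 code)
print(primary_subtag('zh-Hant'))  # zh
print(primary_subtag('haw'))      # haw (3-letter ISO 639-2 code)
```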
matplotlib font not found | 26,085,867 | 6 | 2014-09-28T14:27:15Z | 26,106,170 | 13 | 2014-09-29T18:11:09Z | [
"python",
"matplotlib",
"fonts"
] | I'm trying to use the font "Heuristica" in my matplotlib plots, but it won't show up.
I defined "Heuristica" on the first spot in the rcParameter font.serif --> no result
I changed font.family to "Heuristica" and got the message
```
findfont: FontFamily not found
```
that got me thinking, because Heuristica is inst... | Well, mdboom solved the problem over at [github](https://github.com/matplotlib/matplotlib/issues/3590), all the credit belongs to him:
> When you add new fonts to your system, you need to delete your fontList.cache file in order for matplotlib to find them.
>
> The reason it works on lines 4/5 in your example is becau... |
Understanding NumPy's einsum | 26,089,893 | 15 | 2014-09-28T21:33:51Z | 33,641,428 | 34 | 2015-11-10T23:10:37Z | [
"python",
"numpy",
"numpy-einsum"
] | I'm struggling to understand exactly how `einsum` works. I've looked at the documentation and a few examples, but it's not seeming to stick.
Here's an example we went over in class:
```
C = np.einsum("ij,jk->ki", A, B)
```
for two arrays`A` and `B`
I think this would take `A^T * B`, but I'm not sure (it's taking th... | (Note: this answer is based on a short [blog post](http://ajcr.net/Basic-guide-to-einsum/) about `einsum` I wrote a while ago.)
### What does `einsum` do?
Imagine that we have two multi-dimensional arrays, `A` and `B`. Now let's suppose we want to...
* *multiply* `A` with `B` in a particular way to create new array ... |
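To pin down the class example concretely: `np.einsum("ij,jk->ki", A, B)` sums over the shared index `j` (a matrix product) and orders the output axes `k,i`, i.e. it computes `(A @ B).T`:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# "ij,jk->ki": contract over the repeated index j, transpose the result
C = np.einsum('ij,jk->ki', A, B)
print(np.array_equal(C, (A @ B).T))
```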
Python floating point number comparison | 26,091,689 | 6 | 2014-09-29T02:19:49Z | 26,091,722 | 9 | 2014-09-29T02:25:18Z | [
"python",
"floating-point"
] | I'm just reviewing some basics of Python and there's a tricky problem about comparing floating point numbers.
```
2.2 * 3.0 == 6.6
3.3 * 2.0 == 6.6
```
I thought these should both return a False. However, the second one gave me a True.

Please help m... | This might be illuminating:
```
>>> float.hex(2.2 * 3.0)
'0x1.a666666666667p+2'
>>> float.hex(3.3 * 2.0)
'0x1.a666666666666p+2'
>>> float.hex(6.6)
'0x1.a666666666666p+2'
```
Although they are all displayed in decimal as `6.6`, when you inspect the internal representation, two of them are represented in the same way, ... |
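The practical fix is to compare floats with a tolerance instead of `==`, e.g. with `math.isclose` (Python 3.5+):

```python
import math

print(2.2 * 3.0 == 6.6)              # False: the product is off by one unit in the last place
print(3.3 * 2.0 == 6.6)              # True: this product rounds to the same double
print(math.isclose(2.2 * 3.0, 6.6))  # True: equal within the default relative tolerance
```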
Is Python *with* statement exactly equivalent to a try - (except) - finally block? | 26,096,435 | 13 | 2014-09-29T09:18:48Z | 26,096,914 | 16 | 2014-09-29T09:44:11Z | [
"python",
"with-statement",
"contextmanager"
] | I know this was widely discussed, but I still can't find an answer to confirm this: is the *with* statement identical to calling the same code in a try - (except) -finally block, where whatever one defines in the `__exit__` function of the context manager is placed in the finally block?
For example -- are these 2 code... | I'm going to put aside mentions of scope, because it's really not very relevant.
According to [PEP 343](http://www.python.org/dev/peps/pep-0343/),
```
with EXPR as VAR:
BLOCK
```
translates to
```
mgr = (EXPR)
exit = type(mgr).__exit__ # Not calling it yet
value = type(mgr).__enter__(mgr)
exc = True
try:
t... |
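The equivalence can be observed directly: `__exit__` runs even when the block raises, exactly like a `finally` clause (a minimal sketch, not the PEP's full expansion):

```python
calls = []

class Managed:
    def __enter__(self):
        calls.append('enter')
        return self

    def __exit__(self, exc_type, exc, tb):
        calls.append('exit')   # runs despite the exception, like finally
        return False           # do not suppress the exception

try:
    with Managed():
        calls.append('body')
        raise ValueError('boom')
except ValueError:
    calls.append('caught')

print(calls)  # ['enter', 'body', 'exit', 'caught']
```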
python, best way to convert a pandas series into a pandas dataframe | 26,097,916 | 3 | 2014-09-29T10:38:16Z | 26,098,292 | 8 | 2014-09-29T10:59:01Z | [
"python",
"pandas",
"dataframe",
"series"
] | I have a Pandas series sf:
```
email
[email protected] [1.0, 0.0, 0.0]
[email protected] [2.0, 0.0, 0.0]
[email protected] [1.0, 0.0, 0.0]
[email protected] [4.0, 0.0, 0.0]
[email protected] [1.0, 0.0, 3.0]
[email protected] [1.0, 5.0, 0.0]
```
And I would like to transform it to the following DataFrame:
... | Rather than create 2 temporary dfs you can just pass these as params within a dict using the DataFrame constructor:
```
pd.DataFrame({'email':sf.index, 'list':sf.values})
```
There are lots of ways to construct a df, see the [docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html#pandas.Dat... |
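A runnable version, shortened to three rows with placeholder addresses standing in for the asker's anonymized e-mails:

```python
import pandas as pd

sf = pd.Series(
    [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 5.0, 0.0]],
    index=['a@x', 'b@x', 'c@x'],
)
sf.index.name = 'email'

# One constructor call; no temporary frames needed
df = pd.DataFrame({'email': sf.index, 'list': sf.values})
print(df)
```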
Importing a CSV file in pandas into a pandas dataframe | 26,098,114 | 6 | 2014-09-29T10:49:22Z | 26,099,684 | 10 | 2014-09-29T12:14:03Z | [
"python",
"csv",
"pandas"
] | I have a CSV file taken from a SQL dump that looks like the below (first few lines using head file.csv from terminal):
```
??AANAT,AANAT1576,4
AANAT,AANAT1704,1
AAP,AAP-D-12-00691,8
AAP,AAP-D-12-00834,3
```
When I use the pd.read\_csv('file.csv') command I get an error "ValueError: No columns to parse from file".
An... | So the `??` characters you see are in fact non-printable characters; looking at your raw csv file in a hex editor shows that it starts with the [utf-16 little endian](http://en.wikipedia.org/wiki/Byte_order_mark#UTF-16) Byte-Order-Mark, `\xFF\xFE`.
So all you need to do is to pass this as the encod... |
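Something along these lines, reconstructed with an in-memory buffer since the truncation cuts off the exact call; the sample rows mimic the asker's dump:

```python
import io
import pandas as pd

# Simulate the SQL dump: text encoded as UTF-16 with a leading BOM
raw = 'AANAT,AANAT1576,4\nAANAT,AANAT1704,1\n'.encode('utf-16')

df = pd.read_csv(io.BytesIO(raw), encoding='utf-16', header=None)
print(df)
```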
CSV read error: new-line character seen in unquoted field | 26,102,302 | 8 | 2014-09-29T14:29:19Z | 26,102,347 | 21 | 2014-09-29T14:32:07Z | [
"python",
"csv"
] | I created a python script which works with a test CSV dataset of 10 records. When I scaled this up to the actual datasets (a few thousand rows), I am getting the following error:
> \_csv.Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?
The code is as follows:
... | From [PEP-0278](http://www.python.org/dev/peps/pep-0278/):
> In a Python with universal newline support open() the mode parameter
> can also be "U", meaning "open for input as a text file with universal
> newline interpretation". Mode "rU" is also allowed, for symmetry with
> "rb"
So try to change
```
with open('./D... |
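For future readers: `'rU'` does the job on Python 2 but is deprecated in Python 3, where the idiomatic fix for the `csv` module is `open(..., newline='')`. That enables universal newline recognition (`\r`, `\n`, `\r\n`) without translating the endings. A self-contained sketch:

```python
import csv
import os
import tempfile

# Write a file with old-style Mac (\r) line endings to reproduce the problem
fd, path = tempfile.mkstemp(suffix='.csv')
with os.fdopen(fd, 'w', newline='') as f:
    f.write('a,b\r1,2\r')

with open(path, newline='') as f:   # \r-terminated lines are recognized
    rows = list(csv.reader(f))
os.remove(path)
print(rows)  # [['a', 'b'], ['1', '2']]
```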
django.core.exceptions.ImproperlyConfigured | 26,103,005 | 9 | 2014-09-29T15:04:54Z | 26,103,342 | 42 | 2014-09-29T15:22:31Z | [
"python",
"django",
"python-2.7",
"python-3.x",
"django-1.8"
] | I have recently moved to another machine and have had to check my project out from subversion again, but I am pretty sure this computer has Django 1.8 and the project is expecting 1.7.
I have tried to sync my db with the code to create the necessary tables but I get the following error.
```
C:\Users\jont\Documents\AT... | Like the error mentions, you need to **explicitly** specify the fields, or exclude.
Try this
```
class JobForm(models.ModelForm):
#fields
class Meta:
model = Job
fields = "__all__"
```
which would include all the fields
Here is the relevant [documentation (release notes 1.6)](https://docs.d... |
SqlAlchemy converting UTC DateTime to local time before saving | 26,105,730 | 2 | 2014-09-29T17:42:32Z | 26,106,482 | 8 | 2014-09-29T18:30:42Z | [
"python",
"datetime",
"sqlalchemy"
] | I have the following situation:
- Postgres backend with a field
```
timestamp without time zone
```
* Right before saving the datetime value, it looks like : 2014-09-29 06:00:00+00:00
* I then load the same row from the db and the value is : 2014-09-29 09:00:00
So in the database the date stored is no longer 6AM .. ... | What I suspect is happening is that you are storing aware `datetime`s correctly, but are not reading it back with a time zone because the column is `WITHOUT TIME ZONE`. Each PostgreSQL connection has an associated time zone that defaults to the system's time zone, so when you retrieve a particular `TIMESTAMP` it gets r... |
How do I get the giant component of a NetworkX graph? | 26,105,764 | 6 | 2014-09-29T17:45:31Z | 26,105,849 | 9 | 2014-09-29T17:50:19Z | [
"python",
"networkx"
] | I don't know if NetworkX recently tweaked one of the methods to be a generator instead of returning a list, but I'm looking for a good (rather, better) way to get the GC of a graph.
I have a working, but really inefficient-looking, snippet down:
```
# G = nx.Graph()
giant = sorted(nx.connected_component_subgraphs(G),... | In networkx 1.9, `connected_components_subgraphs` returns an iterator (instead of a sorted list). The values yielded by the iterator are [not in sorted order](http://stackoverflow.com/a/24378179/190597). So to find the largest, use `max`:
```
giant = max(nx.connected_component_subgraphs(G), key=len)
```
Sorting is O(... |
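The `max(..., key=len)` idiom is worth noting on its own: it finds the largest item in a single O(n) pass, whereas sorting just to take the first element costs O(n log n). A graph-free illustration, with plain sets standing in for component subgraphs:

```python
components = [{1, 2}, {3, 4, 5, 6}, {7}]   # stand-ins for connected components
giant = max(components, key=len)           # one linear scan, no sort needed
print(giant)  # {3, 4, 5, 6}
```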
Emmet - Notepad++ "Unknown exception" | 26,110,973 | 17 | 2014-09-30T00:35:55Z | 26,128,628 | 41 | 2014-09-30T19:31:42Z | [
"python",
"notepad++",
"emmet"
] | Every time I'm trying to use [Emmet's `Expand Abbreviation`](http://docs.emmet.io/actions/expand-abbreviation/) it gives me errors. First a pop up with `Unknown Exception` and then another pop up with `python script plugin did not accept the script`.
I tried reinstalling Notepad++ but that didn't help.
I'm running Wi... | I had the same problem as you.
This is the solution which worked for me:
* `emmetio/npp` Bug ID #12: [*python script plugin did not accept the script*](https://github.com/emmetio/npp/issues/12)
Basically, the problem lies in the *Python Script* plugin which will be automatically installed with Emmet.
Go to this link... |
Emmet - Notepad++ "Unknown exception" | 26,110,973 | 17 | 2014-09-30T00:35:55Z | 26,302,711 | 8 | 2014-10-10T15:18:34Z | [
"python",
"notepad++",
"emmet"
] | Every time I'm trying to use [Emmet's `Expand Abbreviation`](http://docs.emmet.io/actions/expand-abbreviation/) it gives me errors. First a pop up with `Unknown Exception` and then another pop up with `python script plugin did not accept the script`.
I tried reinstalling Notepad++ but that didn't help.
I'm running Wi... | There is a newer version (PythonScript\_1.0.8.0.msi ) available here:
<http://sourceforge.net/projects/npppythonscript/files/latest/download?source=files>
I had the exact same issue on two different computers, one Windows 7 and one Windows 8, both 64bit. I installed the MSI on the Win8 computer and it fixed the probl... |
how to convert an array to a dict in python | 26,112,855 | 2 | 2014-09-30T04:34:54Z | 26,112,911 | 11 | 2014-09-30T04:41:06Z | [
"python"
] | Now, I wanna convert an array to a dict like this:
```
dict = {'item0': arr[0], 'item1': arr[1], 'item2': arr[2]...}
```
How to solve this problem elegantly in python? | You could use [`enumerate`](https://docs.python.org/3.4/library/functions.html#enumerate) and a dictionary comprehension:
```
>>> arr = ["aa", "bb", "cc"]
>>> {'item{}'.format(i): x for i,x in enumerate(arr)}
{'item2': 'cc', 'item0': 'aa', 'item1': 'bb'}
``` |
Why are tuples constructed from differently initialized sets equal? | 26,115,765 | 38 | 2014-09-30T08:06:35Z | 26,115,870 | 13 | 2014-09-30T08:12:02Z | [
"python",
"set",
"comparison",
"tuples",
"hashtable"
] | I expected the following two tuples
```
>>> x = tuple(set([1, "a", "b", "c", "z", "f"]))
>>> y = tuple(set(["a", "b", "c", "z", "f", 1]))
```
to compare unequal, but they don't:
```
>>> x == y
>>> True
```
Why is that? | There are two things at play here.
1. Sets are unordered. `set([1, "a", "b", "c", "z", "f"]) == set(["a", "b", "c", "z", "f", 1])`
2. When you convert a set to a tuple via the `tuple` constructor, it essentially iterates over the set and adds each element returned by the iteration.
The constructor syntax for tuples ... |
Why are tuples constructed from differently initialized sets equal? | 26,115,765 | 38 | 2014-09-30T08:06:35Z | 26,116,307 | 64 | 2014-09-30T08:36:32Z | [
"python",
"set",
"comparison",
"tuples",
"hashtable"
] | I expected the following two tuples
```
>>> x = tuple(set([1, "a", "b", "c", "z", "f"]))
>>> y = tuple(set(["a", "b", "c", "z", "f", 1]))
```
to compare unequal, but they don't:
```
>>> x == y
>>> True
```
Why is that? | At first glance, it appears that `x` should always equal `y`, because two sets constructed from the same elements are always equal:
```
>>> x = set([1, "a", "b", "c", "z", "f"])
>>> y = set(["a", "b", "c", "z", "f", 1])
>>> x
{1, 'z', 'a', 'b', 'c', 'f'}
>>> y
{1, 'z', 'a', 'b', 'c', 'f'}
>>> x == y
True
```
**Howeve... |
Python 3 - Zip is an iterator in a pandas dataframe | 26,121,009 | 8 | 2014-09-30T12:34:15Z | 26,121,238 | 8 | 2014-09-30T12:44:58Z | [
"python",
"python-3.x",
"pandas"
] | I am following the [Pandas tutorials](http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/01%20-%20Lesson.ipynb)
The tutorials are written using python 2.7 and I am doing them in python 3.4
Here is my version details.
```
In [11]: print('Python version ' + sys.version)
Python versio... | You need to change this line:
```
BabyDataSet = zip(names,births)
```
to:
```
BabyDataSet = list(zip(names,births))
```
This is because `zip` now returns an iterator in python 3, hence your error message. For more details see: <http://www.diveintopython3.net/porting-code-to-python-3-with-2to3.html#zip> and <https:/... |
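A quick check of the Python 3 behaviour (names and counts here are illustrative, not the tutorial's full dataset):

```python
names = ['Bob', 'Jessica', 'Mary']
births = [968, 155, 77]

print(zip(names, births))              # Python 3: a lazy zip object, not a list
BabyDataSet = list(zip(names, births)) # materialize it for reuse / DataFrame building
print(BabyDataSet)                     # [('Bob', 968), ('Jessica', 155), ('Mary', 77)]
```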
Python 2 and 3 compatible method to convert bytes to integer | 26,121,553 | 3 | 2014-09-30T13:01:38Z | 26,121,683 | 7 | 2014-09-30T13:07:18Z | [
"python",
"python-3.x",
"python-2.x"
] | I have a byte string similar to the following.
```
foo = b"\x00\xff"
```
I want to convert `foo` each hex value into an integer. I can use the following in Python 3.
```
In [0]: foo[0]
Out[0]: 0
In [1]: foo[1]
Out[1]: 255
```
Python 2 requires an `ord()` call.
```
In [0]: ord(foo[0])
Out[0]: 0
In [1]: ord(foo[... | You have three options:
* Use a [`bytearray()`](https://docs.python.org/2/library/functions.html#bytearray):
```
ba = bytearray(foo)
ba[0]
```
* Use the [`struct` module](https://docs.python.org/2/library/struct.html) to unpack your bytes into integers:
```
import struct
struct.unpack('{}B'.format(len(... |
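Runnable sketches of the first two options, shown as they behave on Python 3 (both also yield plain ints on Python 2):

```python
import struct

foo = b"\x00\xff"

# Option 1: bytearray indexing yields ints on both Python 2 and 3
ba = bytearray(foo)
print(ba[0], ba[1])  # 0 255

# Option 2: struct.unpack, one 'B' (unsigned char) per byte
values = struct.unpack('{}B'.format(len(foo)), foo)
print(values)        # (0, 255)
```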
Flask JSON serializable error because of flask babel | 26,124,581 | 6 | 2014-09-30T15:28:11Z | 26,130,041 | 7 | 2014-09-30T21:01:29Z | [
"python",
"json",
"unicode",
"flask"
] | I am using <https://pythonhosted.org/Flask-Babel/> extention for localization.
base form class
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from app import app
from flask.ext.wtf import Form
class BaseForm(Form):
@property
def error_list(self):
_errors = []
for fieldName, errorMessage... | The issue is that your error message is a `_LazyString` object returned by `lazy_gettext`, not a string. Normally, this wouldn't be an issue because displaying it in a template would call `str()` on it, causing it to evaluate the translation. However, you are collecting the objects in `error_list`, and then passing the... |
'str' object has no attribute 'decode' in Python3 | 26,125,141 | 2 | 2014-09-30T15:57:58Z | 26,125,214 | 8 | 2014-09-30T16:01:42Z | [
"python"
] | I've some problem with "decode" method in python 3.3.4. This is my code:
```
for lines in open('file','r'):
decodedLine = lines.decode('ISO-8859-1')
line = decodedLine.split('\t')
```
But I can't decode the line for this problem:
```
AttributeError: 'str' object has no attribute 'decode'
```
Do you have any... | One *encodes* strings, and one *decodes* bytes.
You should read bytes from the file and decode them:
```
for lines in open('file','rb'):
decodedLine = lines.decode('ISO-8859-1')
line = decodedLine.split('\t')
```
Luckily `open` has an encoding argument which makes this easy:
```
for decodedLine in open('fil... |
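To see the str/bytes split concretely, no file needed:

```python
raw = b'caf\xe9'                          # ISO-8859-1 encoded bytes
text = raw.decode('ISO-8859-1')           # bytes -> str: decode
print(text)                               # café
print(text.encode('ISO-8859-1') == raw)   # str -> bytes: encode; round-trips
```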
Combining text stemming and removal of punctuation in NLTK and scikit-learn | 26,126,442 | 10 | 2014-09-30T17:14:33Z | 26,132,560 | 11 | 2014-10-01T01:01:19Z | [
"python",
"text",
"scikit-learn",
"nltk"
] | I am using a combination of NLTK and `scikit-learn`'s `CountVectorizer` for stemming words and tokenization.
Below is an example of the plain usage of the `CountVectorizer`:
```
from sklearn.feature_extraction.text import CountVectorizer
vocab = ['The swimmer likes swimming so he swims.']
vec = CountVectorizer().fit... | There are several options; try removing the punctuation before tokenization. But this would mean that `don't` -> `dont`
```
import string
def tokenize(text):
text = "".join([ch for ch in text if ch not in string.punctuation])
tokens = nltk.word_tokenize(text)
stems = stem_tokens(tokens, stemmer)
return ... |
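The same stripping step can be written with `str.translate`, which removes every punctuation character in one C-level pass instead of a character-by-character join (NLTK is not needed to demonstrate it):

```python
import string

remove_punct = str.maketrans('', '', string.punctuation)

text = "The swimmer likes swimming, so he swims. Don't!"
print(text.translate(remove_punct))  # The swimmer likes swimming so he swims Dont
```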
Verifying PEP8 in iPython notebook code | 26,126,853 | 10 | 2014-09-30T17:41:27Z | 26,202,979 | 9 | 2014-10-05T13:27:38Z | [
"python",
"ipython",
"ipython-notebook",
"pep8"
] | Is there an easy way to check that iPython notebook code, while it's being written, is compliant with PEP8? | Install the pep8 extension for ipython notebook using the following command :
```
%install_ext https://raw.githubusercontent.com/SiggyF/notebooks/master/pep8_magic.py
```
Refer to [the official docs](https://github.com/ipython/ipython/wiki/Extensions-Index#pep8) for more info.
After that use the `%%pep8` Cell magic fun... |
How do I use reportlab's drawImage with an image url? | 26,128,462 | 6 | 2014-09-30T19:20:39Z | 26,129,420 | 10 | 2014-09-30T20:23:16Z | [
"python",
"python-2.7",
"python-imaging-library",
"reportlab"
] | When I try the following:
```
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen.canvas import Canvas
import urllib
import StringIO
import PIL.Image
image_file = urllib.urlopen('https://www.google.com/images/srpr/logo11w.png')
image_string = StringIO.StringIO(image_file.read())
logo = PIL.Image.open(ima... | I was doing it the hard way. This works (also added the necessary mask to avoid transparent becoming black):
```
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen.canvas import Canvas
from reportlab.lib.utils import ImageReader
logo = ImageReader('https://www.google.com/images/srpr/logo11w.png')
canva... |
In pytest, how to skip or xfail certain fixtures? | 26,128,620 | 4 | 2014-09-30T19:30:45Z | 26,129,206 | 7 | 2014-09-30T20:09:21Z | [
"python",
"testing",
"py.test"
] | I have a heavily-fixtured test function which fails (as it should) with certain fixture inputs. How can I indicate this? This is what I'm doing now, and maybe there's a better way. I'm pretty new to `py.test` so I'd appreciate any tips.
The next part is all the input fixtures. FYI, `example_datapackage_path` is define... | In your `datapackage` or `expression_key` fixtures you can use `pytest.xfail` and `pytest.skip` as described [here](http://pytest.org/latest/skipping.html#imperative-xfail-from-within-a-test-or-setup-function). For example:
```
@pytest.fixture
def datapackage(self, example_datapackage_path, metadata_key,
... |
matplotlib get ylim values | 26,131,607 | 24 | 2014-09-30T23:05:30Z | 26,131,688 | 34 | 2014-09-30T23:12:14Z | [
"python",
"matplotlib",
"plot"
] | I'm using `matplotlib` to plot data (using `plot` and `errorbar` functions) from Python. I have to plot a set of totally separate and independent plots, and then adjust their `ylim` values so they can be easily visually compared.
How can I retrieve the `ylim` values from each plot, so that I can take the min and max... | Just use `axes.get_ylim()`, it is very similar to `set_ylim`. From the [docs](http://matplotlib.org/1.3.0/api/axes_api.html#matplotlib.axes.Axes.get_ylim):
> get\_ylim()
>
> Get the y-axis range [bottom, top] |
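A minimal round-trip, using the non-interactive Agg backend so it runs headless (the plotted values are arbitrary):

```python
import matplotlib
matplotlib.use('Agg')  # no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [10, 30, 20])
ax.set_ylim(0, 40)
print(ax.get_ylim())  # (0.0, 40.0)
```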
Create superuser Django in PyCharm | 26,132,021 | 7 | 2014-09-30T23:43:46Z | 27,591,011 | 7 | 2014-12-21T15:51:12Z | [
"python",
"django",
"pycharm"
] | I'm following the basic tutorial but for some reason any time I try to create a superuser `(run manage.py Task --> createsuperuser)` I get an error in the program.
It returns "Superuser created." but only after giving me this error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError:... | Well, it looks like it is a bug in `pycharm 4.3.1` with `python 3.4` and `django 1.7`. I faced the same problem and after searching for a while, I managed to solve the issue through the command line. In the command line, type:
```
$python manage.py createsuperuser
```
then it will ask you for `username`, type that a... |
Create superuser Django in PyCharm | 26,132,021 | 7 | 2014-09-30T23:43:46Z | 28,064,218 | 10 | 2015-01-21T09:57:56Z | [
"python",
"django",
"pycharm"
] | I'm following the basic tutorial but for some reason any time I try to create a superuser `(run manage.py Task --> createsuperuser)` I get an error in the program.
It returns "Superuser created." but after giving me this error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError:... | Goto : PyCharm > Tools > Run django Console >
```
from django.contrib.auth.models import User
User.objects.create_superuser(username='yourUsername', password='YourPassword', email='[email protected]')
```
Cheers ! |
round a single column in pandas | 26,133,538 | 5 | 2014-10-01T03:11:41Z | 26,133,621 | 8 | 2014-10-01T03:24:10Z | [
"python",
"pandas"
] | Is there a way to round a single column in pandas without affecting the rest of the dataframe?
```
df:
item value1 value2
0 a 1.12 1.3
1 a 1.50 2.5
2 a 0.10 0.0
3 b 3.30 -1.0
4 b 4.80 -1.0
```
df.value1.apply(np.round)
gives
```
0 1
1 ... | You are very close.
You applied the round to the series of values given by `df.value1`.
The return type is thus a Series.
You need to assign that series back to the dataframe (or another dataframe with the same Index).
Also there is a `Series.round` method which is basically a short hand for `Series.apply(np.round)`
... |
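Putting that together, round the one column and assign it back (data shortened from the question):

```python
import pandas as pd

df = pd.DataFrame({'item': ['a', 'a', 'b'],
                   'value1': [1.12, 1.50, 3.30],
                   'value2': [1.3, 2.5, -1.0]})

df['value1'] = df['value1'].round()  # Series.round; the rest of the frame is untouched
print(df)
```

Note that `round` uses round-half-to-even, so 1.50 becomes 2.0.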
How to get reference count of a PyObject? | 26,134,455 | 8 | 2014-10-01T05:05:58Z | 26,134,489 | 8 | 2014-10-01T05:09:13Z | [
"python",
"c++",
"c",
"python-c-api",
"python-internals"
] | How to get reference count of a `PyObject` from C++?
There are functions `Py_INCREF` and `Py_DECREF` which increase/decrease it, but I haven't found any function which returns the object's reference count.
I need it for debugging purposes. | The reference count of each and every object is stored in the `PyObject` itself, in a variable called [`ob_refcnt`](https://hg.python.org/cpython/file/37d896c3604a/Include/object.h#l107). You can directly access that.
```
typedef struct _object {
_PyObject_HEAD_EXTRA
Py_ssize_t ob_refcnt; # Reference ... |
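For completeness, from the Python side the same counter is exposed as `sys.getrefcount` (which reports one extra reference held by its own argument). A quick CPython demonstration:

```python
import sys

obj = []
before = sys.getrefcount(obj)  # includes the temporary reference from the call itself
alias = obj                    # bind a second name to the same object
after = sys.getrefcount(obj)
print(before, after)           # the count goes up by exactly one
```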
Why does tuple(set([1,"a","b","c","z","f"])) == tuple(set(["a","b","c","z","f",1])) 85% of the time with hash randomization enabled? | 26,136,894 | 96 | 2014-10-01T08:09:29Z | 26,136,895 | 127 | 2014-10-01T08:09:29Z | [
"python",
"hash",
"set",
"cpython",
"python-internals"
] | [Given Zero Piraeus' answer to another question](http://stackoverflow.com/a/26116307/1763356), we have that
```
x = tuple(set([1, "a", "b", "c", "z", "f"]))
y = tuple(set(["a", "b", "c", "z", "f", 1]))
print(x == y)
```
Prints `True` about 85% of the time with [hash randomization](http://lemire.me/blog/archives/2012/... | I'm going to assume any readers of this question to have read both:
* [Zero Piraeus' answer](http://stackoverflow.com/a/26116307/1763356) and
* [My explanation of CPython's dictionaries](http://stackoverflow.com/a/26099671/1763356).
**The first thing to note is that hash randomization is decided on interpreter start-... |
Microsoft Visual C++ Compiler for Python 2.7 | 26,140,192 | 13 | 2014-10-01T11:21:21Z | 26,352,922 | 18 | 2014-10-14T04:59:01Z | [
"python",
"visual-c++"
] | I downloaded [Microsoft Visual C++ Compiler for Python 2.7](http://www.microsoft.com/en-us/download/details.aspx?id=44266) , and install it, the full path of `vcvarsall.bat` is:
```
C:\Users\UserName\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat
```
But the following code can't retur... | Update setuptools to 6.0 or greater. In those version setuptools can autodetect Microsoft Visual C++ Compiler for Python 2.7 with the **vcvarsall.bat**.
Please reference to:
1. <https://pypi.python.org/pypi/setuptools/6.1#id4>
2. <https://bitbucket.org/pypa/setuptools/issue/258> |
Microsoft Visual C++ Compiler for Python 2.7 | 26,140,192 | 13 | 2014-10-01T11:21:21Z | 28,618,559 | 14 | 2015-02-19T22:51:25Z | [
"python",
"visual-c++"
] | I downloaded [Microsoft Visual C++ Compiler for Python 2.7](http://www.microsoft.com/en-us/download/details.aspx?id=44266) , and install it, the full path of `vcvarsall.bat` is:
```
C:\Users\UserName\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat
```
But the following code can't retur... | Look in the `setup.py` file of the package you are trying to install. If it is an older package it may be importing `distutils.core.setup()` rather than `setuptools.setup()`.
I ran in to this (in 2015) with a combination of these factors:
1. The Microsoft Visual C++ Compiler for Python 2.7 from <http://aka.ms/vcpytho... |
How to execute "left outer join" in SqlAlchemy | 26,142,304 | 9 | 2014-10-01T13:15:50Z | 26,144,444 | 9 | 2014-10-01T15:04:15Z | [
"python",
"sql",
"sqlalchemy"
] | I need to execute this query::
```
select field11, field12
from Table_1 t1
left outer join Table_2 t2 ON t2.tbl1_id = t1.tbl1_id
where t2.tbl2_id is null
```
I had these classes in python:
```
class Table1(Base):
....
class Table2(Base):
table_id = Column(
Integer,
ForeignKey('Table1.id', ond... | ```
q = (session.query(Table1.field1, Table1.field2)
     .outerjoin(Table2)  # use in case you have a relationship defined
     # .outerjoin(Table2, Table1.id == Table2.table_id)  # use if you do not have a relationship defined
     .filter(Table2.tbl2_id == None))
```
should do it, assuming that `field1` and `field2` are fr... |
Flask jsonify returns weird array? | 26,143,046 | 2 | 2014-10-01T13:52:35Z | 26,143,111 | 7 | 2014-10-01T13:55:16Z | [
"python",
"json",
"flask"
] | I'm trying to create a simple API using Flask. I now want to return a list of dicts as follows:
```
print results # prints out [{'date': '2014-09-25 19:00:00', 'title': u'Some Title'}]
response = make_response(jsonify(results))
response.headers['Content-Type'] = 'application/json'
return response
```
But when I go t... | `jsonify()` will only accept *dictionaries*. If you give it a list, it'll turn the object into a dictionary, with `dict(argument)`. See the [`Flask.jsonify()` documentation](http://flask.pocoo.org/docs/0.10/api/#flask.json.jsonify):
> Creates a `Response` with the JSON representation of the given arguments with an `ap... |
Convert row to column header for Pandas DataFrame, | 26,147,180 | 6 | 2014-10-01T17:33:44Z | 26,147,330 | 19 | 2014-10-01T17:42:09Z | [
"python",
"pandas",
"rename",
"dataframe"
] | The data I have to work with is a bit messy.. It has header names inside of its data. How can I choose a row from an existing pandas dataframe and make it (rename it to) a column header?
I want to do something like:
```
header = df[df['old_header_name1'] == 'new_header_name1']
df.columns = header
``` | ```
In [21]: df = pd.DataFrame([(1,2,3), ('foo','bar','baz'), (4,5,6)])
In [22]: df
Out[22]:
0 1 2
0 1 2 3
1 foo bar baz
2 4 5 6
```
Set the column labels to equal the values in the 2nd row (index location 1):
```
In [23]: df.columns = df.iloc[1]
```
Drop the 2nd row:
```
In [24]: ... |
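A runnable sketch of the promote-row-to-header recipe from the answer above; the sample frame mirrors the answer's demo data, and clearing `columns.name` is an optional extra step:

```python
import pandas as pd

# Sample frame whose second row (index 1) holds the real column names.
df = pd.DataFrame([(1, 2, 3), ('foo', 'bar', 'baz'), (4, 5, 6)])

df.columns = df.iloc[1]   # promote row 1 to the header
df = df.drop(1)           # drop the now-redundant row
df.columns.name = None    # optional: clear the leftover axis name

print(df)
```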
Python -Remove Time from Datetime String | 26,153,795 | 7 | 2014-10-02T02:47:12Z | 26,154,047 | 12 | 2014-10-02T03:21:15Z | [
"python",
"python-2.7",
"datetime",
"strptime"
] | I have a date string and want to convert it to the date type:
I have tried to use datetime.datetime.strptime with the format that I want but it is returning the time with the conversion.
```
when = alldates[int(daypos[0])]
print when, type(when)
then = datetime.datetime.strptime(when, '%Y-%m-%d')
pri... | ```
print then.date()
```
What you want is a datetime.date object. What you have is a datetime.datetime object. You can either change the object when you print as per above, or do the following when creating the object:
```
then = datetime.datetime.strptime(when, '%Y-%m-%d').date()
``` |
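Putting the answer together as a self-contained example (the sample date string is illustrative):

```python
from datetime import datetime

when = "2014-10-02"
then = datetime.strptime(when, '%Y-%m-%d').date()  # a datetime.date, no time part

print(then)        # 2014-10-02
print(type(then))  # <class 'datetime.date'>
```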
Why is the asyncio library slower than threads for this I/O-bound operation? | 26,154,125 | 11 | 2014-10-02T03:31:23Z | 26,154,749 | 16 | 2014-10-02T05:05:19Z | [
"python",
"python-3.x",
"python-asyncio"
] | I'm writing a python program used to enumerate a site's domain name.For example,'a.google.com'.
First, I used the `threading` module to do this:
```
import string
import time
import socket
import threading
from threading import Thread
from queue import Queue
'''
enumerate a site's domain name like this:
1-9 a-z + .g... | First, I can't reproduce a performance difference nearly as large as the one you're seeing on my Linux machine. I'm consistently seeing about 20-25 seconds for the threaded version, and between 24-34 seconds for the `asyncio` version.
Now, why is `asyncio` slower? There are a few things that contribute to this. First,... |
Filter rows of a numpy array? | 26,154,711 | 7 | 2014-10-02T05:00:52Z | 26,154,854 | 11 | 2014-10-02T05:19:51Z | [
"python",
"numpy",
"filter"
] | I am looking to apply a function to each row of a numpy array. If this function evaluates to true I will keep the row, otherwise I will discard it. For example, my function might be:
```
def f(row):
if sum(row)>10: return True
else: return False
```
I was wondering if there was something similar to:
```
np.a... | Ideally, you would be able to implement a vectorized version of your function and use that to do [boolean indexing](http://docs.scipy.org/doc/numpy/user/basics.indexing.html#boolean-or-mask-index-arrays). For the vast majority of problems this is the right solution. Numpy provides quite a few functions that can act ove... |
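A vectorized version of the `sum(row) > 10` predicate from the question, used for boolean indexing as the answer suggests (sample data is illustrative):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [5, 6, 7],
              [0, 1, 2]])

# Row sums are 6, 18, 3; keep only rows whose sum exceeds 10.
mask = a.sum(axis=1) > 10
filtered = a[mask]
print(filtered)  # [[5 6 7]]
```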
Is there a way to refer to the entire matched expression in re.sub without the use of a group? | 26,155,985 | 4 | 2014-10-02T07:15:31Z | 26,156,038 | 8 | 2014-10-02T07:21:06Z | [
"python",
"regex"
] | Suppose I want to prepend all occurrences of a particular expression with a character such as `\`.
In `sed`, it would look like this.
```
echo '__^^^%%%__FooBar' | sed 's/[_^%]/\\&/g'
```
Note that the `&` character is used to represent the original matched expression.
I have looked through the [regex docs](https:... | From the [docs](https://docs.python.org/2/library/re.html#re.sub):
> The backreference `\g<0>` substitutes in the entire substring matched by the RE.
Example:
```
>>> print re.sub("[_%^$]", r"\\\g<0>", line)
\_\_\^\^\^\%\%\%\_\_FooBar
``` |
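The same substitution as a complete script, using the question's input string:

```python
import re

line = '__^^^%%%__FooBar'
# \g<0> refers to the whole match, like sed's & in the replacement.
escaped = re.sub(r"[_^%]", r"\\\g<0>", line)
print(escaped)  # \_\_\^\^\^\%\%\%\_\_FooBar
```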
Convert a simple one line string to RDD in Spark | 26,157,620 | 13 | 2014-10-02T09:07:14Z | 26,158,173 | 13 | 2014-10-02T09:40:12Z | [
"python",
"apache-spark",
"pyspark",
"distributed-computing",
"rdd"
] | I have a simple line:
```
line = "Hello, world"
```
I would like to convert it to an RDD with only one element.
I have tried
```
sc.parallelize(line)
```
But it get:
```
sc.parallelize(line).collect()
['H', 'e', 'l', 'l', 'o', ',', ' ', 'w', 'o', 'r', 'l', 'd']
```
Any ideas? | try using List as parameter:
```
sc.parallelize(List(line)).collect()
```
it returns
```
res1: Array[String] = Array(hello,world)
``` |
How to test if all rows are equal in a numpy | 26,163,727 | 7 | 2014-10-02T15:02:05Z | 26,164,297 | 8 | 2014-10-02T15:31:55Z | [
"python",
"numpy"
] | In numpy, is there a nice idiomatic way of testing if all rows are equal in a 2d array?
I can do something like
```
np.all([np.array_equal(M[0], M[i]) for i in xrange(1,len(M))])
```
This seems to mix python lists with numpy arrays which is ugly and presumably also slow.
Is there a nicer/neater way? | One way is to subtract the first row from all of the rows of your array and check that each entry is equal to 0:
```
>>> a = np.arange(9).reshape(3, 3)
>>> b = np.ones((3, 3))
>>> ((a - a[0]) == 0).all()
False
>>> ((b - b[0]) == 0).all()
True
```
This might be faster than identifying unique rows for larger arrays bec... |
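The answer's subtraction trick as a runnable example, plus an equivalent comparison-based spelling that skips the subtraction:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)   # rows differ
b = np.ones((3, 3))              # all rows identical

print(((a - a[0]) == 0).all())   # False
print(((b - b[0]) == 0).all())   # True

# Equivalent without the subtraction: compare every row to the first.
print((b == b[0]).all())         # True
```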
Python timezone '%z' directive for datetime.strptime() not available | 26,165,659 | 7 | 2014-10-02T16:52:50Z | 26,177,579 | 9 | 2014-10-03T10:47:22Z | [
"python",
"datetime",
"tzinfo"
] | ## Using '%z' pattern of datetime.strptime()
I have a string text that represent a date and I'm perfectly able to parse it and transform it into a clean datetime object:
```
date = "[24/Aug/2014:17:57:26"
dt = datetime.strptime(date, "[%d/%b/%Y:%H:%M:%S")
```
Except that I can't catch the entire date string with **t... | [`strptime()` is implemented in pure Python](https://hg.python.org/cpython/file/tip/Lib/_strptime.py#l298). Unlike `strftime()`, the set of supported directives doesn't depend on the platform. `%z` is supported since Python 3.2:
```
>>> from datetime import datetime
>>> datetime.strptime('24/Aug/2014:17:57:26 +0200', ... |
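On Python 3.2+, the answer's `%z` example parses the offset into an aware datetime, which can be verified via `utcoffset()`:

```python
from datetime import datetime, timedelta

# %z consumes the "+0200" offset and attaches a fixed-offset tzinfo.
dt = datetime.strptime('24/Aug/2014:17:57:26 +0200', '%d/%b/%Y:%H:%M:%S %z')
print(dt.utcoffset())  # 2:00:00
```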
Elegant expression for row-wise dot product of two matrices | 26,168,363 | 8 | 2014-10-02T19:40:33Z | 26,168,677 | 9 | 2014-10-02T20:00:03Z | [
"python",
"numpy"
] | I have two 2-d numpy arrays with the same dimensions, A and B, and am trying to calculate the row-wise dot product of them. I could do:
```
np.sum(A * B, axis=1)
```
Is there another way to do this so that numpy is doing the row-wise dot product in one step rather than two? Maybe with `tensordot`? | This is a good application for [`numpy.einsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html).
```
a = np.random.randint(0, 5, size=(6, 4))
b = np.random.randint(0, 5, size=(6, 4))
res1 = np.einsum('ij, ij->i', a, b)
res2 = np.sum(a*b, axis=1)
print(res1)
# [18 6 20 9 16 24]
print(np.allc... |
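The answer's `einsum` call, completed into a script that checks it agrees with the two-step `sum` version:

```python
import numpy as np

a = np.random.randint(0, 5, size=(6, 4))
b = np.random.randint(0, 5, size=(6, 4))

res1 = np.einsum('ij, ij->i', a, b)  # row-wise dot product in one step
res2 = np.sum(a * b, axis=1)         # two-step equivalent

print(np.allclose(res1, res2))  # True
```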
ImportError: No module named Qsci while running ninja-ide | 26,170,659 | 5 | 2014-10-02T22:30:20Z | 26,177,914 | 8 | 2014-10-03T11:10:14Z | [
"python",
"qscintilla"
] | I am trying to install and run ninja-ide <http://ninja-ide.org/home/>
However when I try to run ninja-ide I am facing this error
```
ImportError: No module named Qsci
```
I have been trying to install ninja-ide whole night.
I tried everything installing from source, installing using apt-get dependencies mentioned o... | You need to install:
python-qscintilla2
Also, the version that requires that, is the version still on development, not an official release. |
Python slow read performance issue | 26,178,038 | 55 | 2014-10-03T11:20:02Z | 26,203,484 | 21 | 2014-10-05T14:23:13Z | [
"python",
"performance",
"perl",
"io"
] | Following an earlier thread I boiled down my problem to it's bare bones, in migrating from a Perl script to a Python one I found a huge performance issue with slurping files in Python. Running this on Ubuntu Server.
NB: this is not a X vs. Y thread I need to know fundamentally if this is how it is or if I'm doing some... | I will focus on only one of your examples, because rest things should be analogical:
What I think, may matter in this situation is Read-Ahead (or maybe another technique related to this) feature:
Let consider such example:
I have created 1000 xml files in "1" dir (names 1.xml to 1000.xml) as you did by dd command an... |
python Named tuple to dictionary | 26,180,528 | 29 | 2014-10-03T14:08:01Z | 26,180,604 | 51 | 2014-10-03T14:12:56Z | [
"python",
"dictionary",
"tuples",
"namedtuple"
] | I have a named tuple class in python
```
class Town(collections.namedtuple('Town', [
'name',
'population',
'coordinates',
'population',
'capital',
'state_bird'])):
# ...
```
What I'd like to do is turn this into a dictionary. I'll admit python is not one of my stronger languages. The ke... | *TL;DR: there's a method `_asdict` provided for this.*
Here is a demonstration of the usage:
```
>>> fields = ['name', 'population', 'coordinates', 'capital', 'state_bird']
>>> Town = collections.namedtuple('Town', fields)
>>> funkytown = Town('funky', 300, 'somewhere', 'lipps', 'chicken')
>>> funkytown._asdict()
Ord... |
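The `_asdict()` demo from the answer as a complete example (note the duplicate `'population'` field from the question is dropped, since `namedtuple` rejects repeated field names):

```python
from collections import namedtuple

fields = ['name', 'population', 'coordinates', 'capital', 'state_bird']
Town = namedtuple('Town', fields)
funkytown = Town('funky', 300, 'somewhere', 'lipps', 'chicken')

d = funkytown._asdict()  # OrderedDict on older Pythons, a plain dict since 3.8
print(d['name'])  # funky
```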
python Named tuple to dictionary | 26,180,528 | 29 | 2014-10-03T14:08:01Z | 26,180,609 | 15 | 2014-10-03T14:13:04Z | [
"python",
"dictionary",
"tuples",
"namedtuple"
] | I have a named tuple class in python
```
class Town(collections.namedtuple('Town', [
'name',
'population',
'coordinates',
'population',
'capital',
'state_bird'])):
# ...
```
What I'd like to do is turn this into a dictionary. I'll admit python is not one of my stronger languages. The ke... | There's a built in method on `namedtuple` instances for this, [`_asdict`](https://docs.python.org/2/library/collections.html#collections.somenamedtuple._asdict).
As discussed in the comments, on some versions `vars()` will also do it, but it's apparently highly dependent on build details, whereas `_asdict` should be r... |
Why does a class definition always produce the same bytecode? | 26,182,013 | 13 | 2014-10-03T15:29:04Z | 26,182,040 | 17 | 2014-10-03T15:30:22Z | [
"python",
"bytecode",
"disassembling",
"python-internals"
] | Say I do:
```
#!/usr/bin/env python
# encoding: utf-8
class A(object):
pass
```
Now I disassemble it:
```
python -m dis test0.py
4 0 LOAD_CONST 0 ('A')
3 LOAD_NAME 0 (object)
6 BUILD_TUPLE 1
9 LOAD_CONST ... | The new statements are stored in *nested* bytecode. You can see in your disassembly that *another code object* is loaded:
```
9 LOAD_CONST 1 (<code object A at 0x1004ebb30, file "test0.py", line 4>)
```
You need to inspect *that* code object instead. That's because the class body is executed just ... |
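The nested code object the answer describes can be dug out of `co_consts` directly:

```python
import dis
import types

code = compile("class A(object):\n    pass\n", "test0.py", "exec")

# The class body lives in a nested code object among the module's constants.
inner = [c for c in code.co_consts if isinstance(c, types.CodeType)]
print(len(inner))        # 1
print(inner[0].co_name)  # A

dis.dis(inner[0])        # disassemble the class body itself
```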
How to use NOT IN clause in sqlalchemy ORM query | 26,182,027 | 14 | 2014-10-03T15:29:45Z | 26,182,403 | 25 | 2014-10-03T15:53:06Z | [
"python",
"mysql",
"sqlalchemy"
] | how do i convert the following mysql query to sqlalchemy?
```
SELECT * FROM `table_a` ta, `table_b` tb where 1
AND ta.id = tb.id
AND ta.id not in (select id from `table_c`)
```
so far i have this for sqlalchemy:
```
query = session.query(table_a, table_b)
query = query.filter(table_a.id == table_b.id)
``` | Try this:
```
subquery = session.query(table_c.id)
query = query.filter(~table_a.id.in_(subquery))
```
Note: `table_a`, `table_b` and `table_c` should be mapped classes, not `Table` instances. |
How do I generate random text in NLTK 3.0? | 26,183,145 | 9 | 2014-10-03T16:42:56Z | 26,184,505 | 19 | 2014-10-03T18:18:51Z | [
"python",
"nlp",
"nltk"
] | The generate method of nltk.text.Text seems to have been removed in NLTK 3.0.
For example:
```
>>> bible = nltk.corpus.gutenberg.words(u'bible-kjv.txt')
>>> bibleText = nltk.Text(bible)
>>> bibleText.generate()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Text' object has ... | A note in the [first online chapter of the NLTK book](http://www.nltk.org/book/ch01.html#searching-text) says that:
> The generate() method is not available in NLTK 3.0 but will be
> reinstated in a subsequent version. |
How to create an alias to functools.lru_cache for memoization? | 26,183,682 | 4 | 2014-10-03T17:19:58Z | 26,183,916 | 7 | 2014-10-03T17:37:40Z | [
"python",
"decorator",
"memoization"
] | To add memoization to functions in Python, the [`functools.lru_cache()`](https://docs.python.org/3/library/functools.html#functools.lru_cache) decorator can be used:
```
import functools
@functools.lru_cache(maxsize=None)
def my_func():
...
```
However, I would like to define the following alias to the above use... | You don't need the `functools.partial` because it is already setup to take two calls. Just call it once:
```
memoize = functools.lru_cache(maxsize=None)
```
then use the alias as a decorator:
```
@memoize
def my_func():
...
``` |
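The alias in action, with a call counter to show that repeated calls hit the cache (the `square` function is just an illustration):

```python
import functools

memoize = functools.lru_cache(maxsize=None)  # lru_cache() already returns the decorator

calls = []

@memoize
def square(x):
    calls.append(x)  # record only real invocations
    return x * x

print(square(4), square(4))  # 16 16 -- the second call is served from the cache
print(len(calls))            # 1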