| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
python pip trouble installing from requirements.txt | 28,167,987 | 7 | 2015-01-27T10:17:22Z | 35,704,416 | 22 | 2016-02-29T16:21:30Z | [
"python",
"pip",
"virtualenv"
] | I've had great luck with pip in the past, but working at installing some stuff in a venv is giving me some headaches.
I keep getting errors like
No distributions at all found for somepackage Storing debug log for failure in /root/.pip/pip.log
```
Could not find any downloads that satisfy the requirement somepackage... | I had a similar issue but the above method didn't work for me. I cleared it up with a rather simpler solution:
`(venv) $ pip install --upgrade -r requirements.txt`
This should initiate an upgrade and install from your `requirements.txt` file |
How to get a layer from a caffe model using torch | 28,171,577 | 2 | 2015-01-27T13:22:44Z | 28,189,121 | 10 | 2015-01-28T09:51:23Z | [
"python",
"lua",
"caffe",
"torch"
] | In python when I want to get the data from a layer using caffe I have the following code
```
input_image = caffe.io.load_image(imgName)
input_oversampled = caffe.io.resize_image(input_image, self.net.crop_dims)
prediction = self.net.predict([input_image])
caffe_input = np.asarray(self.net.preprocess('d... | First of all please note that [torch-caffe-binding](https://github.com/szagoruyko/torch-caffe-binding) (i.e the tool you use with `require 'caffe'`) is a direct wrapper around Caffe library thanks to LuaJIT FFI.
This means that it allows you to conveniently do a forward or backward with a Torch tensor, **but** [behind... |
How to import and use python Levenshtein extension on OSX? | 28,172,261 | 6 | 2015-01-27T13:58:26Z | 29,494,360 | 7 | 2015-04-07T14:42:51Z | [
"python",
"python-2.7",
"osx-yosemite"
] | I've downloaded the python-Levenshtein archive and extracted the Levenshtein dir. So, as a result I have the following file structure:
```
Levenshtein
- __init__.py
- _levenshtein.c
- _levenshtein.h
- StringMatcher.py
myscript.py
```
And the following `myscript.py` content:
```
from Levenshtein import *
from warn... | It seems to me like you did not *build* the Levenshtein package. Go to the unextracted directory of the source you downloaded (for example, `python-Levenshtein-0.12.0/`) and build with:
```
python setup.py build
```
If all went well (apart, possibly, from some warnings), install to your `site-packages` with
```
sudo... |
How to import and use python Levenshtein extension on OSX? | 28,172,261 | 6 | 2015-01-27T13:58:26Z | 29,752,393 | 8 | 2015-04-20T15:43:40Z | [
"python",
"python-2.7",
"osx-yosemite"
] | I've downloaded the python-Levenshtein archive and extracted the Levenshtein dir. So, as a result I have the following file structure:
```
Levenshtein
- __init__.py
- _levenshtein.c
- _levenshtein.h
- StringMatcher.py
myscript.py
```
And the following `myscript.py` content:
```
from Levenshtein import *
from warn... | Two other ways to install the [python-Levenshtein](https://pypi.python.org/pypi/python-Levenshtein/) package:
1. `easy_install python-Levenshtein` (requires [setuptools](https://pypi.python.org/pypi/setuptools))
2. `pip install python-levenshtein` (requires [pip](https://pip.pypa.io/en/latest/installing.html)) |
Import caffe error | 28,177,298 | 6 | 2015-01-27T18:17:48Z | 28,235,061 | 9 | 2015-01-30T11:36:07Z | [
"python",
"caffe"
] | I compiled Caffe successfully on my Ubuntu machine but cannot import it in Python.
Caffe is installed in /home/pbu/Desktop/caffe
I tried adding the /home/pbu/caffe/python path to sys.path.append, but it is still not working
I am trying to import caffe
```
root@pbu-OptiPlex-740-Enhanced:/home/pbu/Desktop# python ./caffe/output.py
T... | This happens when you have not run `make` for the python files separately.
Run `make pycaffe` soon after running `make` in the Caffe directory.
You may have to set the path to the python library correctly in `Makefile.config` |
Why doesn't the namedtuple module use a metaclass to create nt class objects? | 28,184,531 | 18 | 2015-01-28T04:17:25Z | 28,184,705 | 17 | 2015-01-28T04:37:11Z | [
"python",
"metaclass",
"python-internals",
"namedtuple"
] | I spent some time investigating the [`collections.namedtuple` module](https://hg.python.org/cpython/file/3b920a778484/Lib/collections/__init__.py#l265) a few weeks ago. The module uses a factory function which populates the dynamic data (the name of the new `namedtuple` class, and the class attribute names) into a very... | There are some hints in the [issue 3974](http://bugs.python.org/issue3974). The author proposed a new way to create named tuples, which was rejected with the following comments:
> *It seems the benefit of the original version is that it's faster,
> thanks to hardcoding critical methods.*
> - Antoine Pitrou
> *There i... |
What is the mechanism that allows Python monkey patching in this instance? | 28,185,455 | 2 | 2015-01-28T05:45:01Z | 28,185,568 | 8 | 2015-01-28T05:53:24Z | [
"python",
"global-variables",
"monkeypatching",
"python-internals",
"side-effects"
] | Can someone explain the logic behind how this works with the Python interpreter? Is this behavior only thread local? Why does the assignment in the first module import persist after the second module import? I just had a long debugging session that came down to this.
*external\_library.py*
```
def the_best():
pri... | Nothing thread-local about this. `somemodule.anattr = avalue` is **very** global behavior! After this assignment the attribute is changed for good (until maybe changed back later) no matter what.
There's no mysterious mechanics at play! Assignment to any attribute of an object that allows such assignment (as module ob... |
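The global, shared-module behaviour described above can be reproduced without any files on disk; the sketch below builds a stand-in module by hand (the name `external_library` comes from the question, everything else is illustrative):

```python
import sys
import types

# A stand-in for external_library: module objects are plain namespaces,
# so building one by hand shows the same mechanics as a real import.
external_library = types.ModuleType("external_library")
external_library.the_best = lambda: "original"
sys.modules["external_library"] = external_library

# "Monkey patch" the attribute, as a patching module would.
sys.modules["external_library"].the_best = lambda: "patched"

# Any later importer gets the very same module object from sys.modules,
# and therefore sees the patched attribute.
import external_library as seen_elsewhere
assert seen_elsewhere is external_library
assert seen_elsewhere.the_best() == "patched"
```

The key point is that `import` is a cache lookup: every module that imports `external_library` shares one object, so an attribute assignment anywhere is visible everywhere.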
how to use .get() in a nested dict? | 28,187,682 | 9 | 2015-01-28T08:29:28Z | 28,187,716 | 8 | 2015-01-28T08:32:01Z | [
"python",
"dictionary"
] | I use `.get()` to query for keys which may or may not be present in a dictionary.
```
In [1]: a = {'hello': True}
In [3]: print(a.get('world'))
None
```
I have, however, dictionaries where the key I want to check for is deeper in the structure and I do not know if the ancestors are present or not. If the dict is `b =... | You can return an empty dictionary object from `dict.get()` to ease chaining calls:
```
b.get('x', {}).get('y', {}).get('z')
```
but perhaps you'd be better off catching the `KeyError` exception:
```
try:
value = b['x']['y']['z']
except KeyError:
value = None
``` |
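When many such lookups are needed, the chained calls can be wrapped in a small helper; `deep_get` below is a hypothetical utility, not part of the answer:

```python
def deep_get(d, *keys, default=None):
    # Walk a chain of keys, returning `default` as soon as a key is missing
    # or an intermediate value is not a dict.
    for key in keys:
        if not isinstance(d, dict) or key not in d:
            return default
        d = d[key]
    return d

b = {'x': {'y': {'z': 42}}}
assert deep_get(b, 'x', 'y', 'z') == 42
assert deep_get(b, 'x', 'nope', 'z') is None
```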
Windows Scipy Install: No Lapack/Blas Resources Found | 28,190,534 | 50 | 2015-01-28T11:00:07Z | 29,860,484 | 12 | 2015-04-25T02:56:33Z | [
"python",
"windows",
"python-3.x",
"numpy",
"pip"
] | I am trying to install python and a series of packages onto a 64bit windows 7 desktop. I have installed Python 3.4, have Microsoft Visual Studio C++ installed, and have successfully installed numpy, pandas and a few others. I am getting the following error when trying to install scipy;
```
numpy.distutils.system_info.... | The solution to the absence of BLAS/LAPACK libraries for SciPy installations on Windows 7 64-bit is described here:
<http://www.scipy.org/scipylib/building/windows.html>
Installing Anaconda is much easier, but you still don't get Intel MKL or GPU support without paying for it (they are in the MKL Optimizations and Ac... |
Windows Scipy Install: No Lapack/Blas Resources Found | 28,190,534 | 50 | 2015-01-28T11:00:07Z | 32,064,281 | 30 | 2015-08-18T05:36:06Z | [
"python",
"windows",
"python-3.x",
"numpy",
"pip"
] | I am trying to install python and a series of packages onto a 64bit windows 7 desktop. I have installed Python 3.4, have Microsoft Visual Studio C++ installed, and have successfully installed numpy, pandas and a few others. I am getting the following error when trying to install scipy;
```
numpy.distutils.system_info.... | **The following link should solve all problems with Windows and SciPy**; just choose the appropriate download. I was able to pip install the package with no problems. Every other solution I have tried gave me big headaches.
Source: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy>
Command: pip install [Local File Loc... |
Windows Scipy Install: No Lapack/Blas Resources Found | 28,190,534 | 50 | 2015-01-28T11:00:07Z | 34,949,400 | 12 | 2016-01-22T14:46:35Z | [
"python",
"windows",
"python-3.x",
"numpy",
"pip"
] | I am trying to install python and a series of packages onto a 64bit windows 7 desktop. I have installed Python 3.4, have Microsoft Visual Studio C++ installed, and have successfully installed numpy, pandas and a few others. I am getting the following error when trying to install scipy;
```
numpy.distutils.system_info.... | This was the order in which I got everything working. The second point is the most important one. Scipy needs `Numpy+MKL`, not just vanilla `Numpy`.
1. Install python 3.5
2. `pip install "file path"` (download Numpy+MKL wheel from here <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>)
3. `pip install scipy` |
Python: How to re-use list comprehensions after they are created in an expression | 28,190,922 | 4 | 2015-01-28T11:19:25Z | 28,191,100 | 7 | 2015-01-28T11:27:43Z | [
"python",
"python-2.7"
] | How could I reuse the same list in an expression which is created using a list comprehension with an if else expression? Achieving it in a single statement/expression (without using any intermediate variable)
i.e. `index = 0 if [listcomprehension]` is empty else get first element of the list comprehension without re-cr... | Use a generator expression and `next` with a default value; if you are not going to store the list, creating one is pointless:
```
carIndex = next((index for index, name in enumerate(car.name for car in
cars if "VW" in car.name or "Poodle" in car.name)),-1)
```
Your original logic will always return 0 if there is ... |
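Stripped of the question's specifics, the pattern is just `next()` over a generator expression with a default; the data below is made up for illustration:

```python
# Hypothetical data standing in for the `cars` list in the question.
names = ["Fiat", "VW Golf", "Poodle"]

# First matching index, or -1 when nothing matches -- no intermediate list.
index = next((i for i, name in enumerate(names) if "VW" in name), -1)
assert index == 1

missing = next((i for i, name in enumerate(names) if "Tesla" in name), -1)
assert missing == -1
```

Because the generator is consumed lazily, the search stops at the first match instead of building the whole list first.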
What's an easy way to get item from a dict, if not found, get from another dict? | 28,195,810 | 2 | 2015-01-28T15:17:37Z | 28,195,899 | 12 | 2015-01-28T15:20:53Z | [
"python",
"dictionary"
] | I would like to get a value from `a`. If it can't be found there, then try to get it from `b`. Throw exception if it can't be found in `b` either.
`a.get(key, b[key])` doesn't work since it won't lazy evaluate `b[key]`.
What's the proper way to do it?
The following works but seems a bit lengthy.
```
value = a.get(key,... | If you're using 3.3+, you can use [ChainMap](https://docs.python.org/3/library/collections.html#collections.ChainMap)
```
from collections import ChainMap
a = {'a': 1}
b = {'b': 2}
# Add as many `dict`s as you want in order of priority...
chained = ChainMap(a, b)
print(chained['b'])
```
If you're only interested in... |
Best way to count the number of rows with missing values in a pandas DataFrame | 28,199,524 | 7 | 2015-01-28T18:17:45Z | 28,199,556 | 9 | 2015-01-28T18:19:34Z | [
"python",
"pandas",
"missing-data"
] | I currently came up with some workarounds to count the number of missing values in a pandas `DataFrame`. Those are quite ugly and I am wondering if there is a better way to do it.
Let's create an example `DataFrame`:
```
from numpy.random import randn
df = pd.DataFrame(randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],
... | For the second count I think just subtract the number of rows from the number of rows returned from `dropna`:
```
In [14]:
from numpy.random import randn
df = pd.DataFrame(randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],
columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', ... |
Is this calculation executed in Python? | 28,200,121 | 2 | 2015-01-28T18:52:26Z | 28,200,154 | 12 | 2015-01-28T18:53:47Z | [
"python",
"python-internals"
] | > Disclaimer: I'm new to programming, but new to Python. This may be a pretty basic question.
I have the following block of code:
```
for x in range(0, 100):
y = 1 + 1;
```
Is the calculation of `1 + 1` in the second line executed 100 times?
I have two suspicions why it might not:
1) The compiler sees `1 + 1` ... | Option 1 is executed; the CPython compiler simplifies mathematical expressions with constants in the peephole optimiser.
Python will not eliminate the loop body however.
You can introspect what Python produces by looking at the bytecode; use the [`dis` module](https://docs.python.org/2/library/dis.html) to take a loo... |
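The folding described above can be observed directly; this sketch compiles the question's loop and checks the resulting code object (the exact `dis` listing varies across CPython versions):

```python
import dis

code = compile("for x in range(0, 100):\n    y = 1 + 1\n", "<demo>", "exec")

# The peephole optimiser folds 1 + 1 at compile time, so the constant 2
# is stored in the code object; the addition itself is never executed.
assert 2 in code.co_consts

dis.dis(code)  # the listing shows the constant 2 loaded directly in the loop
```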
Foreign key value in Django REST Framework | 28,200,485 | 7 | 2015-01-28T19:12:37Z | 28,220,246 | 9 | 2015-01-29T16:55:41Z | [
"python",
"django",
"rest",
"django-rest-framework"
] | **models.py:**
```
class Station(models.Model):
station = models.CharField()
class Flat(models.Model):
station = models.ForeignKey(Station, related_name="metro")
# another fields
```
Then in **serializers.py**:
```
class StationSerializer(serializers.ModelSerializer):
station = serializers.RelatedFi... | `RelatedField` is the base class for all fields which work on relations. Usually you should not use it unless you are subclassing it for a custom field.
In your case, you don't even need a related field at all. You are only looking for a read-only single foreign key representation, so you can just use a `CharField`.
... |
How to plot scikit learn classification report? | 28,200,786 | 8 | 2015-01-28T19:29:59Z | 34,304,414 | 7 | 2015-12-16T05:16:56Z | [
"python",
"numpy",
"matplotlib",
"scikit-learn"
] | Is it possible to plot the scikit-learn classification report with matplotlib? Let's assume I print the classification report like this:
```
print '\n*Classification Report:\n', classification_report(y_test, predictions)
confusion_matrix_graph = confusion_matrix(y_test, predictions)
```
and I get:
```
Clasification... | Expanding on [Bin](http://stackoverflow.com/users/3089523/bin)'s answer:
```
import matplotlib.pyplot as plt
import numpy as np
def show_values(pc, fmt="%.2f", **kw):
'''
Heatmap with text in each cell with matplotlib's pyplot
Source: http://stackoverflow.com/a/25074150/395857
By HYRY
'''
fro... |
Python property inheritance | 28,200,979 | 5 | 2015-01-28T19:43:01Z | 28,201,067 | 7 | 2015-01-28T19:47:35Z | [
"python",
"attributeerror"
] | I am new to python and I wasn't sure what I was doing was correct.
I have a base class `A` and an inherited class `B`.
```
class A(object):
def __init__(self, name):
self.__name = name
@property
def name(self):
return self.__name
@name.setter
def name(self, name):
self.__... | Attributes that begin with two underscores like your `__name` are signified as being private variables. There's no actual *PROTECTION* done to these (that is to say: if you can find it, you can access it), but they are name-mangled to be less easy to access. [More information about that can be found on the docs page](h... |
How to show query parameter options in Django REST Framework - Swagger | 28,203,070 | 7 | 2015-01-28T21:50:09Z | 28,206,314 | 13 | 2015-01-29T03:00:47Z | [
"python",
"django",
"rest",
"django-rest-framework"
] | This has been bugging me for a while now.
My ultimate goal is to show query parameter options inside SwaggerUI and give a form input for each query parameter. Similar to how it is displayed when providing a serializer for POST.
I am using a viewset which inherits from GenericViewSet and I have tried the following:
*... | Okay, for those who stumble upon this question, I have figured it out. It is rather silly, and I feel a little stupid for not knowing, but in my defense, it was not clearly documented. The information was not found in DRF documentation, or inside Django REST Swagger repository. Instead it was found under django-rest-fr... |
How to show query parameter options in Django REST Framework - Swagger | 28,203,070 | 7 | 2015-01-28T21:50:09Z | 28,468,879 | 10 | 2015-02-12T03:41:37Z | [
"python",
"django",
"rest",
"django-rest-framework"
] | This has been bugging me for a while now.
My ultimate goal is to show query parameter options inside SwaggerUI and give a form input for each query parameter. Similar to how it is displayed when providing a serializer for POST.
I am using a viewset which inherits from GenericViewSet and I have tried the following:
*... | I found the [rest framework swagger docs](http://django-rest-swagger.readthedocs.org/en/latest/yaml.html).
So we can specify the parameter type (integer, char), response, etc.
The triple `---` is necessary.
```
@api_view(["POST"])
def foo_view(request):
"""
Your docs
---
# YAML (must be separated by `-... |
How to use Cython typed memoryviews to accept strings from Python? | 28,203,670 | 8 | 2015-01-28T22:28:47Z | 29,067,248 | 11 | 2015-03-15T23:00:44Z | [
"python",
"python-2.7",
"cython",
"python-c-extension",
"memoryview"
] | How can I write a Cython function that takes a byte string object (a normal string, a bytearray, or another object that follows the [buffer protocol](https://docs.python.org/2/c-api/buffer.html)) as a [typed memoryview](http://docs.cython.org/src/userguide/memoryviews.html)?
According to the [Unicode and Passing Strin... | Despite the documentation suggesting otherwise, Cython (at least up to version 0.22) does **not** support coercing read-only buffer objects into typed memoryview objects. Cython always passes the `PyBUF_WRITABLE` flag to
`PyObject_GetBuffer()`, even when it doesn't need write access. This causes read-only buffer object... |
Use first row as column names? Pandas read_html | 28,206,556 | 4 | 2015-01-29T03:31:16Z | 28,206,750 | 7 | 2015-01-29T03:54:55Z | [
"python",
"parsing",
"pandas"
] | I have this simple one line script:
```
from pandas import read_html
print read_html('http://money.cnn.com/data/hotstocks/', flavor = 'bs4')
```
Which works, fine, but the column names are missing, they are being identified as 1, 2, 3. Is there an easy way to tell pandas to use the first row as the column names? I k... | 'read\_html` takes a header parameter. You can pass a row index:
```
read_html('http://money.cnn.com/data/hotstocks/', header =0, flavor = 'bs4')
```
Worth noting this caveat in the docs:
> For example, you might need to manually assign column names if the column names are converted to NaN when you pass the header=0... |
Convert string to numpy array | 28,207,743 | 4 | 2015-01-29T05:44:26Z | 28,207,791 | 11 | 2015-01-29T05:49:58Z | [
"python",
"arrays",
"numpy"
] | I have a string like `mystr = "100110"` (the real size is much bigger) I want to convert it to numpy array like `mynumpy = [1, 0, 0, 1, 1, 0], mynumpy.shape = (6,0)`, I know that numpy has `np.fromstring(mystr, dtype=int, sep='')` yet the problem is I can't split my string to every digit of it, so numpy takes it as an ... | `list` may help you do that.
```
import numpy as np
mystr = "100110"
print np.array(list(mystr))
# ['1' '0' '0' '1' '1' '0']
```
If you want to get numbers instead of string:
```
print np.array(list(mystr), dtype=int)
# [1 0 0 1 1 0]
``` |
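As a hedged alternative sketch (not from the answer): for long digit strings you can skip the intermediate Python list by viewing the ASCII bytes directly:

```python
import numpy as np

mystr = "100110"

# Each character '0'/'1' is one byte; subtracting ord('0') turns the
# ASCII codes 48/49 into the digits 0/1 without building a list first.
arr = np.frombuffer(mystr.encode("ascii"), dtype=np.uint8) - ord("0")
assert arr.tolist() == [1, 0, 0, 1, 1, 0]
```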
Check if value is zero or not null in python | 28,210,060 | 4 | 2015-01-29T08:28:11Z | 28,210,081 | 10 | 2015-01-29T08:29:10Z | [
"python",
"python-3.x"
] | Often I am checking if a number variable `number` has a value with `if number` but sometimes the number could be zero. So I solve this by `if number or number == 0`.
Can I do this in a smarter way? I think it's a bit ugly to check if value is zero separately.
# Edit
I think I could just check if the value is a numbe... | If `number` could be `None` *or* a number, and you wanted to include `0`, filter on `None` instead:
```
if number is not None:
```
If `number` can be any number of types, test for the *type*; you can test for just `int` or a combination of types with a tuple:
```
if isinstance(number, int): # it is an integer
if is... |
Using the length of a parameter array as the default value of another parameter of the same function | 28,212,364 | 3 | 2015-01-29T10:29:49Z | 28,212,666 | 7 | 2015-01-29T10:43:07Z | [
"python",
"python-2.7"
] | This is my first time asking a question in SO, so if I'm somehow not doing it properly don't hesitate to edit it or ask me to modify it.
I think my question is kind of general, so I'm quite surprised for not having found any previous one related to this topic. If I missed it and this question is duplicated, I'll be ve... | Using a sentinel value such as `None` is typical:
```
def func(a, start=0, end=None):
if end is None:
        end = len(a)  # whatever the appropriate default is, e.g. the length of a
# do stuff
```
---
However, for your actual use case, there's already a builtin way to do this that fits in with the way Python does start/stop/step - which makes your function provi... |
get_bucket() gives 'Bad Request' for S3 buckets I didn't create via Boto | 28,213,328 | 5 | 2015-01-29T11:15:22Z | 28,217,152 | 7 | 2015-01-29T14:29:07Z | [
"python",
"amazon-s3",
"boto"
] | I'm using Boto to try to get a bucket in Amazon S3, but it returns Bad Request when I use get\_bucket() for some of the buckets. I'm starting to wonder if this is a bug with Boto, since I can get the bucket using get\_all\_buckets().
```
>>> from boto.s3.connection import S3Connection
>>> conn = S3Connection(S3_ACCESS... | Turns out the issue is because of the region (I was using Frankfurt). Two ways of dealing with it:
1. Give up on Frankfurt (@andpei points out there are [issues currently reported with it](https://github.com/boto/boto/issues/2741)) and recreate the bucket in a different region.
2. Specify the region using the 'host' p... |
Bash ${var:-default} equivalent in Python? | 28,214,210 | 3 | 2015-01-29T11:57:51Z | 28,214,289 | 10 | 2015-01-29T12:01:35Z | [
"python"
] | What's the simplest way to perform default variable substitution?
```
x = None
... (some process which may set x)
if x is None: use_x = "default"
else: use_x = x
```
Is there any way of writing this in one line? | You can use a [conditional expression](https://docs.python.org/2/reference/expressions.html#conditional-expressions):
```
use_x = "default" if x is None else x
```
You could use a dict defaulting to x if the key did not exist to resemble the bash syntax but the conditional would be the idiomatic way:
```
use_x = {No... |
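Wrapped up for reuse (a hypothetical helper, plus a note on why plain `or` is not a faithful equivalent of bash's `${var:-default}`):

```python
def pick(x, default="default"):
    # Mirrors bash ${var:-default} for the "x is unset (None)" case.
    return default if x is None else x

assert pick(None) == "default"
assert pick("value") == "value"
assert pick(0) == 0  # 0 is a real value and is kept

# `x or "default"` is shorter, but it also replaces every falsy value:
assert (0 or "default") == "default"
```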
Find duplicates in a list of lists with tuples | 28,214,895 | 3 | 2015-01-29T12:33:56Z | 28,215,060 | 7 | 2015-01-29T12:42:19Z | [
"python",
"list",
"python-2.7",
"python-3.x",
"set"
] | I am trying to find duplicates within tuples that are nested within a list. This whole construction is a list too. If there are better ways to organize this so that my problem can be solved, I'd be glad to know, because this is something I built along the way.
```
pairsList = [
[1, (11, 12), (13, 14)... | *The first element in each list uniquely identifies each list.*
Great, then let's convert it to a dict first:
```
d = {x[0]: x[1:] for x in pairsList}
# d:
{1: [(11, 12), (13, 14)],
2: [(21, 22), (23, 24)],
3: [(31, 32), (13, 14)],
4: [(43, 44), (21, 22)]}
```
Let's index the whole data structure:
```
index = ... |
How to add custom renames in six? | 28,215,214 | 3 | 2015-01-29T12:49:52Z | 28,215,387 | 8 | 2015-01-29T12:58:15Z | [
"python",
"six"
] | According to the documentation, six supports [adding custom renames](http://pythonhosted.org//six/#advanced-customizing-renames) to [`six.moves`](http://pythonhosted.org//six/#module-six.moves):
> **`six.add_move(item)`**
>
> Add item to the `six.moves` mapping. item should be a `MovedAttribute` or
> `MovedModule` ins... | You can't import the name from within the move. Use:
```
from __future__ import print_function
from six import add_move, MovedModule
add_move(MovedModule('mock', 'mock', 'unittest.mock'))
from six.moves import mock
print(mock.MagicMock)
```
This will give you:
```
# Python 2
<class 'mock.MagicMock'>
# Python 3
<c... |
Unpickling a python 2 object with python 3 | 28,218,466 | 16 | 2015-01-29T15:32:39Z | 28,218,598 | 34 | 2015-01-29T15:38:01Z | [
"python",
"python-3.x",
"pickle",
"python-2.4",
"2to3"
] | I'm wondering if there is a way to load an object that was pickled in Python 2.4, with Python 3.4.
I've been running 2to3 on a large amount of company legacy code to get it up to date.
Having done this, when running the file I get the following error:
```
File "H:\fixers - 3.4\addressfixer - 3.4\trunk\lib\address\... | You'll have to tell `pickle.load()` how to convert Python bytestring data to Python 3 strings, or you can tell `pickle` to leave them as bytes.
The default is to try and decode all string data as ASCII, and that decoding fails. See the [`pickle.load()` documentation](https://docs.python.org/3/library/pickle.html#pickl... |
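A minimal sketch of the two choices the answer describes, using a hand-written protocol-0 pickle standing in for real Python 2 data:

```python
import pickle

# A protocol-0 pickle of the Python 2 str 'abc', written by hand here so
# the example is self-contained (py2's pickle.dumps would produce the like).
py2_data = b"S'abc'\n."

# Option 1: decode old bytestrings as latin-1 (a lossless byte->str mapping).
assert pickle.loads(py2_data, encoding="latin1") == "abc"

# Option 2: keep them as bytes and decide on an encoding later.
assert pickle.loads(py2_data, encoding="bytes") == b"abc"
```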
How to iterate over columns of pandas dataframe to run regression | 28,218,698 | 21 | 2015-01-29T15:42:25Z | 28,218,909 | 9 | 2015-01-29T15:51:24Z | [
"python",
"pandas",
"statsmodels"
] | I'm sure this is simple, but as a complete newbie to python, I'm having trouble figuring out how to iterate over variables in a `pandas` dataframe and run a regression with each.
Here's what I'm doing:
```
all_data = {}
for ticker in ['FIUIX', 'FSAIX', 'FSAVX', 'FSTMX']:
all_data[ticker] = web.get_data_yahoo(tick... | You can index dataframe columns by the position using `ix`.
```
df1.ix[:,1]
```
This returns the first column for example. (0 would be the index)
```
df1.ix[0,]
```
This returns the first row.
This would be the value at the intersection of row 0 and column 1:
```
df1.ix[0,1]
```
and so on. S... |
How to iterate over columns of pandas dataframe to run regression | 28,218,698 | 21 | 2015-01-29T15:42:25Z | 32,558,621 | 38 | 2015-09-14T06:42:27Z | [
"python",
"pandas",
"statsmodels"
] | I'm sure this is simple, but as a complete newbie to python, I'm having trouble figuring out how to iterate over variables in a `pandas` dataframe and run a regression with each.
Here's what I'm doing:
```
all_data = {}
for ticker in ['FIUIX', 'FSAIX', 'FSAVX', 'FSTMX']:
all_data[ticker] = web.get_data_yahoo(tick... | ```
for column in df:
print(df[column])
``` |
How to iterate over columns of pandas dataframe to run regression | 28,218,698 | 21 | 2015-01-29T15:42:25Z | 36,372,667 | 7 | 2016-04-02T11:31:47Z | [
"python",
"pandas",
"statsmodels"
] | I'm sure this is simple, but as a complete newbie to python, I'm having trouble figuring out how to iterate over variables in a `pandas` dataframe and run a regression with each.
Here's what I'm doing:
```
all_data = {}
for ticker in ['FIUIX', 'FSAIX', 'FSAVX', 'FSTMX']:
all_data[ticker] = web.get_data_yahoo(tick... | You can use `iteritems()`:
```
for name, values in df.iteritems():
print '{name}: {value}'.format(name=name, value=values[0])
``` |
Save dendrogram to newick format | 28,222,179 | 5 | 2015-01-29T18:42:44Z | 31,878,514 | 7 | 2015-08-07T13:06:40Z | [
"python",
"scipy",
"dendrogram"
] | How can I save a dendrogram generated by scipy into [Newick format](http://en.wikipedia.org/wiki/Newick_format)? | You need the linkage matrix Z, which is the input to the scipy dendrogram function, and convert that to Newick format. Additionally, you need a list 'leaf\_names' with the names of your leaves. Here is a function that will do the job:
```
from scipy.cluster import hierarchy
def getNewick(node, newick, parentdist, leaf... |
Python json.loads ValueError, expecting delimiter | 28,223,242 | 9 | 2015-01-29T19:44:56Z | 28,223,331 | 9 | 2015-01-29T19:49:26Z | [
"python",
"json",
"python-2.7"
] | I am extracting a postgres table as json. The output file contains lines like:
```
{"data": {"test": 1, "hello": "I have \" !"}, "id": 4}
```
Now I need to load them in my python code using `json.loads`, but I get this error:
```
Traceback (most recent call last):
File "test.py", line 33, in <module>
print jso... | You can specify so called âraw stringsâ:
```
>>> print r'{"data": {"test": 1, "hello": "I have \" !"}, "id": 4}'
{"data": {"test": 1, "hello": "I have \" !"}, "id": 4}
```
They don't interpret the backslashes.
Usual strings change `\"` to `"`, so you can have `"` characters in strings that are themselves limit... |
"executable not specified" error in Pycharm | 28,224,357 | 2 | 2015-01-29T20:51:21Z | 28,224,358 | 8 | 2015-01-29T20:51:21Z | [
"python",
"ide",
"pycharm"
] | When I try to run a python script in Pycharm, I get this error message:
`error running myscript: Executable is not specified`
and the script does not run. How do I run my script through Pycharm? | You need to designate an interpreter for the project.
`File -> Settings -> Project -> Project Interpreter`, and then select an interpreter at the right.
It looks like this on PyCharm Community Edition 4.0.3:

![Project -> Project Interpreter -> Select Interpret... |
How to handle C extensions for python apps with pip? | 28,226,283 | 7 | 2015-01-29T22:58:58Z | 28,314,632 | 8 | 2015-02-04T06:13:34Z | [
"python",
"deployment"
] | For python applications that install with `pip`, how can you handle their C extension requirements automatically?
For example, the `mysqlclient` module requires development libs of MySQL installed on the system. When you initially install an application requiring that module it'll fail if the MySQL development librari... | **No**. There is no way of including totally separate C library as a part of your build process unless you are writing an [extension](https://docs.python.org/2/extending/building.html). Even in that case, you'll need to specify all **.c** files in **[ext\_modules](https://docs.python.org/2/distutils/setupscript.html#ex... |
Why does appending a list to itself create an infinite list | 28,227,397 | 3 | 2015-01-30T00:44:02Z | 28,227,428 | 7 | 2015-01-30T00:47:31Z | [
"python",
"list",
"recursion"
] | ```
l = [1, 2]
l.append(l)
>>>l
[1, 2, [...]] #l is an infinite list
```
Why does this create an infinite list instead of creating:
```
l = [1, 2]
l.append(l)
>>>l
[1, 2, [1, 2]]
``` | When you do:
```
l.append(l)
```
a *reference* to list `l` is appended to list `l`:
```
>>> l = [1, 2]
>>> l.append(l)
>>> l is l[2]
True
>>>
```
In other words, you put the list inside itself. This creates an infinite reference cycle which is represented by `[...]`.
---
To do what you want, you need to append a ... |
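Putting both behaviours side by side (appending a copy is the usual way to get the output the question expected):

```python
l = [1, 2]
l.append(l)           # appends a *reference*: l is now inside itself
assert l[2] is l      # the third element IS the list, hence [...]

m = [1, 2]
m.append(m[:])        # appends a shallow copy instead
assert m == [1, 2, [1, 2]]
assert m[2] is not m  # a separate list: no cycle
```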
ssl 'module' object has no attribute 'SSLContext' | 28,228,214 | 9 | 2015-01-30T02:18:01Z | 28,228,233 | 17 | 2015-01-30T02:20:27Z | [
"python"
] | ```
In [1]: import ssl
In [2]: context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
AttributeError Traceback (most recent call last)
<ipython-input-2-13c9bad66150> in <module>()
----> 1 context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
AttributeError: 'module' object has no attribute 'SSLContext'
```
Python version: 2.7.6 | `SSLContext` was introduced in 2.7.9, you're using an old version of Python so it doesn't have this attribute. |
How to connect to a remote PostgreSQL database with Python | 28,228,241 | 7 | 2015-01-30T02:21:43Z | 28,228,609 | 14 | 2015-01-30T03:11:19Z | [
"python",
"postgresql",
"python-2.7",
"ssl"
] | I want to connect to a remote PostgreSQL database through Python to do some basic data analysis. This database requires SSL (verify-ca), along with three files (which I have):
* Server root certificate file
* Client certificate file
* Client key file
I have not been able to find a tutorial which describes how to make... | Use the [`psycopg2`](https://pypi.python.org/pypi/psycopg2) module.
You will need to use the ssl options in your connection string, or add them as key word arguments:
```
import psycopg2
conn = psycopg2.connect(database='yourdb', user='dbuser', password='abcd1234', host='server', port='5432', sslmode='require')
```
... |
Python Pandas inner join | 28,228,781 | 10 | 2015-01-30T03:31:13Z | 28,229,188 | 12 | 2015-01-30T04:21:10Z | [
"python",
"join",
"pandas",
"inner-join"
] | I'm trying to inner join DataFrame A to DataFrame B and am running into an error.
Here's my join statement:
```
merged = DataFrameA.join(DataFrameB, on=['Code','Date'])
```
And here's the error:
```
ValueError: len(left_on) must equal the number of levels in the index of "right"
```
I'm not sure the column order m... | use `merge` if you are not joining on the index:
```
merged = pd.merge(DataFrameA,DataFrameB, on=['Code','Date'])
```
**Follow up to question below:**
Here is a replicable example:
```
import pandas as pd
# create some timestamps for date column
i = pd.to_datetime(pd.date_range('20140601',periods=2))
#create two d... |
import vs __import__( ) vs importlib.import_module( )? | 28,231,738 | 13 | 2015-01-30T08:13:59Z | 28,231,805 | 22 | 2015-01-30T08:18:37Z | [
"python",
"python-import"
] | I noticed Flask was using Werkzeug to `__import__` a module, and I was a little confused. I went and checked out the docs on it and saw that it seems to give you more control somehow in terms of where it looks for the module, but I'm not sure *exactly* how and I have zero idea how it's different from `importlib.import_... | [`__import__`](https://docs.python.org/2/library/functions.html#__import__) is a low-level hook function that's used to import modules; it can be used to import a module *dynamically* by giving the module name to import as a variable, something the `import` statement won't let you do.
[`importlib.import_module()`](htt... |
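A small sketch contrasting the two dynamic-import spellings discussed in the answer, including the dotted-name case where they differ:

```python
import importlib

# importlib.import_module is the recommended high-level API
math_mod = importlib.import_module("math")
assert math_mod.sqrt(9) == 3.0

# __import__ is the low-level hook; for a top-level module the result matches
assert __import__("math") is math_mod

# For dotted names the two differ: __import__ returns the top-level package,
# while import_module returns the submodule itself
top = __import__("os.path")
sub = importlib.import_module("os.path")
assert top.__name__ == "os"
assert sub is top.path
```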
numpy einsum to get axes permutation | 28,232,720 | 5 | 2015-01-30T09:24:09Z | 28,233,465 | 7 | 2015-01-30T10:07:28Z | [
"python",
"numpy",
"numpy-einsum"
] | What I understood from the documentation of `np.einsum` is that a permutation string would give a permutation of the axes of an array. This is confirmed by the following experiment:
```
>>> M = np.arange(24).reshape(2,3,4)
>>> M.shape
(2, 3, 4)
>>> np.einsum('ijk', M).shape
(2, 3, 4)
>>> np.einsum('ikj', M).shape
(... | When the output signature is not specified (i.e. there's no `'->'` in the subscripts string), `einsum` will create it by taking the letters it's been given and arranging them in alphabetical order.
This means that
```
np.einsum('kij', M)
```
is actually equivalent to
```
np.einsum('kij->ijk', M)
```
So writing `'k... |
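The alphabetical-ordering rule the answer describes can be checked directly; a quick sketch (assumes NumPy is available):

```python
import numpy as np

M = np.arange(24).reshape(2, 3, 4)

# With no '->', the output labels are sorted alphabetically: 'kij' -> 'ijk'
implicit = np.einsum('kij', M)
explicit = np.einsum('kij->ijk', M)

assert implicit.shape == (3, 4, 2)
assert np.array_equal(implicit, explicit)
assert np.array_equal(implicit, M.transpose(1, 2, 0))
```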
What do "In [X]:" and "Out[Y]:" mean in Python code? | 28,238,430 | 3 | 2015-01-30T14:51:25Z | 28,238,477 | 10 | 2015-01-30T14:53:56Z | [
"python"
] | I'm a new Python user, and I'm sure this is a really basic question, but I can't find the answer anywhere. When people post Python code online it is often formatted like this:
```
In [1]: # some stuff
Out[1]:
# some more stuff
```
What are the `In`'s, `Out`'s, and the numbers? And why does my Python console not behav... | They are not Python code. They are [IPython prompts](http://ipython.org/), a popular Python add-on interactive shell.
Each line of code executed on the interactive prompt (denoted by `In`) is numbered, and so is the output produced (denoted by `Out`). You can then instruct IPython to refer back to those inputs and out... |
conditional row read of csv in pandas | 28,239,529 | 5 | 2015-01-30T15:45:05Z | 28,239,777 | 8 | 2015-01-30T15:58:04Z | [
"python",
"csv",
"pandas"
] | I have large csvs where I'm only interested in a subset of the rows. In particular, I'd like to read in all the rows which occur before a particular condition is met.
For example, if read\_csv would yield the dataframe:
```
A B C
1 34 3.20 'b'
2 24 9.21 'b'
3 34 3.32 'c'
4 24 24.3 ... | You could read the csv in chunks. Since `pd.read_csv` will return an iterator when the `chunksize` parameter is specified, you can use `itertools.takewhile` to read only as many chunks as you need, without reading the whole file.
```
import itertools as IT
import pandas as pd
chunksize = 10 ** 5
chunks = pd.read_csv(... |
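The same stop-early idea can be shown without pandas; a stdlib sketch using `csv` and `itertools.takewhile`, with the column names taken from the question's example frame:

```python
import csv
import io
import itertools as IT

data = io.StringIO("A,B,C\n34,3.20,b\n24,9.21,b\n34,3.32,c\n24,24.3,c\n")
reader = csv.DictReader(data)

# Consume rows only until the stop condition is met; the rest is never read
rows = list(IT.takewhile(lambda r: r["C"] != "c", reader))

assert len(rows) == 2
assert rows[0]["A"] == "34"
```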
Explain the aggregate functionality in Spark | 28,240,706 | 15 | 2015-01-30T16:49:18Z | 28,241,948 | 19 | 2015-01-30T18:02:33Z | [
"python",
"apache-spark",
"lambda",
"aggregate",
"rdd"
] | I am looking for some better explanation of the aggregate functionality that is available via spark in python.
The example I have is as follows (using pyspark from Spark 1.2.0 version)
```
sc.parallelize([1,2,3,4]).aggregate(
(0, 0),
(lambda acc, value: (acc[0] + value, acc[1] + 1)),
(lambda acc1, acc2: (acc1[0... | Aggregate lets you transform and combine the values of the RDD at will.
It uses two functions:
The first one transforms and adds the elements of the original collection [T] in a local aggregate [U] and takes the form: (U,T) => U. You can see it as a fold and therefore it also requires a zero for that operation. This ... |
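To make the two functions concrete without a Spark cluster, here is a pure-Python emulation of `aggregate` over simulated partitions (the interleaved partitioning scheme is illustrative only, not how Spark splits data):

```python
from functools import reduce

def aggregate(data, zero, seq_op, comb_op, partitions=2):
    # Fold each simulated partition locally with seq_op, then merge with comb_op
    chunks = [data[i::partitions] for i in range(partitions)]
    partials = [reduce(seq_op, chunk, zero) for chunk in chunks]
    return reduce(comb_op, partials, zero)

result = aggregate(
    [1, 2, 3, 4],
    (0, 0),
    lambda acc, value: (acc[0] + value, acc[1] + 1),   # running (sum, count)
    lambda a, b: (a[0] + b[0], a[1] + b[1]),           # merge partial results
)
assert result == (10, 4)  # sum of 1..4, and a count of 4 elements
```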
Explain the aggregate functionality in Spark | 28,240,706 | 15 | 2015-01-30T16:49:18Z | 31,389,485 | 8 | 2015-07-13T17:13:27Z | [
"python",
"apache-spark",
"lambda",
"aggregate",
"rdd"
] | I am looking for some better explanation of the aggregate functionality that is available via spark in python.
The example I have is as follows (using pyspark from Spark 1.2.0 version)
```
sc.parallelize([1,2,3,4]).aggregate(
(0, 0),
(lambda acc, value: (acc[0] + value, acc[1] + 1)),
(lambda acc1, acc2: (acc1[0... | I don't have enough reputation points to comment on the previous answer by Maasg.
Actually the zero value should be 'neutral' towards the seqop, meaning it wouldn't interfere with the seqop result, like 0 towards add, or 1 towards \*;
You should NEVER try with non-neutral values as it might be applied an arbitrary number of times.
T... |
Flask app "Restarting with stat" | 28,241,989 | 24 | 2015-01-30T18:05:19Z | 28,242,285 | 26 | 2015-01-30T18:23:23Z | [
"python",
"flask"
] | I've built a few Flask apps, but on my latest project I noticed something a little strange in development mode. The second line of the usual message in the terminal which always reads:
```
* Running on http://127.0.0.1:5000/
* Restarting with reloader
```
has been replaced by:
```
* Restarting with stat
```
I do... | Check your version of Werkzeug. Version 0.10 was just released and numerous changes went into the reloader. One change is that a default polling reloader is used; the old pyinotify reloader was apparently inaccurate. If you want more efficient polling, install the [`watchdog`](https://pypi.python.org/pypi/watchdog) pac... |
Correct way to obtain confidence interval with scipy | 28,242,593 | 11 | 2015-01-30T18:41:56Z | 28,243,282 | 20 | 2015-01-30T19:32:08Z | [
"python",
"numpy",
"scipy",
"confidence-interval"
] | I have a 1-dimensional array of data:
```
a = np.array([1,2,3,4,4,4,5,5,5,5,4,4,4,6,7,8])
```
for which I want to obtain the 68% confidence interval (ie: the [1 sigma](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule)).
The first comment in [this answer](http://stackoverflow.com/a/15034101/1391441) stat... | The 68% confidence interval for **a single draw** from a normal distribution with
mean mu and std deviation sigma is
```
stats.norm.interval(0.68, loc=mu, scale=sigma)
```
The 68% confidence interval for **the mean of N draws** from a normal distribution
with mean mu and std deviation sigma is
```
stats.norm.interva... |
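The same interval can be computed without SciPy using the stdlib `statistics.NormalDist` (Python 3.8+); a sketch of the single-draw case:

```python
from statistics import NormalDist

def normal_interval(confidence, mu, sigma):
    # Central interval: put (1 - confidence)/2 probability in each tail
    tail = (1 - confidence) / 2
    dist = NormalDist(mu, sigma)
    return dist.inv_cdf(tail), dist.inv_cdf(1 - tail)

lo, hi = normal_interval(0.68, mu=0.0, sigma=1.0)
assert abs(lo + hi) < 1e-9   # symmetric about the mean
assert 0.99 < hi < 1.00      # the z value for a 68% interval is ~0.9945
```

For the mean of N draws, pass `sigma / sqrt(N)` as the scale, mirroring the distinction the answer draws.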
python equivalent of get() in R (= use string to retrieve value of symbol) | 28,245,607 | 11 | 2015-01-30T22:20:17Z | 28,245,646 | 11 | 2015-01-30T22:23:37Z | [
"python"
] | In R, the `get(s)` function retrieves the value of the symbol whose name is stored in the character variable (vector) `s`, e.g.
```
X <- 10
r <- "XVI"
s <- substr(r,1,1) ## "X"
get(s) ## 10
```
takes the first symbol of the Roman numeral `r` and translates it to its integer equivalent.
Despite spending a... | You can use `locals`:
```
s = 1
locals()['s']
```
EDIT:
Actually, `get` in R is more versatile - `get('as.list')` will give you back `as.list`. For class members, in Python, we can use `getattr` ([here](https://docs.python.org/2/library/functions.html#getattr)), and for built-in things like `len`, `getattr(__builtin... |
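A sketch tying the pieces together: a namespace dict (such as `globals()`, `locals()`, or `vars(obj)`) for plain names, and `getattr` for attributes and builtins:

```python
import builtins

env = {"X": 10}          # any namespace dict: globals(), locals(), vars(obj)
r = "XVI"
s = r[0]                 # "X", like substr(r, 1, 1) in R
assert env[s] == 10      # the equivalent of R's get(s)

# getattr covers attribute lookup and builtins, e.g. get("len") in R
assert getattr(builtins, "len")("abc") == 3
```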
Type error Iter - Python3 | 28,246,107 | 4 | 2015-01-30T23:07:16Z | 28,246,137 | 9 | 2015-01-30T23:10:03Z | [
"python",
"python-2.7",
"python-3.x"
] | Can someone please explain why the following code is giving
```
TypeError: iter() returned non-iterator of type 'counter' in python 3
```
This is working in python 2.7.3 without any error.
```
#!/usr/bin/python3
class counter(object):
def __init__(self,size):
self.size=size
self.start=0
d... | In python3.x you need to use [`__next__()`](https://docs.python.org/3/library/stdtypes.html#iterator.__next__) instead of `next()` .
from [What's New In Python 3.0](https://docs.python.org/3.0/whatsnew/3.0.html):
> [PEP 3114](https://www.python.org/dev/peps/pep-3114/): the standard next() method has been renamed to... |
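A version of the question's `counter` class ported to Python 3 with `__next__`; a sketch reconstructing the likely intent of the truncated code:

```python
class Counter:
    def __init__(self, size):
        self.size = size
        self.current = 0

    def __iter__(self):
        return self

    def __next__(self):          # was spelled next() in Python 2
        if self.current >= self.size:
            raise StopIteration
        value = self.current
        self.current += 1
        return value

assert list(Counter(3)) == [0, 1, 2]
```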
Problems in implementing Horner's method in Python | 28,250,401 | 16 | 2015-01-31T10:05:26Z | 28,251,288 | 14 | 2015-01-31T11:53:37Z | [
"python",
"function",
"loops",
"iteration",
"polynomial-math"
] | So I have written down the codes for evaluating polynomial using three different methods. Horner's method should be the fastest, while the naive method should be the slowest, right? But how come the time for computing it is not what I expect? And the time for calculation sometimes turns out to be exactly the same for i... | Your profiling can be much improved. Plus, we can make your code run 200-500x faster.
---
(1) **Rinse and repeat**
You can't run just one iteration of a performance test, for two reasons.
1. Your time resolution might not be good enough. This is why you sometimes got the same time for two implementations: the time ... |
functional pipes in python like %>% from dplyr | 28,252,585 | 10 | 2015-01-31T14:15:26Z | 28,252,765 | 10 | 2015-01-31T14:33:50Z | [
"python",
"dplyr",
"macropy"
] | In R (thanks to `dplyr`) you can now perform operations with a more functional piping syntax via `%>%`. This means that instead of coding this:
```
> as.Date("2014-01-01")
> as.character((sqrt(12)^2)
```
You could also do this:
```
> "2014-01-01" %>% as.Date
> 12 %>% sqrt %>% .^2 %>% as.character
```
To me this is... | > Does the python language have support for something similar?
*"more functional piping syntax"* is this really a more "functional" syntax ? I would say it adds an "infix" syntax to R instead.
That being said, the [Python's grammar](https://docs.python.org/3/reference/grammar.html) does not have direct support for in... |
functional pipes in python like %>% from dplyr | 28,252,585 | 10 | 2015-01-31T14:15:26Z | 28,257,073 | 9 | 2015-01-31T22:08:36Z | [
"python",
"dplyr",
"macropy"
] | In R (thanks to `dplyr`) you can now perform operations with a more functional piping syntax via `%>%`. This means that instead of coding this:
```
> as.Date("2014-01-01")
> as.character((sqrt(12)^2)
```
You could also do this:
```
> "2014-01-01" %>% as.Date
> 12 %>% sqrt %>% .^2 %>% as.character
```
To me this is... | One possible way of doing this is by using a module called [`macropy`](https://github.com/lihaoyi/macropy). Macropy allows you to apply transformations to the code that you have written. Thus `a | b` can be transformed to `b(a)`. This has a number of advantages and disadvantages.
In comparison to the solution mentione... |
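The infix piping both answers discuss can also be emulated in plain Python with operator overloading; a minimal sketch (this `Pipe` class is hypothetical, not a real library):

```python
import math

class Pipe:
    """Wrap a one-argument function so that `value | fn` means fn(value)."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, value):      # invoked for: value | self
        return self.fn(value)

sqrt = Pipe(math.sqrt)
square = Pipe(lambda x: x ** 2)

assert (16 | sqrt | square) == 16.0
```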
You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application | 28,253,681 | 70 | 2015-01-31T16:10:29Z | 28,254,860 | 28 | 2015-01-31T18:10:26Z | [
"python",
"django",
"postgresql"
] | I am working on a Django project with virtualenv and connecting it to a local Postgres database. When I run the project it says,
```
ImportError: No module named psycopg2.extensions
```
then I used this command to install it:
```
pip install psycopg2
```
then during the installation it gives the following error.
```
Downloading/... | I just ran this command as root from the terminal and the problem was solved:
```
sudo apt-get install -y postgis postgresql-9.3-postgis-2.1
pip install psycopg2
```
or
```
sudo apt-get install libpq-dev python-dev
pip install psycopg2
``` |
You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application | 28,253,681 | 70 | 2015-01-31T16:10:29Z | 28,938,258 | 149 | 2015-03-09T09:04:14Z | [
"python",
"django",
"postgresql"
] | I am working on a Django project with virtualenv and connecting it to a local Postgres database. When I run the project it says,
```
ImportError: No module named psycopg2.extensions
```
then I used this command to install it:
```
pip install psycopg2
```
then during the installation it gives the following error.
```
Downloading/... | Use the following commands; this will solve the error:
```
sudo apt-get install postgresql
```
then fire:
```
sudo apt-get install python-psycopg2
```
and last:
```
sudo apt-get install libpq-dev
``` |
You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application | 28,253,681 | 70 | 2015-01-31T16:10:29Z | 33,808,572 | 11 | 2015-11-19T15:54:53Z | [
"python",
"django",
"postgresql"
] | I am working on a Django project with virtualenv and connecting it to a local Postgres database. When I run the project it says,
```
ImportError: No module named psycopg2.extensions
```
then I used this command to install it:
```
pip install psycopg2
```
then during the installation it gives the following error.
```
Downloading/... | For me this simple command solved the problem:
```
sudo apt-get install postgresql postgresql-contrib libpq-dev python-dev
```
Then I can do:
```
pip install psycopg2
``` |
Is there a scala/java equivalent of Python 3's collections.Counter | 28,254,447 | 4 | 2015-01-31T17:29:12Z | 28,254,549 | 8 | 2015-01-31T17:38:54Z | [
"java",
"python",
"scala"
] | I want a class that will count the number of objects I have - that sounds more efficient than gathering all the objects and then grouping them.
Python has an ideal structure in [collections.Counter](https://docs.python.org/2/library/collections.html#collections.Counter), does Java or Scala have a similar type? | From the documentation that you linked:
> The Counter class is similar to bags or multisets in other languages.
Java does not have a `Multiset` class, or an analogue. [Guava](https://code.google.com/p/guava-libraries/) has a [`MultiSet`](http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect... |
Why does python max('a', 5) return the string value? | 28,258,531 | 2 | 2015-02-01T01:34:24Z | 28,258,554 | 8 | 2015-02-01T01:39:09Z | [
"python",
"max",
"min"
] | Tracing back a `ValueError: cannot convert float NaN to integer` I found out that the line:
```
max('a', 5)
max(5, 'a')
```
will return `a` instead of 5.
In the above case I used the example string `a` but in my actual case the string is a `NaN` (the result of a fitting process that failed to converge).
What is the... | In Python 2, numeric values always sort before strings and almost all other types:
```
>>> sorted(['a', 5])
[5, 'a']
```
Numbers then, are considered *smaller* than strings. When using `max()`, that means the string is picked over a number.
That numbers are smaller is an arbitrary implementation choice. See the [*Co... |
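Since the `NaN` in the question comes from a failed fit, a defensive sketch is to drop NaNs before comparing (note that on Python 3 a mixed `str`/`int` `max()` raises `TypeError` instead of silently ordering):

```python
import math

def safe_max(values):
    # Drop NaNs (e.g. results of a fit that failed to converge) before max()
    clean = [v for v in values if not (isinstance(v, float) and math.isnan(v))]
    return max(clean)

assert safe_max([5.0, float("nan"), 3.0]) == 5.0
```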
How to convert an XML file to nice pandas dataframe? | 28,259,301 | 7 | 2015-02-01T03:58:53Z | 28,267,291 | 8 | 2015-02-01T20:08:36Z | [
"python",
"xml",
"python-2.7",
"parsing",
"pandas"
] | Let's assume that I have an XML like this:
```
<type="XXX" language="EN" gender="xx" feature="xx" web="foobar.com">
<count="N">
<KEY="e95a9a6c790ecb95e46cf15bee517651" web="www.foo_bar_exmaple.com"><![CDATA[A large text with lots of strings and punctuations symbols [...]
]]>
</document>
<KE... | As written, your XML is not valid. I'm guessing it's supposed to be
```
<author type="XXX" language="EN" gender="xx" feature="xx" web="foobar.com">
<documents count="N">
<document KEY="e95a9a6c790ecb95e46cf15bee517651" web="www.foo_bar_exmaple.com"><![CDATA[A large text with lots of strings and punctuation... |
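A stdlib sketch of the parse step, using the corrected element names the answer guesses at (`author`/`documents`/`document`); the resulting list of dicts can be fed straight to a DataFrame constructor:

```python
import xml.etree.ElementTree as ET

xml_text = """<author type="XXX" language="EN">
  <documents count="2">
    <document KEY="k1">first text</document>
    <document KEY="k2">second text</document>
  </documents>
</author>"""

root = ET.fromstring(xml_text)
rows = [{"KEY": d.get("KEY"), "text": d.text} for d in root.iter("document")]

assert len(rows) == 2
assert rows[0] == {"KEY": "k1", "text": "first text"}
```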
Cython package with __init__.pyx: Possible? | 28,261,147 | 19 | 2015-02-01T09:07:39Z | 32,067,984 | 8 | 2015-08-18T09:01:44Z | [
"python",
"python-2.7",
"cython",
"python-import",
"python-c-extension"
] | Is it possible to create a Python 2.7 package using `__init__.pyx` (compiled to `__init__.so`)? If so how? I haven't had any luck getting it to work.
Here is what I have tried:
* `setup.py`:
```
#!/usr/bin/env python
from distutils.core import setup
from distutils.extension import Extension
from Cython.Di... | According to [this really old mailing list post](https://mail.python.org/pipermail/python-ideas/2008-October/002292.html) it works if you also have an `__init__.py` file (the `__init__.py` file is not used, but seems to be necessary for the directory to be treated as a module, and hence the `__init__.so` file to be loa... |
Any better way for doing a = b + a? | 28,262,205 | 4 | 2015-02-01T11:31:39Z | 28,262,229 | 10 | 2015-02-01T11:33:43Z | [
"python",
"string",
"pycharm"
] | I'm using PyCharm and I have this statement:
```
a = 'foo'
b = 'bar'
a = b + a
```
and PyCharm highlights the last line saying that:
> Assignment can be replaced with augmented assignment
First I thought there might be something like this but ended up with error:
```
a += b # 'foobar'
a =+ b # TypeError: bad opera... | Just ignore PyCharm, it is being obtuse. The remark clearly doesn't apply when the operands cannot just be swapped.
The hint works for numeric operands because `a + b` produces the same result as `b + a`, but for strings addition is not commutative and PyCharm should just keep out of it.
If you really want to avoid... |
Plotting in a non-blocking way with Matplotlib | 28,269,157 | 11 | 2015-02-01T23:23:00Z | 33,050,617 | 16 | 2015-10-10T05:22:46Z | [
"python",
"matplotlib",
"plot"
] | I have been playing with Numpy and matplotlib in the last few days. I am having problems trying to make matplotlib plot a function without blocking execution. I know there are already many threads here on SO asking similar questions, and I've googled quite a lot but haven't managed to make this work.
I have tried usi... | I spent a long time looking for solutions, and found [this answer](http://stackoverflow.com/questions/11874767/real-time-plotting-in-while-loop-with-matplotlib).
It looks like, in order to get what you (and I) want, you need the combination of `plt.ion()`, `plt.show()` (not with `blocking=False`, that's deprecated) an... |
Curve curvature in numpy | 28,269,379 | 5 | 2015-02-01T23:55:51Z | 28,270,382 | 10 | 2015-02-02T02:34:13Z | [
"python",
"numpy"
] | I am measuring x,y coordinates (in cm) of an object with a special camera in fixed time intervals of 1s. I have the data in a numpy array:
```
a = np.array([ [ 0. , 0. ],[ 0.3 , 0. ],[ 1.25, -0.1 ],[ 2.1 , -0.9 ],[ 2.85, -2.3 ],[ 3.8 , -3.95],[ 5. , -5.75],[ 6.4 , -7.8 ],[ 8.05, -9.9 ],[ 9.9 ,... | **EDIT**: I put together this answer off and on over a couple of hours, so I missed your latest edits indicating that you only needed curvature. Hopefully, this answer will be helpful regardless.
Other than doing some curve-fitting, our method of approximating derivatives is via [finite differences](http://en.wikipedi... |
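A minimal finite-difference sketch of the curvature formula the answer builds toward, kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), sanity-checked on a circle whose curvature is 1/R (assumes NumPy):

```python
import numpy as np

def curvature(x, y):
    # Finite-difference derivatives; a uniform parameter spacing cancels
    # out of the ratio, so unit spacing in np.gradient is fine here
    dx, dy = np.gradient(x), np.gradient(y)
    d2x, d2y = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * d2y - dy * d2x) / (dx ** 2 + dy ** 2) ** 1.5

# On a circle of radius 2 the curvature should be 1/R = 0.5
t = np.linspace(0, 2 * np.pi, 400)
k = curvature(2 * np.cos(t), 2 * np.sin(t))
assert abs(k[200] - 0.5) < 0.01
```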
Determining if string is palindrome - Python | 28,269,963 | 3 | 2015-02-02T01:28:59Z | 28,269,985 | 7 | 2015-02-02T01:32:01Z | [
"python"
] | I wrote two simple functions to determine if a string is a palindrome. I thought they were equivalent, but 2 doesn't work. Why is this?
**1**
```
def is_palindrome(string):
if string == string[::-1]:
return True
else:
return False
```
**2**
```
def is_palindrome(string):
if string == rev... | `reversed` doesn't create a string but a 'reversed' object:
```
>>> reversed('radar')
<reversed object at 0x1102f99d0>
```
As such, the string `'radar'` does not compare equal to the object `reversed('radar')`. To make it work, you need to make sure the `reversed` object is actually evaluated:
```
def is_palindrome(... |
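The fix the answer describes, forcing the lazy `reversed` object to be evaluated, written out in full:

```python
def is_palindrome(string):
    # reversed() returns a lazy iterator; join it back into a string first
    return string == "".join(reversed(string))

assert is_palindrome("radar")
assert not is_palindrome("radio")
```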
SQLAlchemy filter query "column LIKE ANY (array)" | 28,270,708 | 2 | 2015-02-02T03:26:18Z | 28,270,753 | 7 | 2015-02-02T03:36:27Z | [
"python",
"sqlalchemy",
"any"
] | Hi SQLAlchemy experts out there, here's a tricky one for you:
I'm trying to write a query that resolves into something like:
```
SELECT * FROM MyTable where my_column LIKE ANY (array['a%', 'b%'])
```
using SQLAlchemy:
```
foo = ['a%', 'b%']
# this works, but is dirty and silly
DBSession().query(MyTable).filter("my... | Use [or\_()](http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#common-filter-operators) and `like()`, the following code should satisfy your need well:
```
from sqlalchemy import or_
foo = ['a%', 'b%']
DBSession().query(MyTable).filter(or_(*[MyTable.my_column.like(name) for name in foo]))
```
A where condition... |
Pandas How to filter a Series | 28,272,137 | 8 | 2015-02-02T06:21:20Z | 28,272,238 | 15 | 2015-02-02T06:31:31Z | [
"python",
"pandas"
] | I have a Series like this after doing groupby('name') and applying the mean() function to another column
```
name
383 3.000000
663 1.000000
726 1.000000
737 9.000000
833 8.166667
```
Could anyone please show me how to filter out the rows with 1.000000 mean values? Thank you and I greatly appreciate you... | ```
In [5]:
import pandas as pd
test = {
383: 3.000000,
663: 1.000000,
726: 1.000000,
737: 9.000000,
833: 8.166667
}
s = pd.Series(test)
s = s[s != 1]
s
Out[0]:
383 3.000000
737 9.000000
833 8.166667
dtype: float64
``` |
make django model field read only or disable in admin while saving the object first time | 28,275,239 | 5 | 2015-02-02T10:10:40Z | 28,281,660 | 10 | 2015-02-02T15:58:42Z | [
"python",
"django",
"django-admin",
"django-admin-actions"
] | I want to disable a few fields from the model in the Django admin when saving initially.
```
"<input type="text" id="disabledTextInput" class="form-control" placeholder="Disabled input">"
```
like this.
My model is:
```
class Blogmodel(models.Model):
tag = models.ForeignKey(Tag)
headline = models.CharField(max_l... | There is no need to override `get_readonly_fields`. Simplest solution would be:
```
class ItemAdmin(admin.ModelAdmin):
exclude=("headline ",)
readonly_fields=('headline', )
```
When using `readonly_fields` you can't override `get_readonly_fields`, because default implementation reads readonly\_fields variable... |
Check for areas that are too thin in an image | 28,275,352 | 4 | 2015-02-02T10:16:26Z | 28,276,163 | 9 | 2015-02-02T10:55:35Z | [
"python",
"matlab",
"opencv",
"image-processing"
] | I am trying to validate black and white images (clipart-type images - not photos) for an engraving machine.
One of the major things I need to take into consideration is the size of areas (or width of lines) since the machine can't handle lines that are too thin - so I need to find areas that are thinner than a g... | A solution using [matlab](/questions/tagged/matlab "show questions tagged 'matlab'") utilizing image [morphological operations](http://www.mathworks.com/help/images/morphology-fundamentals-dilation-and-erosion.html#f18-24720):
Define the minimal thickness of allowed area, for example, `minThick=4`
```
BW = imread('ht... |
Write a list in a python csv file, one new row per list | 28,277,150 | 5 | 2015-02-02T11:51:15Z | 28,277,372 | 12 | 2015-02-02T12:03:28Z | [
"python",
"list",
"csv"
] | I have the following source code, where I am trying to write a list in a csv file. I need every new list to be written in a new line of this csv file. The source code is the following:
```
import csv
list1=[55,100,'dir1/dir2/dir3/file.txt',0.8]
resultFile = open("output.csv",'wa')
wr = csv.writer(resultFile, dialect=... | Open file in append mode.
```
import csv
list1=[58,100,'dir1/dir2/dir3/file.txt',0.8]
with open("output.csv", "a") as fp:
wr = csv.writer(fp, dialect='excel')
wr.writerow(list1)
```
---
More on file [open modes](https://docs.python.org/2/library/functions.html#open)
try following:-
```
>>> with open('test... |
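The same one-row-per-`writerow` behaviour shown with an in-memory buffer so the layout is easy to inspect; `io.StringIO` stands in for the output file:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, dialect="excel")
for row in ([55, 100, "dir1/dir2/dir3/file.txt", 0.8],
            [58, 100, "dir1/dir2/dir3/file.txt", 0.9]):
    writer.writerow(row)   # each list becomes one CSV line

lines = buf.getvalue().splitlines()
assert len(lines) == 2
assert lines[0] == "55,100,dir1/dir2/dir3/file.txt,0.8"
```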
How do I debug efficiently with spyder in Python? | 28,280,308 | 13 | 2015-02-02T14:48:09Z | 28,285,708 | 14 | 2015-02-02T19:57:32Z | [
"python",
"debugging",
"spyder"
] | I like Python and I like Spyder but I find debugging with Spyder terrible!
* Every time I put a break point, I need to press two buttons: first
the debug and then the continue button (it pauses at first line
automatically) which is annoying.
* Moreover, rather than having the standard iPython console with auto com... | (*Spyder dev here*) We're aware the debugging experience in Spyder is far from ideal. What we offer right now is very similar to the standard Python debugger, but we're working to improve things in our next major version to provide something closer to what any scientist would expect of a debugger (in short, a regular I... |
Can't setFont(Times-Roman) missing the T1 files? | 28,281,891 | 4 | 2015-02-02T16:11:27Z | 28,291,975 | 8 | 2015-02-03T05:30:52Z | [
"python",
"openerp",
"reportlab",
"odoo"
] | I have the error:
`Can't find .pfb for face 'Times-Roman'
Error: reportlab.graphics.renderPM.RenderPMError: Can't setFont(Times-Roman) missing the T1 files?`
I think the Times-Roman font is not being found.
Does anyone have a solution for this?
Thanks. | After some R&D I found the solution for this... and it works for me...
-> This error is also posted in [*bugs.debian.org*](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=745985), and a patch is provided to avoid this error.
-> And another solution is (*I prefer this one*):
Download this [*pfbfer.zip*](http://www.repor... |
Revert Ubuntu 14.04 to default python after uninstalling anaconda | 28,283,695 | 7 | 2015-02-02T17:50:36Z | 28,283,930 | 9 | 2015-02-02T18:04:20Z | [
"python",
"ubuntu",
"anaconda"
] | I am new to Ubuntu and the whole Linux environment. I installed Anaconda on my system, but I would like to use the default Python now for some reason. I removed the anaconda directory, but now the system can't find the Python installation (obviously, but I don't know how to get to the right one).
Can someone write out ... | In your `.bashrc` file you will have a line like the following, which Anaconda adds during the installation process:
```
export PATH=$HOME/anaconda/bin:$PATH
```
You need to remove that line, do a `source .bashrc` and type python, that should open a shell using your system default python. |
Python pi calculation? | 28,284,996 | 8 | 2015-02-02T19:13:03Z | 28,285,228 | 11 | 2015-02-02T19:27:12Z | [
"python",
"algorithm",
"pi"
] | I am a python beginner and I want to calculate pi. I tried using the Chudnovsky algorithm because I heard that it is faster than other algorithms.
This is my code:
```
from math import factorial
from decimal import Decimal, getcontext
getcontext().prec=100
def calc(n):
t= Decimal(0)
pi = Decimal(0)
deno... | It seems you are losing precision in this line:
```
pi = pi * Decimal(12)/Decimal(640320**(1.5))
```
Try using:
```
pi = pi * Decimal(12)/Decimal(640320**Decimal(1.5))
```
This happens because even though Python can handle arbitrary scale integers, it doesn't do so well with floats.
**Bonus**
A single line implem... |
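The precision point is easy to demonstrate: exponentiating in float first throws away digits that `Decimal` exponentiation keeps:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

via_float = Decimal(640320 ** 1.5)                 # power computed in 53-bit float
via_decimal = Decimal(640320) ** Decimal("1.5")    # power computed at full precision

# The two agree only to roughly float precision (~16 significant digits)
assert via_float != via_decimal
assert str(via_float)[:10] == str(via_decimal)[:10]
```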
Connection Timeout with Elasticsearch | 28,287,261 | 16 | 2015-02-02T21:39:20Z | 31,083,774 | 11 | 2015-06-27T00:00:36Z | [
"python",
"python-2.7",
"elasticsearch"
] | ```
from datetime import datetime
from elasticsearch import Elasticsearch
es = Elasticsearch()
doc = {
'author': 'kimchy',
'text': 'Elasticsearch: cool. bonsai cool.',
'timestamp': datetime(2010, 10, 10, 10, 10, 10)
}
res = es.index(index="test-index", doc_type='tweet', id=1, body=doc)
print(res['created']... | By default, the timeout value is set to 10 secs. If one wants to change the global timeout value, this can be achieved by setting the flag **timeout=your-time** while creating the object.
If you have already created the object without specifying the timeout value, then you can set the timeout value for particular requ... |
What's the most efficient way to search a list millions of times? | 28,288,022 | 5 | 2015-02-02T22:29:51Z | 28,288,128 | 7 | 2015-02-02T22:37:11Z | [
"python"
] | I know the simple way to search would be to have a list containing the strings, and just do `if string in list`, but it gets slow, and I've heard dictionary keys practically have no slowdown with large sets due to the fact they're not ordered.
However, I don't need any extra information relating to the items, so it fe... | You should use a [set](https://docs.python.org/2/library/stdtypes.html#types-set) in this case. Sets have the same lookup time as dictionaries ([constant](https://wiki.python.org/moin/TimeComplexity)), but they consist of individual items instead of key/value pairs. So, you get the same speed for less memory and a bett... |
Fast rolling-sum | 28,288,252 | 2 | 2015-02-02T22:44:47Z | 28,288,535 | 7 | 2015-02-02T23:06:49Z | [
"python",
"numpy",
"sliding-window"
] | I am looking for a fast way to compute a rolling-sum, possibly using Numpy. Here is my first approach:
```
def func1(M, w):
Rtn = np.zeros((M.shape[0], M.shape[1]-w+1))
for i in range(M.shape[1]-w+1):
Rtn[:,i] = np.sum(M[:, i:w+i], axis=1)
return Rtn
M = np.array([[0., 0., 0., 0., 0., 1... | Adapted from @Jaime's answer here: <http://stackoverflow.com/a/14314054/553404>
```
import numpy as np
def rolling_sum(a, n=4) :
ret = np.cumsum(a, axis=1, dtype=float)
ret[:, n:] = ret[:, n:] - ret[:, :-n]
return ret[:, n - 1:]
M = np.array([[0., 0., 0., 0., 0., 1., 1., 0., 1., 1., 1., 0., 0... |
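The cumulative-sum trick in the answer, restated in pure Python so the index arithmetic is visible; NumPy's `cumsum` version is just a vectorised form of this:

```python
def rolling_sum(a, n):
    # Prefix-sum trick: window sum = cumsum[i + n] - cumsum[i]
    cs = [0]
    for v in a:
        cs.append(cs[-1] + v)
    return [cs[i + n] - cs[i] for i in range(len(a) - n + 1)]

assert rolling_sum([1, 2, 3, 4, 5], 2) == [3, 5, 7, 9]
```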
next() doesn't play nice with any/all in python | 28,288,871 | 24 | 2015-02-02T23:39:03Z | 28,289,010 | 9 | 2015-02-02T23:52:14Z | [
"python",
"generator",
"generator-expression"
] | I ran down a bug today that came about because I was using `next()` to extract a value, and 'not found' emits a `StopIteration`.
Normally that would halt the program, but the function using `next` was being called inside an `all()` iteration, so the `all` just terminated early and returned `True`.
Is this an expected... | The problem isn't in using `all`, it's that you have a generator expression as the parameter to `all`. The `StopIteration` gets propagated to the generator expression, which doesn't really know where it originated, so it does the usual thing and ends the iteration.
You can see this by replacing your `error` function w... |
next() doesn't play nice with any/all in python | 28,288,871 | 24 | 2015-02-02T23:39:03Z | 28,289,253 | 21 | 2015-02-03T00:17:46Z | [
"python",
"generator",
"generator-expression"
] | I ran down a bug today that came about because I was using `next()` to extract a value, and 'not found' emits a `StopIteration`.
Normally that would halt the program, but the function using `next` was being called inside an `all()` iteration, so the `all` just terminated early and returned `True`.
Is this an expected... | While this is expected behaviour in **existing** versions of Python at the time of writing, it is scheduled to be changed over the course of the next few point releases of Python 3.x - to quote [PEP 479](https://www.python.org/dev/peps/pep-0479/#rationale):
> The interaction of generators and `StopIteration` is curren... |
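The usual defensive fix is to give `next()` a default, so a 'not found' can never leak a `StopIteration` into an enclosing `all()`/`any()`; once PEP 479 is in force, such a leak becomes a `RuntimeError` instead (the threshold and helper name here are illustrative):

```python
def first_over(row, threshold=10):
    # The default (None) is returned instead of raising StopIteration
    return next((x for x in row if x > threshold), None)

rows = [[1, 2], [3, 44]]
assert first_over([1, 2]) is None
assert all(first_over(r) is not None for r in rows) is False
```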
Creating an empty MultiIndex | 28,289,440 | 2 | 2015-02-03T00:38:17Z | 31,310,689 | 7 | 2015-07-09T07:22:23Z | [
"python",
"pandas",
"multi-index"
] | I would like to create an **empty** **DataFrame** with a **MultiIndex** before assigning rows to it. I already found that empty DataFrames don't like to be assigned MultiIndexes on the fly, so I'm setting the MultiIndex **names** during creation. However, I don't want to assign **levels**, as this will be done later. T... | The solution is to leave out the labels. This works fine for me:
```
>>> my_index = pd.MultiIndex(levels=[[],[],[]],
labels=[[],[],[]],
names=[u'one', u'two', u'three'])
>>> my_index
MultiIndex(levels=[[], [], []],
labels=[[], [], []],
nam... |
Python web scraping for javascript generated content | 28,289,699 | 4 | 2015-02-03T01:07:56Z | 28,289,762 | 8 | 2015-02-03T01:14:25Z | [
"javascript",
"python",
"web-scraping",
"scrape"
] | I am trying to use python3 to return the bibtex citation generated by <http://www.doi2bib.org/>. The URLs are predictable, so the script can work out the URL without having to interact with the web page. I have tried using selenium, bs4, etc. but can't get the text inside the box.
```
url = "http://www.doi2bib.org/#/doi... | You don't need `BeautifulSoup` here. There is an *additional XHR request* sent to the server to fill out the bibtex citation, simulate it, for example, with [`requests`](http://docs.python-requests.org/en/latest/):
```
import requests
bibtex_id = '10.1007/s00425-007-0544-9'
url = "http://www.doi2bib.org/#/doi/{id}".... |
Plotting grouped data in same plot using Pandas | 28,293,028 | 6 | 2015-02-03T06:54:23Z | 28,299,215 | 12 | 2015-02-03T12:40:33Z | [
"python",
"pandas",
"plot"
] | In Pandas, I am doing:
```
bp = p_df.groupby('class').plot(kind='kde')
```
`p_df` is a `DataFrame` object.
However, this is producing two plots, one for each class.
How do I force 1 plot with both classes in the same plot?
thanks | ## Version 1:
You can create your axis, and then use the `ax` keyword of [`DataFrameGroupBy.plot`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.plot.html) to add everything to this axis:
```
p_df = pd.DataFrame({"class": [1,1,2,2,1], "a": [2,3,2,3,2]})
fig, ax = plt.subpl... |
Python pip install requires server_hostname | 28,296,476 | 9 | 2015-02-03T10:20:56Z | 28,319,621 | 18 | 2015-02-04T10:58:03Z | [
"python",
"openssl"
] | I finished installing pip on Linux, and the `pip list` command works. But when using the `pip install` command I got the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main
status = self.run(options, args)
... | [pip 6.1.0](https://pypi.python.org/pypi/pip/6.1.0) has been released, fixing this issue. You can upgrade with:
```
pip --trusted-host pypi.python.org install -U pip
```
to self-upgrade.
---
*Original answer*:
This is caused by a change in Python 2.7.9, which `urllib3` needs to account for. See [issue #543](https:... |
Python pip install requires server_hostname | 28,296,476 | 9 | 2015-02-03T10:20:56Z | 28,461,573 | 14 | 2015-02-11T18:21:17Z | [
"python",
"openssl"
] | I finished installing pip on Linux, and the `pip list` command works. But when using the `pip install` command I got the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main
status = self.run(options, args)
    ... | I got the same issue, and found that it can be avoided (with pip 6.0.8, in my case) as follows:
```
pip --trusted-host pypi.python.org install <thing>
``` |
RAII in Python: What's the point of __del__? | 28,300,946 | 4 | 2015-02-03T14:10:06Z | 28,302,755 | 7 | 2015-02-03T15:34:02Z | [
"python",
"raii"
] | At first glance, it seems like Python's `__del__` special method offers much the same advantages a destructor has in C++. But according to the Python documentation (<https://docs.python.org/3.4/reference/datamodel.html>), there is **no guarantee** that your object's `__del__` method ever gets called at all!
> It is no... | `__del__` is a finalizer. It is not a destructor. Finalizers and destructors are entirely different animals.
Destructors are called reliably, and only exist in languages with deterministic memory management (such as C++). Python's context managers (the `with` statement) can achieve similar effects in certain circumsta... |
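As a quick illustration of the context-manager alternative mentioned above, `__exit__` runs at a deterministic point even when the body raises (a generic sketch, not tied to any particular resource):

```python
events = []

class Resource:
    def __enter__(self):
        events.append('acquired')
        return self

    def __exit__(self, exc_type, exc, tb):
        events.append('released')  # runs even when the body raises
        return False               # do not swallow the exception

try:
    with Resource():
        raise ValueError('boom')
except ValueError:
    pass

print(events)  # ['acquired', 'released'] -- cleanup ran despite the exception
```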
Mocking a function to raise an Exception to test an except block | 28,305,406 | 7 | 2015-02-03T17:46:31Z | 28,305,779 | 15 | 2015-02-03T18:07:46Z | [
"python",
"unit-testing",
"python-2.7",
"mocking",
"python-mock"
] | I have a function (`foo`) which calls another function (`bar`). If invoking `bar()` raises an `HttpError`, I want to handle it specially if the status code is 404, otherwise re-raise.
I am trying to write some unit tests around this `foo` function, mocking out the call to `bar()`. Unfortunately, I am unable to get the... | Your mock is raising the exception just fine, but the `error.resp.status` value is missing. Rather than use `return_value`, just tell `Mock` that `status` is an attribute:
```
barMock.side_effect = HttpError(mock.Mock(status=404), 'not found')
```
Additional keyword arguments to `Mock()` are set as attributes on the ... |
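Since the question's `foo` and `bar` are not shown in full, here is a hypothetical pair sketching the `side_effect` plus `Mock(status=404)` pattern end to end:

```python
from unittest import mock

class HttpError(Exception):
    """Stand-in for the question's HttpError (assumed shape: .resp.status)."""
    def __init__(self, resp, content):
        super().__init__(content)
        self.resp = resp

def foo(bar):
    # Hypothetical foo: handle 404 specially, re-raise everything else
    try:
        return bar()
    except HttpError as error:
        if error.resp.status == 404:
            return None
        raise

bar_mock = mock.Mock()
# Keyword arguments to Mock() become attributes, so resp.status is real data
bar_mock.side_effect = HttpError(mock.Mock(status=404), 'not found')
print(foo(bar_mock))  # None -- the 404 branch was taken
```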
Compiler can't find Py_InitModule() .. is it deprecated and if so what should I use? | 28,305,731 | 3 | 2015-02-03T18:05:24Z | 28,306,354 | 8 | 2015-02-03T18:40:34Z | [
"python",
"c",
"python-extensions"
] | I am attempting to write a C extension for python. With the code (below) I get the compiler warning:
```
implicit declaration of function 'Py_InitModule'
```
And it fails at run time with this error:
```
undefined symbol: Py_InitModule
```
I have spent literally hours searching for a solution with no joy. I hav... | The code you have would work fine in Python 2.x, but `Py_InitModule` is no longer used in Python 3.x. Nowadays, you create a [`PyModuleDef`](https://docs.python.org/3/c-api/module.html#c.PyModuleDef) structure and then pass a reference to it to [`PyModule_Create`](https://docs.python.org/3/c-api/module.html#c.PyModule_... |
RemovedInDjango18Warning: Creating a ModelForm without either the 'fields' attribute or the 'exclude' attribute is deprecated | 28,306,288 | 12 | 2015-02-03T18:36:42Z | 28,306,347 | 55 | 2015-02-03T18:40:17Z | [
"python",
"django"
] | I am doing a Django project and when I tried to access 127.0.0.1:8000/articles/create, I got the following error in my Ubuntu terminal:
```
/home/(my name)/django_test/article/forms.py:4: RemovedInDjango18Warning: Creating a ModelForm without either the 'fields' attribute or the 'exclude' attribute is deprecated - fo... | For your form, it's a warning, not an error, telling you that in django 1.8, you will need to change your form to
```
from django import forms
from models import Article
class ArticleForm(forms.ModelForm):
class Meta:
model = Article
fields = '__all__' # Or a list of the fields that you want to ... |
What is object() good for? | 28,306,371 | 6 | 2015-02-03T18:41:28Z | 28,306,434 | 11 | 2015-02-03T18:45:33Z | [
"python",
"class",
"object"
] | How is it possible that
```
class EmptyClass:
def __init__(self):
pass
e = EmptyClass()
e.a = 123
```
works and:
```
o = object()
o.a = 123
```
does not (`AttributeError: 'object' object has no attribute 'a'`) while
```
print isinstance(e, object)
>>> True
```
?
What is `object()` good for then, whe... | You cannot add attributes to an instance of `object` because `object` does not have a [`__dict__` attribute](https://docs.python.org/3/library/stdtypes.html#object.__dict__) (which would store the attributes). From the [docs](https://docs.python.org/3/library/functions.html#object):
> `class object`
>
> Return a new f... |
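A small sketch confirming the `__dict__` explanation above:

```python
class EmptyClass:
    pass

e = EmptyClass()
e.a = 123                       # lands in e.__dict__
has_dict_e = hasattr(e, '__dict__')
has_dict_o = hasattr(object(), '__dict__')

try:
    object().a = 123
    raised = False
except AttributeError:
    raised = True               # no __dict__, so nowhere to store 'a'

print(has_dict_e, has_dict_o, raised)  # True False True
```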
Convert a list with repeated keys to a dictionary of lists | 28,306,906 | 3 | 2015-02-03T19:14:43Z | 28,306,929 | 8 | 2015-02-03T19:15:51Z | [
"python",
"dictionary",
"list-comprehension"
] | I have an association `list` with repeated keys:
```
l = [(1, 2), (2, 3), (1, 3), (2, 4)]
```
and I want a `dict` with `list` values:
```
d = {1: [2, 3], 2: [3, 4]}
```
Can I do better than:
```
for (x,y) in l:
try:
z = d[x]
except KeyError:
z = d[x] = list()
z.append(y)
``` | You can use the [`dict.setdefault()` method](https://docs.python.org/3/library/stdtypes.html#dict.setdefault) to provide a default empty list for missing keys:
```
for x, y in l:
d.setdefault(x, []).append(y)
```
or you could use a [`defaultdict()` object](https://docs.python.org/3/library/collections.html#collec... |
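Both variants side by side, using the list from the question:

```python
from collections import defaultdict

l = [(1, 2), (2, 3), (1, 3), (2, 4)]

d = {}
for x, y in l:
    d.setdefault(x, []).append(y)   # supplies [] the first time a key is seen

dd = defaultdict(list)              # the list factory runs on missing keys
for x, y in l:
    dd[x].append(y)

print(d)         # {1: [2, 3], 2: [3, 4]}
print(dict(dd))  # {1: [2, 3], 2: [3, 4]}
```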
How to start two instances of Spyder with Python 2.7 & Python 3.4? | 28,318,322 | 4 | 2015-02-04T09:53:28Z | 28,469,566 | 8 | 2015-02-12T04:53:55Z | [
"python",
"python-2.7",
"python-3.x",
"spyder"
] | I had spyder installed with Python 3.4 on Windows Vista.
Today I wanted to run spyder with Python 2.7. So, went through [this post](http://stackoverflow.com/questions/24405561/how-to-install-2-anacondas-python-2-7-and-3-4-on-mac-os-10-9) & installed Python 2.7 for spyder. Now, how do I start spyder with Python 2.7 ins... | @Roberto: Gotcha!
Learnt that we can check for environments installed in conda using `conda info -e`
It showed the path for installed python 2 environment as `C:\Users\ramprasad.g\AppData\Local\Continuum\Anaconda3\envs\python2\`
Spyder IDE for Python 2 was in Scripts folder inside envs\python2 (`C:\Users\ramprasad.g... |
How to get a list of values into a flag in Golang? | 28,322,997 | 8 | 2015-02-04T13:46:35Z | 28,323,276 | 14 | 2015-02-04T14:00:21Z | [
"python",
"go"
] | What is Golang's equivalent of the below python commands ?
```
import argparse
parser = argparse.ArgumentParser(description="something")
parser.add_argument("-getList1",nargs='*',help="get 0 or more values")
parser.add_argument("-getList2",nargs='?',help="get 1 or more values")
```
I have seen that the flag package a... | You can define your own `flag.Value` and use `flag.Var()` for binding it.
The example is [here](http://play.golang.org/p/Ig5sm7jA14).
Then you can pass multiple flags like following:
```
go run your_file.go --list1 value1 --list1 value2
```
UPD: including code snippet right there just in case.
```
package main
im... |
PyCharm and PYTHONPATH | 28,326,362 | 6 | 2015-02-04T16:25:49Z | 28,326,635 | 11 | 2015-02-04T16:37:12Z | [
"python",
"pycharm"
] | I am new to PyCharm. I have a directory that I use for my PYTHONPATH.
That directory is c:\test\my\scripts. In this directory I have some modules I import. It works fine in my python shell. My question is how do I add this directory path to PyCharm so I can import what is in that directory. | You need to make sure each folder representing a package contains an `__init__.py` file, an empty Python file named exactly `__init__.py` (underscore underscore init underscore underscore), which tells the interpreter that the folder is a Python package.
Second thing to look for is that pycharm l... |
Python-Requests, extract url parameters from a string | 28,328,890 | 3 | 2015-02-04T18:38:42Z | 28,328,919 | 12 | 2015-02-04T18:40:33Z | [
"python",
"python-3.x",
"python-requests"
] | I am using this awesome library called [`requests`](http://docs.python-requests.org/en/latest) to maintain python 2 & 3 compatibility and simplify my application requests management.
I have a case where I need to parse a URL and replace one of its parameters. E.g.:
```
http://example.com?param1=a&token=TOKEN_TO_REPLAC... | You cannot use `requests` for this; the library **builds** such URLs if passed a Python structure for the parameters, but does not offer any tools to parse them. That's not a goal of the project.
Stick to the `urllib.parse` method to parse out the parameters. Once you have a dictionary or list of key-value tuples, jus... |
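A sketch of that `urllib.parse` round trip (the token values here are made up):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

url = 'http://example.com?param1=a&token=OLD'   # made-up token value
parts = urlsplit(url)
query = parse_qs(parts.query)                   # {'param1': ['a'], 'token': ['OLD']}
query['token'] = ['NEW_TOKEN']                  # swap in the replacement
rebuilt = urlunsplit(parts._replace(query=urlencode(query, doseq=True)))
print(rebuilt)  # http://example.com?param1=a&token=NEW_TOKEN
```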
RethinkDB losing data after restarting server | 28,329,352 | 6 | 2015-02-04T19:04:15Z | 28,330,153 | 8 | 2015-02-04T19:49:51Z | [
"python",
"ubuntu-14.04",
"rethinkdb",
"rethinkdb-python"
] | I save my data in a RethinkDB database. As long as I don't restart the server, all is well. But when I restart, it gives me an error saying the database doesn't exist, although the folder and data do exist in the folder ***rethinkdb\_data***. What is the problem? | You're almost certainly not losing data; you're just starting RethinkDB without pointing it to the data. Try the following:
* Start RethinkDB from the directory that contains the `rethinkdb_data` directory.
* Alternatively, pass the `-d` flag to RethinkDB to point it to the directory that contains `rethinkdb_data`. Fo... |
Why do Python regex strings sometimes work without using raw strings? | 28,334,871 | 3 | 2015-02-05T01:49:26Z | 28,334,968 | 7 | 2015-02-05T02:02:03Z | [
"python",
"regex",
"string"
] | Python recommends using raw strings when defining regular expressions in the `re` module. From the [Python documentation](https://docs.python.org/2/library/re.html#module-re):
> Regular expressions use the backslash character ('\') to indicate special forms or to allow special characters to be used without invoking th... | The example above works because `\s` and `\d` are not escape sequences in python. According to the docs:
> Unlike Standard C, all unrecognized escape sequences are left in the string unchanged, i.e., the backslash is left in the string.
But it's best to just use raw strings and not worry about what is or isn't a py... |
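A quick check of the quoted rule (note that recent CPython versions warn about unrecognized escape sequences, which is one more reason to prefer raw strings):

```python
import re

# '\d' is not a recognized Python escape, so the backslash survives
unrecognized_same = '\d' == '\\d'      # True
# '\n' IS recognized, so only the raw form keeps the backslash
recognized_differs = '\n' != r'\n'     # True
digits = re.findall(r'\d+', 'a1b22c333')

print(unrecognized_same, recognized_differs, digits)  # True True ['1', '22', '333']
```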
struct.unpack causing TypeError:'int' does not support the buffer interface | 28,335,048 | 5 | 2015-02-05T02:12:20Z | 28,335,117 | 8 | 2015-02-05T02:20:09Z | [
"python",
"python-3.x",
"struct"
] | I am using the struct module in Python 3.4 like so:
```
length = struct.unpack('>B', data[34])[0]
```
`data` looks like this:
```
b'\x03\x01\x0e}GZ\xa8\x8e!\x1c\x7ft\xe8\xf9G\xbf\xb1\xdf\xbek\x8d\xb3\x05e~]N\x97\xad\xcaz\x03tP\x00\x00\x1eV\x00\xc0\n\xc0\t\xc0\x13\xc0\x14\xc0\x07\xc0\x11\x003\x002\x009\x00/\x005\x00\... | That's because you're passing it the contents of `data[34]` which *is* an `int`.
Try using `data[34:35]` instead, which is a one element byte array. |
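A sketch with a shortened stand-in for the question's byte string:

```python
import struct

data = b'\x03\x01\x0e\x2a'       # shortened stand-in for the question's data
byte_as_int = data[3]            # indexing bytes yields an int in Python 3
one_byte = data[3:4]             # a slice keeps the bytes type
(value,) = struct.unpack('>B', one_byte)
print(type(byte_as_int), one_byte, value)  # <class 'int'> b'*' 42
```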
Retrieve the command line arguments of the Python interpreter | 28,336,431 | 15 | 2015-02-05T04:49:55Z | 28,338,254 | 9 | 2015-02-05T07:19:40Z | [
"python",
"command-line",
"command-line-arguments",
"argv"
] | Inspired by [another question here](http://stackoverflow.com/questions/28336157/activating-pythons-optimization-mode-via-script-level-command-line-argument), I would like to retrieve the Python interpreter's full command line in a portable way. That is, I want to get the original `argv` of the interpreter, not the `sys... | You can use ctypes
```
~$ python2 -B -R -u
Python 2.7.9 (default, Dec 11 2014, 04:42:00)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Persistent session history and tab completion are enabled.
>>> import ctypes
>>> argv = ctypes.POINTER(ctypes.c_char_p)()
>>> argc = cty... |
python - simple way to join 2 arrays/lists based on common values | 28,339,454 | 4 | 2015-02-05T08:36:16Z | 28,339,504 | 7 | 2015-02-05T08:40:09Z | [
"python",
"arrays",
"numpy"
] | I have tried for a while but can't find a simple way to join 2 lists or arrays based only on common values, similar to an SQL inner join but with arrays/lists rather than dicts or some other data type. E.g.:
```
a = [1, 2, 3]
b = [2, 3, 4]
join(a, b)
```
prints
```
[2, 3]
```
seems so simple but lacking from python or nu... | Probably a duplicate, but in case it is not:
```
>>> a = [1,2,3]
>>> b = [2,3,4]
>>> list(set(a) & set(b))
[2, 3]
```
For large lists (external data), see [this S.O. answer](http://stackoverflow.com/questions/14614512/merging-two-tables-with-millions-of-rows-in-python). |
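Two details worth noting about the set approach: sets do not preserve order, and membership tests for an order-preserving variant should be done against a set. A sketch:

```python
a = [1, 2, 3]
b = [2, 3, 4]

common = sorted(set(a) & set(b))        # sets do not preserve order, so sort
print(common)                           # [2, 3]

b_set = set(b)                          # O(1) membership tests
ordered = [x for x in a if x in b_set]  # keeps a's original ordering
print(ordered)                          # [2, 3]
```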
Python and Pandas - Moving Average Crossover | 28,345,261 | 5 | 2015-02-05T13:31:59Z | 28,345,879 | 8 | 2015-02-05T13:59:42Z | [
"python",
"numpy",
"pandas"
] | There is a Pandas DataFrame object with some stock data. SMAs are moving averages calculated from the previous 45/15 days.
```
Date Price SMA_45 SMA_15
20150127 102.75 113 106
20150128 103.05 100 106
20150129 105.10 112 105
20150130 105.35 111 105
20150202 107.15 111 ... | I'm taking a crossover to mean when the SMA lines -- as functions of time --
intersect, as depicted on [this investopedia
page](http://www.investopedia.com/terms/s/sma.asp).

Since the SMAs represent continuous functions, there is a crossing when,
for... |
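The crossing condition can also be stated without pandas as a strict sign change of `SMA_15 - SMA_45` between consecutive rows; a plain-Python sketch with made-up values:

```python
# Hypothetical sample values, not the question's full data
sma_45 = [113, 100, 112, 111, 108]
sma_15 = [106, 106, 105, 105, 112]

diff = [s15 - s45 for s15, s45 in zip(sma_15, sma_45)]
# A crossover lies between rows where the difference strictly changes sign
crossovers = [i for i in range(1, len(diff)) if diff[i - 1] * diff[i] < 0]
print(crossovers)  # [1, 2, 4]
```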