| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Python mock multiple return values | 24,897,145 | 28 | 2014-07-22T20:25:17Z | 24,897,297 | 59 | 2014-07-22T20:34:16Z | [
"python",
"unit-testing",
"mocking",
"python-mock"
] | I am using Python's mock.patch and would like to change the return value for each call.
Here is the caveat:
the function being patched has no inputs, so I cannot change the return value based on the input.
Here is my code for reference.
```
def get_boolean_response():
response = io.prompt('y/n').lower()
while... | You can assign an [*iterable*](https://docs.python.org/3/glossary.html#term-iterable) to `side_effect`, and the mock will return the next value in the sequence each time it is called:
```
>>> from unittest.mock import Mock
>>> m = Mock()
>>> m.side_effect = ['foo', 'bar', 'baz']
>>> m()
'foo'
>>> m()
'bar'
>>> m()
'ba... |
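The truncated example in the row above can be completed as a self-contained sketch; the `StopIteration` at the end is standard `unittest.mock` behavior once the iterable assigned to `side_effect` is exhausted:

```python
from unittest.mock import Mock

# Assigning an iterable to side_effect makes each call return the next value.
m = Mock(side_effect=['foo', 'bar', 'baz'])

results = [m(), m(), m()]
print(results)  # ['foo', 'bar', 'baz']

# Once the iterable is exhausted, further calls raise StopIteration.
try:
    m()
except StopIteration:
    print('exhausted')
```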
Check if key exists and iterate the JSON array using Python | 24,898,797 | 18 | 2014-07-22T22:16:51Z | 24,898,931 | 39 | 2014-07-22T22:28:19Z | [
"python",
"json",
"loops"
] | I have a bunch of JSON data from Facebook posts like the one below:
```
{"from": {"id": "8", "name": "Mary Pinter"}, "message": "How ARE you?", "comments": {"count": 0}, "updated_time": "2012-05-01", "created_time": "2012-05-01", "to": {"data": [{"id": "1543", "name": "Honey Pinter"}]}, "type": "status", "id": "id_7"}... | ```
import json
jsonData = """{"from": {"id": "8", "name": "Mary Pinter"}, "message": "How ARE you?", "comments": {"count": 0}, "updated_time": "2012-05-01", "created_time": "2012-05-01", "to": {"data": [{"id": "1543", "name": "Honey Pinter"}]}, "type": "status", "id": "id_7"}"""
def getTargetIds(jsonData):
data ... |
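The answer's `getTargetIds` body is cut off above; this is a hedged reconstruction of its likely intent — collect the ids from the `"to"` field, skipping posts that have no such key. The snake_case name and the empty-list fallback are my choices, not the original code:

```python
import json

jsonData = """{"from": {"id": "8", "name": "Mary Pinter"}, "message": "How ARE you?", "comments": {"count": 0}, "updated_time": "2012-05-01", "created_time": "2012-05-01", "to": {"data": [{"id": "1543", "name": "Honey Pinter"}]}, "type": "status", "id": "id_7"}"""

def get_target_ids(raw):
    data = json.loads(raw)
    # Check the key exists before iterating; some posts have no "to" field.
    if 'to' not in data:
        return []
    return [entry['id'] for entry in data['to']['data']]

print(get_target_ids(jsonData))  # ['1543']
```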
WebDriverException: Message: 'Can not connect to the ChromeDriver'. Error in utils.is_connectable(self.port): | 24,900,922 | 6 | 2014-07-23T02:32:44Z | 25,664,790 | 10 | 2014-09-04T11:52:45Z | [
"python",
"selenium",
"selenium-webdriver",
"selenium-chromedriver"
] | **I am trying to use the chromedriver 2.10 to run my tests on Chrome Browser Version 35.0.1916.114 On CentOS machine**
/home/varunm/EC\_WTF\_0.4.10/EC\_WTF0.4.10\_Project/wtframework/wtf/drivers/chromedriver
Actually, I fixed the path issue; the error message was different when the issue was with the path.
```
d... | **For Linux**
1. Check that you have the latest version of the Chrome browser installed -> **"chromium-browser -version"**
2. If not, install the latest version of the Chrome browser: **"sudo apt-get install chromium-browser"**
3. Get the appropriate version of ChromeDriver from the following link <http://chromedriver.storage.googleapis.com/inde... |
python: How to get column names from pandas dataframe - but only for continuous data type? | 24,901,766 | 10 | 2014-07-23T04:17:22Z | 24,902,313 | 8 | 2014-07-23T05:09:47Z | [
"python",
"pandas"
] | I'm using df.columns.values to make a list of column names which I then iterate over and make charts, etc... but when I set this up I overlooked the non-numeric columns in the df. Now, I'd much rather not simply drop those columns from the df (or a copy of it). Instead, I would like to find a slick way to eliminate the... | Someone will possibly give you a better answer than this, but one thing I tend to do is: if all my numeric data are `int64` or `float64` objects, then you can create a dict of the column data types and then use the values to create your list of columns.
So for example, in a dataframe where I have columns of type `float6... |
python: How to get column names from pandas dataframe - but only for continuous data type? | 24,901,766 | 10 | 2014-07-23T04:17:22Z | 24,907,560 | 8 | 2014-07-23T10:06:18Z | [
"python",
"pandas"
] | I'm using df.columns.values to make a list of column names which I then iterate over and make charts, etc... but when I set this up I overlooked the non-numeric columns in the df. Now, I'd much rather not simply drop those columns from the df (or a copy of it). Instead, I would like to find a slick way to eliminate the... | There's a new feature in 0.14.1, [`select_dtypes`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.select_dtypes.html#pandas.DataFrame.select_dtypes) to select columns by dtype, by providing a list of dtypes to include or exclude.
For example:
```
df = pd.DataFrame({'a': np.random.randn(1000),
... |
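A runnable sketch of the `select_dtypes` approach from this answer; the column names and dtypes here are illustrative:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.random.randn(5),
                   'b': np.random.randint(0, 10, 5),
                   'c': list('abcde')})

# Keep only numeric columns; 'number' matches int and float dtypes alike.
numeric_cols = df.select_dtypes(include=['number']).columns.tolist()
print(numeric_cols)  # ['a', 'b']
```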
Is there an __repr__ equivalent for javascript? | 24,902,061 | 11 | 2014-07-23T04:47:52Z | 24,902,234 | 12 | 2014-07-23T05:02:43Z | [
"javascript",
"python",
"node.js"
] | The closest I got to something close to Python's **repr** is this:
```
function User(name, password){
this.name = name;
this.password = password;
}
User.prototype.toString = function(){
return this.name;
};
var user = new User('example', 'password');
console.log(user.toString()) // but user.n... | [`JSON.stringify`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify) is probably the closest you are going to get from native libraries. It doesn't work well with objects, but you could define your own code to work around that. I searched for libraries that provide this fu... |
how to unpack pkl file | 24,906,126 | 14 | 2014-07-23T08:58:18Z | 25,079,162 | 26 | 2014-08-01T11:22:08Z | [
"python",
"pickle",
"deep-learning",
"mnist"
] | I have a pkl file from MNIST dataset, which consists of handwritten digit images.
I'd like to take a look at each of those digit images, so I need to unpack the pkl file, except I can't find out how.
Is there a way to unpack/unzip pkl file? | ## Generally
Your `pkl` file is, in fact, a serialized `pickle` file, which means it has been dumped using Python's [`pickle`](https://docs.python.org/3/library/pickle.html) module.
To un-pickle the data you can:
```
import pickle
with open('serialized.pkl', 'rb') as f:
data = pickle.load(f)
```
## For the MNI... |
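The un-pickling snippet above, expanded into a full round trip; a temporary file stands in for the MNIST `.pkl` so the sketch is self-contained:

```python
import os
import pickle
import tempfile

data = {'digits': [0, 1, 2], 'label': 'mnist-sample'}

# Dump to a temporary .pkl file, then load it back.
path = os.path.join(tempfile.mkdtemp(), 'serialized.pkl')
with open(path, 'wb') as f:
    pickle.dump(data, f)

with open(path, 'rb') as f:
    restored = pickle.load(f)

print(restored == data)  # True
```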
Insert pandas chart into an Excel file using XlsxWriter | 24,907,210 | 5 | 2014-07-23T09:49:44Z | 24,922,232 | 9 | 2014-07-23T22:37:15Z | [
"python",
"pandas",
"xlsxwriter"
] | I use python 3.4, pandas 0.14.1 and XlsxWriter 0.5.6.
I create a graph called 'graph' using pandas with the following code
```
graph=data_iter['_DiffPrice'].hist()
```
, which produces a beautiful histogram.
Now, how do I insert that graph into an Excel file using XlsxWriter?
I tried the XlsxWriter method
```
wor... | If you would like to export Pandas data as charts in Excel using XlsxWriter then have a look at the following how-to (that I wrote): [Using Pandas and XlsxWriter to create Excel charts](http://pandas-xlsxwriter-charts.readthedocs.io).

If on the other... |
Django 1.7 - makemigrations not detecting changes | 24,912,173 | 82 | 2014-07-23T13:44:54Z | 24,929,525 | 19 | 2014-07-24T09:07:38Z | [
"python",
"django",
"django-1.7",
"django-migrations"
] | As the title says, I can't seem to get migrations working.
The app was originally under 1.6, so I understand that migrations won't be there initially, and indeed if I run `python manage.py migrate` I get:
```
Operations to perform:
Synchronize unmigrated apps: myapp
Apply all migrations: admin, contenttypes, auth... | Ok, looks like I missed an obvious step, but posting this in case anyone else does the same.
When upgrading to 1.7, my models became unmanaged (`managed = False`) - I had them as `True` before, but it seems it got reverted.
Removing that line (To default to True) and then running `makemigrations` immediately made a migra... |
Django 1.7 - makemigrations not detecting changes | 24,912,173 | 82 | 2014-07-23T13:44:54Z | 25,843,194 | 120 | 2014-09-15T07:55:53Z | [
"python",
"django",
"django-1.7",
"django-migrations"
] | As the title says, I can't seem to get migrations working.
The app was originally under 1.6, so I understand that migrations won't be there initially, and indeed if I run `python manage.py migrate` I get:
```
Operations to perform:
Synchronize unmigrated apps: myapp
Apply all migrations: admin, contenttypes, auth... | If you're changing over from an existing app you made in django 1.6, then you need to do one pre-step (as I found out) listed in the documentation:
> python manage.py makemigrations **your\_app\_label**
The documentation does not make it obvious that you need to add the app label to the command, as the first thing it... |
Django 1.7 - makemigrations not detecting changes | 24,912,173 | 82 | 2014-07-23T13:44:54Z | 28,104,429 | 11 | 2015-01-23T06:47:33Z | [
"python",
"django",
"django-1.7",
"django-migrations"
] | As the title says, I can't seem to get migrations working.
The app was originally under 1.6, so I understand that migrations won't be there initially, and indeed if I run `python manage.py migrate` I get:
```
Operations to perform:
Synchronize unmigrated apps: myapp
Apply all migrations: admin, contenttypes, auth... | My solution was not covered here so I'm posting it. I had been using `syncdb` for a projectâjust to get it up and running. Then when I tried to start using Django migrations, it faked them at first then would say it was 'OK' but nothing was happening to the database.
My solution was to just delete all the migration ... |
"The owner of this website has banned your access based on your browser's signature" ... on a url request in a python program | 24,913,699 | 7 | 2014-07-23T14:47:01Z | 24,914,742 | 8 | 2014-07-23T15:30:11Z | [
"javascript",
"python",
"cookies",
"browser",
"urllib"
] | When doing a simple request, on python (Entought Canopy to be precise), with urllib2, the server denies me access :
```
data = urllib.urlopen(an url i cannot post because of reputation, params)
print data.read()
```
Error:
```
Access denied | play.pokemonshowdown.com used CloudFlare to restrict access
The owner of... | What this site is "checking" is not your browser, it's the "user agent" - a string your client program (browser, Python script or whatever) *eventually* sends as a request header. You can specify another user agent, cf [Changing user agent on urllib2.urlopen](http://stackoverflow.com/questions/802134/changing-user-agen... |
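A minimal sketch of setting a custom user agent as this answer suggests, using the Python 3 `urllib.request` API rather than the question's Python 2 `urllib2`; no request is actually sent here, and the URL is a placeholder:

```python
import urllib.request

url = 'http://example.com/'
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})

# urllib normalizes header capitalization to 'User-agent' internally.
print(req.get_header('User-agent'))  # Mozilla/5.0
# To actually send it: urllib.request.urlopen(req)
```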
why (0.0006*100000)%10 is 10 | 24,914,298 | 3 | 2014-07-23T15:12:06Z | 24,914,506 | 9 | 2014-07-23T15:20:42Z | [
"python",
"c++",
"floating-point",
"floating-accuracy",
"modulus"
] | When I did (0.0006\*100000)%10 and (0.0003\*100000)%10 in Python it returned 9.999999999999993 (and similar), but the result should actually be 0.
Similarly in c++ fmod(0.0003\*100000,10) gives the value as 10. Can someone help me out with where I'm going wrong. | The closest IEEE 754 64-bit binary number to 0.0003 is 0.0002999999999999999737189393389513725196593441069126129150390625. The closest representable number to the result of multiplying it by 100000 is 29.999999999999996447286321199499070644378662109375.
There are a number of operations, such as floor and mod, that can... |
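The exact values this answer quotes can be inspected directly; `fractions.Fraction` shows the true binary value a float actually stores:

```python
from fractions import Fraction

# 0.0003 cannot be represented exactly in binary floating point;
# Fraction of a float shows the exact stored value.
print(Fraction(0.0003))
print(0.0003 * 100000)         # slightly less than 30.0
print((0.0003 * 100000) % 10)  # hence just under 10, not 0
```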
RQ - Empty & Delete Queues | 24,915,181 | 7 | 2014-07-23T15:48:40Z | 24,916,208 | 13 | 2014-07-23T16:39:48Z | [
"python",
"redis",
"python-rq"
] | I'm using [RQ](http://python-rq.org/), and I have a `failed` queue with thousands of items, and another `test` queue I created a while back for testing which is now empty and unused. I'm wondering how to remove all jobs from the `failed` queue, and delete the `test` queue altogether?
Apologies for the basic question, ... | # Cleanup using rq
RQ offers methods to make any queue empty:
```
>>> from redis import Redis
>>> from rq import Queue
>>> qfail = Queue("failed", connection=Redis())
>>> qfail.count
8
>>> qfail.empty()
8L
>>> qfail.count
0
```
You can do the same for `test` queue, if you have it still present.
# Cleanup using `rq-... |
Is there a way to use regular expressions in the replacement string in re.sub() in Python? | 24,915,396 | 3 | 2014-07-23T15:59:07Z | 24,915,424 | 8 | 2014-07-23T16:00:24Z | [
"python",
"regex",
"string",
"replace",
"substring"
] | In Python in the `re` module there is the following function:
> re.sub(pattern, repl, string, count=0, flags=0): Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl. If the pattern isn't found, string is returned unchanged.
I've found it c... | Place `\d+` in a capture group `(...)` and then use `\1` to refer to it:
```
>>> import re
>>> re.sub('[a-z]*(\d+)', r'lion\1', 'zebra432')
'lion432'
>>>
>>> # You can also refer to more than one capture group
>>> re.sub('([a-z]*)(\d+)', r'\1lion\2', 'zebra432')
'zebralion432'
>>>
```
From the [docs](https://docs.pyt... |
Error while installing scrapy | 24,917,657 | 7 | 2014-07-23T18:01:20Z | 26,449,293 | 17 | 2014-10-19T09:49:01Z | [
"python",
"django",
"scrapy"
] | I am trying to install scrapy, it gives the following error:
```
Downloading/unpacking cryptography>=0.2.1 (from pyOpenSSL->Scrapy)
Running setup.py egg_info for package cryptography
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to... | I faced the same problem.
I installed libffi-dev with the following command
```
sudo apt-get install libffi-dev
```
After that, I installed scrapy. There was no issue. |
Traits Enum values | 24,918,647 | 3 | 2014-07-23T18:55:17Z | 24,919,778 | 7 | 2014-07-23T19:56:15Z | [
"python",
"enthought"
] | I'm having trouble setting an Enum value to one of its possibilities in a class...
If I have in an iPython window:
```
eTest = Enum('zero', 'one', 'two', 'three')
```
I can do:
```
eTest.value = eTest.values[2]
```
and `print eTest.value` gives me the correct answer: two
I'm trying the same thing in a python class... | You do not work with `Enum` objects like this. The `Enum` object is just a kind of declaration that tells the `HasTraits` class that has one of these to make an instance attribute that does a particular kind of validation. This instance attribute will *not* be an `Enum` object: it will be one of the enumerated values. ... |
Convert PILLOW image into StringIO | 24,920,728 | 6 | 2014-07-23T20:52:42Z | 24,920,879 | 13 | 2014-07-23T21:01:05Z | [
"python",
"python-imaging-library",
"python-2.x"
] | I'm writing a program which can receive images in a variety of common image formats but needs to examine them all in one consistent format. It doesn't really matter what image format, mainly just that all of them are the same. Since I need to convert the image format and then continue working with the image, I don't wa... | There are *two types* of `cStringIO.StringIO()` objects depending on how the instance was created; one for just reading, the other for writing. You cannot interchange these.
When you create an *empty* `cStringIO.StringIO()` object, you really get a `cStringIO.StringO` (note the `O` at the end) class, it can only act a... |
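The `StringI`/`StringO` split described above is specific to Python 2's `cStringIO`; in Python 3, `io.BytesIO` is a single read/write class. A sketch with placeholder bytes standing in for real image data (with Pillow you would write via something like `image.save(buf, format='PNG')`):

```python
import io

# io.BytesIO is both readable and writable -- no StringI/StringO split.
buf = io.BytesIO()
buf.write(b'\x89PNG fake image bytes')  # placeholder for real image data
buf.seek(0)                             # rewind before reading back
print(buf.read())
```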
Installing Anaconda on Amazon Elastic Beanstalk | 24,921,415 | 4 | 2014-07-23T21:32:12Z | 24,936,118 | 8 | 2014-07-24T14:12:54Z | [
"python",
"amazon-web-services",
"amazon-ec2",
"elastic-beanstalk",
"anaconda"
] | I've added deploy commands to my Elastic Beanstalk deployment which download the Anaconda installer, and install it into `/anaconda`. Everything goes well, but I cannot seem to correctly modify the PATH of my instance to include `/anaconda/bin` as suggested by the Anaconda installation page. If I SSH into an instance a... | Found the answer: `import pandas` was failing because `matplotlib` was failing to initialize, because it was trying to get the current user's home directory. Since the application is run via WSGI, the HOME variable is set to `/home/wsgi` *but this directory doesn't exist*. So, creating this directory via deployment com... |
Django: Check if settings variable is set | 24,923,505 | 10 | 2014-07-24T00:59:30Z | 24,923,694 | 20 | 2014-07-24T01:26:14Z | [
"python",
"django",
"django-settings"
] | Here is an example of what I'm trying to achieve. The desired effect is that a particular feature should take effect if and only if its relevant setting exists is defined. Otherwise, the feature should be disabled.
settings.py:
```
SOME_VARIABLE = 'some-string'
ANOTHER_VARIABLE = 'another-string'
```
random\_code\_f... | It seems you've done it the correct way: **import the settings module and check.**
And you can try to use:
`if hasattr(settings, 'ANOTHER_VARIABLE'):`
instead of:
`if settings.is_defined('ANOTHER_VARIABLE'):`
I looked for the documentation, hope this might help:
~~<https://docs.djangoproject.com/en/1.6/topics/setting... |
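The `hasattr` check works on any object; `getattr` with a default is a common one-step alternative. A Django-free sketch, with `types.SimpleNamespace` standing in for the settings module:

```python
from types import SimpleNamespace

# Stand-in for django.conf.settings in this sketch.
settings = SimpleNamespace(SOME_VARIABLE='some-string')

if hasattr(settings, 'ANOTHER_VARIABLE'):
    print('feature enabled')
else:
    print('feature disabled')

# Equivalent one-liner: read the value, defaulting to None when unset.
value = getattr(settings, 'ANOTHER_VARIABLE', None)
print(value)  # None
```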
Logistic Regression function on sklearn | 24,935,415 | 3 | 2014-07-24T13:41:56Z | 25,044,203 | 7 | 2014-07-30T18:25:34Z | [
"python",
"numpy",
"machine-learning",
"scipy",
"scikit-learn"
] | I am learning Logistic Regression from sklearn and came across this : <http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression>
I have created an implementation which shows me the accuracy scores for training and testing. However it is ver... | I recently started studying LR myself; I still don't get many steps of the derivation, but I think I understand which formulas are being used.
First of all let's assume that you are using the latest version of scikit-learn and that the solver being used is `solver='lbfgs'` (which is the default I believe).
The code is... |
Django Templates and MongoDB _id | 24,936,601 | 4 | 2014-07-24T14:32:35Z | 24,936,772 | 8 | 2014-07-24T14:40:03Z | [
"python",
"django",
"mongodb",
"django-templates"
] | ```
Variables and attributes may not begin with underscores: 'value._id'
```
How does one reference the `_id` of an item gotten from MongoDB in Django Templates? | [Custom template filter](https://docs.djangoproject.com/en/dev/howto/custom-template-tags/#writing-custom-template-filters) would help:
```
from django import template
register = template.Library()
@register.filter(name='private')
def private(obj, attribute):
return getattr(obj, attribute)
```
You can use it th... |
ImportError: No module named flask.ext.httpauth | 24,937,035 | 7 | 2014-07-24T14:49:39Z | 27,674,334 | 11 | 2014-12-28T07:00:00Z | [
"python",
"rest",
"flask",
"http-basic-authentication"
] | I am trying to initiate a Python server which uses the Flask framework. I'm having a hard time setting up the Flask extension HTTPBasicAuth. I'm not sure how I can get this extension set up properly. Please help!
CMD output:
> C:\Dev Workspaces\RestTutorial\REST-tutorial-master>python
> rest-server.py Traceback (most... | Probably too late to answer. But putting it here for others.
Only installing Flask will not install httpauth; you have to install it explicitly. Run the following command to install it globally:
```
$ pip install flask-httpauth
```
or
```
$ flask/bin/pip install flask-httpauth
```
where flask/bin is your virtual envir... |
passing functions as arguments in other functions python | 24,941,530 | 2 | 2014-07-24T18:40:00Z | 24,941,586 | 11 | 2014-07-24T18:43:54Z | [
"python",
"function"
] | I have these functions and I'm getting errors with the do\_twice function, but I'm having problems debugging it
```
#!/usr/bin/python
#functins exercise 3.4
def do_twice(f):
f()
f()
def do_four(f):
do_twice(f)
do_twice(f)
def print_twice(str):
print str + 'one'
print str + 'two'
str = ... | The problem is that the expression `print_twice(str)` is evaluated by calling `print_twice` with `str` and getting the result that you returned,\* and that result is what you're passing as the argument to `do_four`.
What you need to pass to `do_four` is a function that, when called, calls `print_twice(str)`.
You can ... |
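The fix this answer points toward, sketched in full: pass a callable (here a lambda) instead of the result of calling `print_twice`. The parameter is renamed from `str` to `text` to avoid shadowing the built-in:

```python
def do_twice(f):
    f()
    f()

def do_four(f):
    do_twice(f)
    do_twice(f)

def print_twice(text):
    print(text + 'one')
    print(text + 'two')

# Wrong: do_four(print_twice('spam')) passes print_twice's *return value* (None).
# Right: pass a callable that makes the call only when invoked.
do_four(lambda: print_twice('spam'))
```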
Is 'file' a keyword in python? | 24,942,358 | 8 | 2014-07-24T19:27:45Z | 24,942,363 | 18 | 2014-07-24T19:28:03Z | [
"python",
"keyword"
] | Is `file` a keyword in python?
I've seen some code using the keyword `file` just fine, while others have suggested not to use it and my editor is color coding it as a keyword. | No, `file` is a builtin, not a keyword:
```
>>> import keyword
>>> keyword.iskeyword('file')
False
>>> import __builtin__
>>> hasattr(__builtin__, 'file')
True
```
It can be seen as an alias for `open()`, but it has been removed from Python 3, as the new [`io` framework](https://docs.python.org/3/library/io.html) rep... |
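The same check on Python 3, where `file` has been removed from the builtins entirely (on Python 2, the second line would print `True` via `__builtin__`):

```python
import builtins
import keyword

print(keyword.iskeyword('file'))  # False -- not a keyword in Python 2 or 3
print(hasattr(builtins, 'file'))  # False on Python 3
```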
matplotlib: Change grid interval and specify tick labels | 24,943,991 | 11 | 2014-07-24T21:07:48Z | 24,953,575 | 34 | 2014-07-25T10:25:13Z | [
"python",
"matplotlib",
"plot",
"grid",
"label"
] | I am trying to plot counts in gridded plots but I am not being able to figure out how I go about it. I want to:
(1) have dotted grids at an interval of 5
(2) have major tick labels only every 20
(3) I want the ticks to be outside the plot
(4) have "counts" inside those grids
I have checked for potential duplicates... | There are several problems in your code.
First the big ones:
1. you are creating a new figure and a new axes in every iteration of your loop ->
put `fig = plt.figure` and `ax = fig.add_subplot(1,1,1)` outside of the loop.
2. Don't use the Locators, call the functions `ax.set_xticks()` and `ax.grid()` with the corr... |
Removing \r\n from a Python list after importing with readlines | 24,946,640 | 5 | 2014-07-25T01:30:34Z | 24,946,720 | 10 | 2014-07-25T01:38:55Z | [
"python",
"readlines"
] | I have saved a list of ticker symbols into a text file as follows:
```
MMM
ABT
ABBV
ANF
....
```
Then I use readlines to put the symbols into a Python list:
```
stocks = open(textfile).readlines()
```
However, when I look at the list in it contains Windows end-of-line delimiter which I do not want:
```
list: ['MMM... | That's basically how `readlines` works. You could post-process it:
```
stocks = [x.rstrip() for x in stocks]
```
But I prefer not using `readlines` at all if I don't want EOL character(s), instead doing:
```
stocks = open(textfile).read().splitlines()
```
Or even better:
```
with open(textfile) as f:
stocks = ... |
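The variants from this answer, runnable side by side; an `io.StringIO` with Windows line endings stands in for the ticker file so the sketch is self-contained:

```python
import io

def make_file():
    # Simulates a file saved with Windows line endings.
    return io.StringIO('MMM\r\nABT\r\nABBV\r\nANF\r\n')

# 1. readlines + post-processing
stocks = [line.rstrip() for line in make_file().readlines()]
print(stocks)

# 2. read + splitlines (treats \n, \r\n, and \r uniformly as line ends)
stocks2 = make_file().read().splitlines()
print(stocks2)

assert stocks == stocks2 == ['MMM', 'ABT', 'ABBV', 'ANF']
```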
Import error for Oauth | 24,954,974 | 7 | 2014-07-25T10:37:02Z | 24,955,127 | 13 | 2014-07-25T11:50:34Z | [
"python",
"python-2.7",
"windows-7",
"python-requests"
] | I am using *Python* on *Windows*. The same code was working fine before; now, after updating the program, I get the error **ImportError: cannot import name to\_native\_string**. Can you help me understand why this error is occurring?
```
Traceback (most recent call last):
File "C:\Users\inla\De... | Check your installed `requests` version.
`requests.utils.to_native_string` is available since `requests 2.0.0`.
Upgrading `requests` to latest version will solve your problem.
---
```
C:\Users\falsetru>pip install requests==1.2.3
Downloading/unpacking requests==1.2.3
...
Successfully installed requests
Cleaning up... |
Read elevation using gdal python from geotiff | 24,956,653 | 3 | 2014-07-25T13:16:22Z | 24,957,068 | 18 | 2014-07-25T13:37:22Z | [
"python",
"gdal",
"geotiff"
] | I am loading a geotiff file using GDAL. I have managed to read the coordinates X,Y but not the elevation.
Has anyone worked on a similar case before?
Regards, | If you'd like to read all of the elevation values into a numpy array, you'd typically do something like this:
```
from osgeo import gdal
gdal.UseExceptions()
ds = gdal.Open('test_data.tif')
band = ds.GetRasterBand(1)
elevation = band.ReadAsArray()
print elevation.shape
print elevation
```
`elevation` will be a 2D ... |
get key by value in dictionary with same value in python? | 24,958,010 | 4 | 2014-07-25T14:24:45Z | 24,958,057 | 9 | 2014-07-25T14:27:08Z | [
"python",
"dictionary"
] | Assume a dictionary
```
d={'a':1,'b':2,'c':1}
```
When I use
```
d.keys()[d.values(). index(1)]
```
I get `'a'`, but I want to get `'c'` as well, since the value of `'c'` is also 1. How can I do that? | You can use a list comprehension, like this
```
print [key for key in d if d[key] == 1]
```
It iterates through the keys of the dictionary and checks if the value is `1`. If the value is 1, it adds the corresponding key to the list.
Alternatively you can use `dict.iteritems()` in Python 2.7, like this
```
print [key... |
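The list comprehension from this answer in Python 3 syntax, plus an `items()` variant that avoids the repeated key lookup:

```python
d = {'a': 1, 'b': 2, 'c': 1}

# All keys whose value is 1 (dict order is insertion order in Python 3.7+).
keys = [key for key in d if d[key] == 1]
print(keys)  # ['a', 'c']

# Equivalent with items(), avoiding the repeated lookup:
keys2 = [key for key, value in d.items() if value == 1]
print(keys2)  # ['a', 'c']
```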
Bulk package updates using Conda [Anaconda] | 24,958,833 | 48 | 2014-07-25T15:06:50Z | 24,965,191 | 69 | 2014-07-25T22:05:17Z | [
"python",
"anaconda",
"conda"
] | I've been playing around with both Anaconda and Canopy. While I love the Spyder, object inspector, etc. integration in Anaconda, I admire the graphical package manager and the ability to perform bulk updates in Canopy.
So is there a way (using conda update) that I can list outdated packages and select or bulk update (compat... | You want `conda update --all`.
`conda search --outdated` will show outdated packages, and `conda update --all` will update them (note that the latter will not update you from Python 2 to Python 3, but the former will show Python as being outdated if you do use Python 2). |
Python: How do I format numbers for a fixed width? | 24,960,235 | 4 | 2014-07-25T16:22:52Z | 24,960,300 | 11 | 2014-07-25T16:27:15Z | [
"python",
"string",
"numbers",
"fixed-width"
] | let's say
```
numbers = [ 0.7653, 10.2, 100.2325, 500.9874 ]
```
I'd like to output the numbers with a fixed width by varying the number of decimal places to get an output like this:
```
0.7653
10.200
100.23
500.98
```
is there an easy way to do this? I've been trying with various `%f` and `%d` configurations with ... | Combining two [`str.format`](https://docs.python.org/3/library/stdtypes.html#str.format) / [`format`](https://docs.python.org/3/library/functions.html#format) calls:
```
numbers = [ 0.7653, 10.2, 100.2325, 500.9874 ]
>>> for n in numbers:
... print('{:.6s}'.format('{:0.4f}'.format(n)))
... # OR format(format(... |
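The nested-format trick above, completed and checked against the desired output: format to four decimal places first, then truncate the resulting string to six characters:

```python
numbers = [0.7653, 10.2, 100.2325, 500.9874]

# Inner format fixes the decimals; outer '.6s' truncates to a 6-char width.
fixed = ['{:.6s}'.format('{:0.4f}'.format(n)) for n in numbers]
for s in fixed:
    print(s)
# 0.7653
# 10.200
# 100.23
# 500.98
```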
How can I pass parameters to on_key in fig.canvas.mpl_connect('key_press_event', on_key)? | 24,960,910 | 3 | 2014-07-25T17:05:33Z | 24,960,937 | 9 | 2014-07-25T17:07:12Z | [
"python",
"matplotlib"
] | I have a function
```
def on_key(event):
```
Which I call from
```
fig.canvas.mpl_connect('key_press_event', on_key)
```
I would like to pass the parameters plt1, plt2, plt3 to on\_key...
how can I do this? | Probably
```
def on_key(event, arg1, arg2, arg3):
```
and
```
fig.canvas.mpl_connect('key_press_event', lambda event: on_key(event, plt1, plt2, plt3))
```
or as list
```
def on_key(event, args_list):
```
and
```
fig.canvas.mpl_connect('key_press_event', lambda event: on_key(event, [plt1, plt2, plt3]))
``` |
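`functools.partial` is a common alternative to the lambda shown above. A matplotlib-free sketch with a toy dispatcher, since `mpl_connect` only needs a one-argument callable; the stub and the plot stand-ins are illustrative, not matplotlib API:

```python
from functools import partial

def on_key(event, plots):
    print('key', event, 'with', len(plots), 'plots')

# Stand-in for fig.canvas.mpl_connect: stores the callback for later dispatch.
handlers = []
def mpl_connect_stub(name, callback):
    handlers.append(callback)

plt1, plt2, plt3 = 'p1', 'p2', 'p3'
mpl_connect_stub('key_press_event', partial(on_key, plots=[plt1, plt2, plt3]))

handlers[0]('x')  # dispatch a fake event
```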
Why is using a Python generator much slower to traverse binary tree than not? | 24,962,093 | 4 | 2014-07-25T18:17:29Z | 24,962,495 | 7 | 2014-07-25T18:43:13Z | [
"python",
"recursion",
"generator",
"pypy"
] | I've got a binary tree, where the nodes interact with data. I initially implemented a standard post order recursive traversal.
```
def visit_rec(self, node, data):
if node:
self.visit_rec(node.left, data)
self.visit_rec(node.right, data)
node.do_stuff(data)
```
I thought I could improve i... | On PyPy, function calls are much more highly optimized than generators or iterators.
There are many things that have different performance characteristics in PyPy (for example, PyPy's itertools.islice() performs abysmally).
You're doing the right thing by measuring the performance to see which way is fastest.
Also ... |
Sublime text3 and virtualenvs | 24,963,030 | 18 | 2014-07-25T19:18:41Z | 25,002,696 | 33 | 2014-07-28T19:19:56Z | [
"python",
"sublimetext3",
"virtualenvwrapper"
] | I'm totally new to Sublime Text 3, but I couldn't find anything helpful for my problem...
I have different virtualenvs (made with virtualenvwrapper) and I'd like to be able to specify which venv to use with each project.
Since I'm using the SublimeREPL plugin to have custom builds, how can I specify which Python installation to... | Hopefully this is along the lines you are imagining. I attempted to simplify my solution and remove some things you likely do not need.
The advantages of this method are:
* Single button press to launch a SublimeREPL with correct interpreter *and* run a file in it if desired.
* After setting the interpreter, no chang... |
Serialize SQLAlchemy Objects that have children with multiple children | 24,963,140 | 2 | 2014-07-25T19:26:07Z | 26,458,190 | 7 | 2014-10-20T03:28:51Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy",
"marshmallow"
] | I have an SQLAlchemy object that I am serializing with [marshmallow](https://marshmallow.readthedocs.org/en/latest/index.html).
The object has N likes and N comments. It looks like this:
```
class Parent():
__tablename__ = 'parent'
title = Column(db.String(100), unique=True, nullable=False)
description ... | > Is there a way to tell marshmallow that it should also expect many `<Comment>` or `<Like>`?
Yes. You can also pass `many=True` to `Nested` fields.
```
class ParentSerializer(Serializer):
comments = fields.Nested(CommentSerializer, many=True)
likes = fields.Nested(LikeSerializer, many=True)
``` |
import pyttsx works in python 2.7, but not in python3 | 24,963,638 | 3 | 2014-07-25T20:03:43Z | 26,613,255 | 7 | 2014-10-28T16:15:47Z | [
"python",
"python-3.x",
"raspberry-pi"
] | **Question**: why is python3 unable to find the engine module when importing pyttsx?
**Details**:
I'm doing this on a raspberry pi with Raspbian Wheezy
Under python 2.7, the following works:
```
>>> import pyttsx
```
Under python3, the following happens:
```
>>> import pyttsx
Traceback (etc...)
File "<stdin>", l... | I attempted to install pyttsx on Python 3.4 (on Windows). Here's what I discovered:
The [pyttsx found on PyPi](https://pypi.python.org/pypi/pyttsx) was developed by [Peter Parente on GitHub](https://github.com/parente/pyttsx).
Parente has abandoned further development, and never ported it to Python 3. I cannot even g... |
python curses tty screen blink | 24,964,940 | 2 | 2014-07-25T21:41:34Z | 24,966,639 | 7 | 2014-07-26T01:30:23Z | [
"python",
"curses",
"tty"
] | I'm writing a python curses game (<https://github.com/pankshok/xoinvader>).
I found a problem: in a terminal emulator it works fine, but in a tty the screen blinks.
I tried to use curses.flash(), but it got even worse.
For example, the screen field:
```
self.screen = curses.newwin(80, 24, 0, 0)
```
Main loop:
```
def loop(self... | Don't call `clear` to clear the screen, use `erase` instead. Using `clear` sets a flag so that when you call `refresh` the first thing it does is clear the screen of the terminal. This is what is causing the terminal's screen to appear to blink. The user sees the old screen, then a completely blank screen, then your ne... |
Python hasattr vs getattr | 24,971,061 | 5 | 2014-07-26T12:32:06Z | 24,971,134 | 9 | 2014-07-26T12:43:01Z | [
"python",
"python-2.7",
"python-3.x",
"cpython"
] | I have been reading lately some [tweets](https://twitter.com/raymondh/status/492554332764520448) and the [python documentation](https://docs.python.org/3/library/functions.html#hasattr) about hasattr and it says:
> hasattr(object, name)
>
> > The arguments are an object and a string. The result is True if the string i... | The documentation does not encourage anything; it just states the obvious. `hasattr` is implemented as such, and throwing an `AttributeError` from a property getter can make it look like the attribute does not exist. This is an important detail, and that is why it is explicitly stated in the documentation. Co...
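A runnable illustration of the detail this answer stresses: an `AttributeError` escaping a property getter makes `hasattr` report the attribute as missing (on Python 3, `hasattr` suppresses only `AttributeError`):

```python
class Flaky:
    @property
    def broken(self):
        # Any AttributeError escaping a getter hides the attribute from hasattr.
        raise AttributeError('internal lookup failed')

obj = Flaky()
print(hasattr(obj, 'broken'))  # False, even though the property is defined
print('broken' in dir(obj))    # True -- it does exist on the class
```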
Python - Parse a string and convert it into list | 24,971,593 | 3 | 2014-07-26T13:38:52Z | 24,971,624 | 12 | 2014-07-26T13:42:36Z | [
"python",
"string",
"list",
"parsing"
] | I've a string
```
a = "sequence=1,2,3,4&counter=3,4,5,6"
```
How do I convert it into lists, i.e.
```
sequence = [1,2,3,4]
counter = [3,4,5,6]
``` | Use [`urlparse.parse_qs`](https://docs.python.org/2/library/urlparse.html#urlparse.parse_qs) ([`urllib.parse.parse_qs`](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.parse_qs) in Python 3.x) to parse query string:
```
>>> import urlparse
>>> a = "sequence=1,2,3,4&counter=3,4,5,6"
>>> {key: [int(x) f... |
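Completing the truncated snippet above, a Python 3 sketch of the same approach (the import path is `urllib.parse` rather than Python 2's `urlparse`; the dict comprehension splits each comma-separated value):

```python
from urllib.parse import parse_qs

a = "sequence=1,2,3,4&counter=3,4,5,6"

# parse_qs maps each key to a list of raw string values
raw = parse_qs(a)   # {'sequence': ['1,2,3,4'], 'counter': ['3,4,5,6']}

result = {key: [int(x) for x in values[0].split(",")]
          for key, values in raw.items()}
print(result)       # {'sequence': [1, 2, 3, 4], 'counter': [3, 4, 5, 6]}
```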
Unit test script returns exit code = 0 even if tests fail | 24,972,098 | 6 | 2014-07-26T14:35:24Z | 24,972,157 | 13 | 2014-07-26T14:41:21Z | [
"python",
"unit-testing",
"python-3.x",
"exit-code"
] | My testing script looks as follows:
```
import os
import sys
from unittest import defaultTestLoader as loader, TextTestRunner
path_to_my_project = os.path.dirname(os.path.abspath(__file__)) + '/../'
sys.path.insert(0, path_to_my_project)
suite = loader.discover('my_project')
runner = TextTestRunner()
runner.run(suit... | The code is not using [`unittest.main`](https://docs.python.org/3/library/unittest.html#unittest.main). You need to check the result using [`TestResult.wasSuccessful`](https://docs.python.org/3/library/unittest.html#unittest.TestResult.wasSuccessful) and call [`sys.exit`](https://docs.python.org/3/library/sys.html#sys.... |
Interpolation over regular grid in Python | 24,978,052 | 13 | 2014-07-27T04:57:45Z | 24,983,256 | 26 | 2014-07-27T16:55:49Z | [
"python",
"numpy",
"interpolation",
"spatial-interpolation",
"kriging"
] | I have been struggling to interpolate the data for "empty" pixels in my 2D matrix. Basically, I understand (but not deeply) interpolation techniques such as Inverse Distance Weighting, Kriging, Bicubic etc. I don't know the starting point exactly (either in the statement of the problem or Python case).
**The problem de... | What is a sensible solution largely depends on what questions you're trying to answer with the interpolated pixels -- caveat emptor: extrapolating over missing data can lead to very misleading answers!
**Radial Basis Function Interpolation / Kernel Smoothing**
In terms of practical solutions available in Python, one ... |
Pandas - GroupBy and then Merge on original table | 24,980,437 | 9 | 2014-07-27T11:25:56Z | 24,980,809 | 19 | 2014-07-27T12:10:29Z | [
"python",
"python-2.7",
"pandas"
] | I'm trying to write a function to aggregate and perform various stats calcuations on a dataframe in Pandas and then merge it to the original dataframe however, I'm running to issues. This is code equivalent in SQL:
```
SELECT EID,
PCODE,
SUM(PVALUE) AS PVALUE,
SUM(SQRT(SC*EXP(SC-1))) AS SC,
... | By default, `groupby` output has the grouping columns as indices, not columns, which is why the merge is failing.
There are a couple of different ways to handle it; probably the easiest is using the `as_index` parameter when you define the groupby object.
```
po_grouped_df = poagg_df.groupby(['EID','PCODE'], as_index=F... |
Different std in pandas vs numpy | 24,984,178 | 7 | 2014-07-27T18:26:18Z | 24,984,205 | 14 | 2014-07-27T18:29:45Z | [
"python",
"numpy",
"pandas",
"precision"
] | The standard deviation differs between pandas and numpy. Why and which one is the correct one? (the relative difference is 3.5% which should not come from rounding, this is high in my opinion).
**Example**
```
import numpy as np
import pandas as pd
from StringIO import StringIO
a='''0.057411
0.024367
0.021247
-0.00... | In a nutshell, neither is "incorrect". Pandas uses the [unbiased estimator](http://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation) (`N-1` in the denominator), whereas Numpy by default does not.
To make them behave the same, pass `ddof=1` to [`numpy.std()`](http://docs.scipy.org/doc/numpy/reference/gen... |
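The same split between the two conventions exists in the standard library's `statistics` module, which makes them easy to compare even without NumPy installed (the data values are illustrative):

```python
import statistics

data = [0.057411, 0.024367, 0.021247, -0.001]

sample_sd = statistics.stdev(data)       # N-1 denominator (pandas' default)
population_sd = statistics.pstdev(data)  # N denominator (numpy's default)

# the unbiased (N-1) estimate is always the larger of the two
print(sample_sd > population_sd)  # True
```

For `n` points the two differ by a factor of `sqrt((n-1)/n)`, which is where the few-percent gap in the question comes from.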
Find the last substring after a character | 24,986,504 | 3 | 2014-07-27T23:01:00Z | 24,986,509 | 10 | 2014-07-27T23:01:36Z | [
"python",
"regex",
"string",
"substring"
] | I know many ways how to find a substring: from start index to end index, between characters etc., but I have a problem which I don't know how to solve:
I have a string like for example a path: `folder1/folder2/folder3/new_folder/image.jpg`
and the second path: `folder1/folder2/folder3/folder4/image2.png`
And from this... | Use [`os.path.basename()`](https://docs.python.org/2/library/os.path.html#os.path.basename) instead, and don't worry about the details.
`os.path.basename()` returns the filename portion of your path:
```
>>> import os.path
>>> os.path.basename('folder1/folder2/folder3/new_folder/image.jpg')
'image.jpg'
```
For a more g... |
Nested dictionary to multiindex dataframe where dictionary keys are column labels | 24,988,131 | 8 | 2014-07-28T03:43:02Z | 24,988,227 | 15 | 2014-07-28T03:58:45Z | [
"python",
"dictionary",
"pandas",
"dataframe",
"multi-index"
] | Say I have a dictionary that looks like this:
```
dictionary = {'A' : {'a': [1,2,3,4,5],
'b': [6,7,8,9,1]},
'B' : {'a': [2,3,4,5,6],
'b': [7,8,9,1,2]}}
```
and I want a dataframe that looks something like this:
```
A B
a b a b
0 1 6 2 7
1 2 7... | Pandas wants the MultiIndex values as tuples, not nested dicts. The simplest thing is to convert your dictionary to the right format before trying to pass it to DataFrame:
```
>>> reform = {(outerKey, innerKey): values for outerKey, innerDict in dictionary.iteritems() for innerKey, values in innerDict.iteritems()}
>>>... |
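The answer's comprehension uses Python 2's `iteritems()`; the Python 3 version of the same reshaping is shown below (pandas is not needed to build the tuple-keyed dict itself):

```python
dictionary = {'A': {'a': [1, 2, 3, 4, 5],
                    'b': [6, 7, 8, 9, 1]},
              'B': {'a': [2, 3, 4, 5, 6],
                    'b': [7, 8, 9, 1, 2]}}

# flatten the nesting into (outer, inner) tuple keys,
# which is the shape pandas expects for a MultiIndex
reform = {(outer, inner): values
          for outer, inner_dict in dictionary.items()
          for inner, values in inner_dict.items()}

print(sorted(reform))  # [('A', 'a'), ('A', 'b'), ('B', 'a'), ('B', 'b')]
```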
Define functions with too many arguments to abide by PEP8 standard | 24,988,162 | 3 | 2014-07-28T03:48:25Z | 24,988,191 | 11 | 2014-07-28T03:52:02Z | [
"python",
"pep8"
] | I have defined a function with a long list of arguments. The definition exceeds 80 characters and doesn't abide by PEP 8.
```
def my_function(argument_one, argument_two, argument_three, argument_four, argument_five):
```
What can be the best approach to avoid horizontal scrolling. | An example is given in PEP 8:
```
class Rectangle(Blob):
def __init__(self, width, height,
color='black', emphasis=None, highlight=0):
```
So that is the official answer. Personally I detest this approach, in which continuation lines have leading whitespace that doesn't correspond to any real in... |
How to draw vertical lines on a given plot in matplotlib? | 24,988,448 | 10 | 2014-07-28T04:29:41Z | 24,988,486 | 17 | 2014-07-28T04:34:36Z | [
"python",
"matplotlib"
] | Given a plot of signal in time representation, how to draw lines marking corresponding time index?
Specifically, given a signal plot with time index ranging from 0 to 2.6(s), I want to draw vertical red lines indicating corresponding time index for the list `[0.22058956, 0.33088437, 2.20589566]`, how can I do it? | The standard way to add vertical lines that will cover your entire plot window without you having to specify their actual height is `plt.axvline`
```
import matplotlib.pyplot as plt
plt.axvline(x=0.22058956)
plt.axvline(x=0.33088437)
plt.axvline(x=2.20589566)
```
OR
```
xcoords = [0.22058956, 0.33088437, 2.20589566... |
Python different behaviour with abstractmethod | 24,990,397 | 4 | 2014-07-28T07:29:38Z | 24,990,759 | 8 | 2014-07-28T07:55:02Z | [
"python",
"abstract-class"
] | I have two classes inheriting from the same parent `P`:
```
from abc import ABCMeta, abstractmethod
class P(object):
__metaclass__ = ABCMeta
@abstractmethod
def foo(self):
pass
class C(P):
pass
class D(tuple, P):
pass
```
The only difference is that `D` inherited from `tuple` and `P... | Abstract methods are tested for in the `object.__new__` method; when you inherit from `tuple`, which has its own `__new__` method, `object.__new__` is not called and the test for abstract methods is not made.
In other words, mixing abstract methods with *any* of the built-in immutable types will cause this problem.
T... |
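The effect the answer describes can be reproduced with stdlib-only code; a sketch using the Python 3 `ABC` base class (equivalent to the question's Python 2 `__metaclass__` form):

```python
from abc import ABC, abstractmethod

class P(ABC):
    @abstractmethod
    def foo(self):
        ...

class C(P):
    pass

class D(tuple, P):
    pass

try:
    C()                     # blocked: object.__new__ runs the abstract check
except TypeError as exc:
    print("C refused:", exc)

d = D()                     # allowed: tuple.__new__ never calls object.__new__
print(isinstance(d, P))     # True, an instance of an "abstract" class
```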
pyzmq missing when running ipython notebook | 24,995,438 | 32 | 2014-07-28T12:38:40Z | 25,588,720 | 53 | 2014-08-31T03:34:31Z | [
"python",
"pip",
"ipython",
"easy-install"
] | I can run iPython, but when I try to initiate a notebook I get the following error:
```
~ ipython notebook
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 8, in <module>
load_entry_point('ipython==2.1.0', 'console_scripts', 'ipython')()
File "/Library/Python/2.7/site-... | Arg. The *ipython* install is a little idiosyncratic. Here's what I had to do to resolve this:
```
$ pip uninstall ipython
$ pip install "ipython[all]"
```
The issue is that notebooks have their own set of dependencies, which aren't installed with `pip install ipython`. However, having installed *ipython*, pip doesn'... |
how redirect output to the file in subprocess.Popen | 24,996,352 | 4 | 2014-07-28T13:28:05Z | 24,996,460 | 7 | 2014-07-28T13:33:20Z | [
"python",
"stdout"
] | I tried this code to redirect standard output to a file:
```
subprocess.Popen('my command', cwd='my path', shell=True, stdout=stdout.txt, stderr=stdout.txt)
```
But got error: `NameError: name 'stdout' is not defined`
I use python version 2.5.2 | Open the file first and use `a` to append if you want to keep a record of all output/errors or use `w` to overwrite each time:
```
with open("stdout.txt","a+") as stdout:
subprocess.Popen('my command', cwd='my path', shell=True, stdout=stdout, stderr=stdout)
```
Using `with` will automatically close your file. |
Making a custom colormap using matplotlib in python | 24,997,926 | 3 | 2014-07-28T14:51:27Z | 25,000,108 | 10 | 2014-07-28T16:45:21Z | [
"python",
"matplotlib",
"heatmap",
"colorbar",
"colormap"
] | I have an image that I'm showing with matplotlib.

The image is generated by the following code:
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
labels = ['Name1', 'Name2', 'Name3', 'Name4', 'Name5', 'Name6']
data ... | There's more than one way to do this. In your case, it's easiest to use `LinearSegmentedColormap.from_list` and specify relative positions of colors as well as the colornames. (If you had evenly-spaced changes, you could skip the tuples and just do `from_list('my cmap', ['blue', 'white', 'red'])`.) You'll then need to ... |
Delete 0s that occur only at the beginning of a list | 24,998,530 | 2 | 2014-07-28T15:20:28Z | 24,998,560 | 9 | 2014-07-28T15:22:13Z | [
"python",
"list"
] | I have a list which has 0s at the beginning I want to delete. 0s do occur elsewhere in the data but I want to keep these.
```
l=[0, 0, 0, 0, 151, 199, 149, 147, 281, 133, 166, 162, 0, 353, 867, 1060, 525, 1031, 420, 0, 832, 1114, 869, 531, 546, 555, 520, 679, 715, 669, 555, 888, 605, 809, 0, 514]
```
needs to become:... | Use [`itertools.dropwhile()`](https://docs.python.org/2/library/itertools.html#itertools.dropwhile) to drop those zeros:
```
from itertools import dropwhile
import operator
l = list(dropwhile(operator.not_, l))
```
This will drop *just* the initial 0 values; or rather, all false-y values, using [`operator.not_()`](h... |
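If only leading *zeros* (rather than all leading falsy values) should be dropped, an explicit predicate avoids the `operator.not_` subtlety. A small sketch on a shortened version of the question's list:

```python
from itertools import dropwhile

l = [0, 0, 0, 0, 151, 199, 149, 147, 281, 133, 166, 162, 0, 353]

# drop elements while they equal 0; stop at the first non-zero value
trimmed = list(dropwhile(lambda v: v == 0, l))

print(trimmed[:3])   # [151, 199, 149]
print(0 in trimmed)  # True, interior zeros are preserved
```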
Why does str(KeyError) add extra quotes? | 24,998,968 | 6 | 2014-07-28T15:44:25Z | 24,999,035 | 10 | 2014-07-28T15:48:46Z | [
"python",
"exception",
"keyerror"
] | Why does the string representation of `KeyError` add extra quotes to the error message? All other built-in exceptions just return the error message string directly.
For example, the following code:
```
print str(LookupError("foo"))
print str(KeyError("foo"))
```
Produces the following output:
```
foo
'foo'
```
I h... | This is done so that you can detect `KeyError('')` properly. From the [`KeyError_str` function source](http://hg.python.org/cpython/file/bffa0b8a16e8/Objects/exceptions.c#l1270):
```
/* If args is a tuple of exactly one item, apply repr to args[0].
This is done so that e.g. the exception raised by {}[''] prints
... |
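The behaviour, and why it matters for an empty-string key, in a short Python 3 check:

```python
print(str(LookupError("foo")))  # foo
print(str(KeyError("foo")))     # 'foo'  (repr of the key, quotes included)

# without the repr, a KeyError for the empty string would print as nothing:
print(str(KeyError("")))        # ''
```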
Check a python list if any of its elements is string type | 24,999,073 | 2 | 2014-07-28T15:50:26Z | 24,999,100 | 7 | 2014-07-28T15:51:57Z | [
"python",
"exception",
"typechecking"
] | I am trying to raise a `TypeError` if a list contains an element of string data type when a mixed-type list is given. I was able to do this with the code below, but I wonder if there is a better way to achieve the same result.
```
numbers = [1, "two", 3, 4, 5]
myStrList = [x for x in numbers if isinstance(x, str)]
if l... | You can use [`any`](https://docs.python.org/3/library/functions.html#any) and a [generator expression](https://docs.python.org/3/reference/expressions.html#grammar-token-generator_expression):
```
numbers = [1, "two", 3, 4, 5]
if any(isinstance(x, str) for x in numbers):
raise TypeError
else:
#do something
```... |
Python assignment destructuring | 24,999,875 | 14 | 2014-07-28T16:32:55Z | 24,999,958 | 17 | 2014-07-28T16:38:09Z | [
"python",
"destructuring"
] | These three expressions seem to be equivalent:
```
a,b,c = line.split()
(a,b,c) = line.split()
[a,b,c] = line.split()
```
Do they compile to the same code?
Which one is more pythonic? | According to `dis`, they all get compiled to the same bytecode:
```
>>> def f1(line):
... a,b,c = line.split()
...
>>> def f2(line):
... (a,b,c) = line.split()
...
>>> def f3(line):
... [a,b,c] = line.split()
...
>>> import dis
>>> dis.dis(f1)
2 0 LOAD_FAST 0 (line)
3 LOA... |
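The equivalence can also be checked directly by comparing the compiled code objects; this is CPython-specific behaviour, but a quick sketch confirms it:

```python
def f1(line):
    a, b, c = line.split()
    return a, b, c

def f2(line):
    (a, b, c) = line.split()
    return a, b, c

def f3(line):
    [a, b, c] = line.split()
    return a, b, c

# in CPython, all three compile to identical bytecode
print(f1.__code__.co_code == f2.__code__.co_code == f3.__code__.co_code)  # True
print(f1("x y z"))  # ('x', 'y', 'z')
```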
I am getting the error 'redefined-outer-name' | 24,999,937 | 5 | 2014-07-28T16:37:26Z | 25,000,042 | 7 | 2014-07-28T16:41:44Z | [
"python"
] | When running my lint, I am getting the error below:
```
Redefining name 'tmp_file' from outer scope (line 38) (redefined-outer-name)
```
Here is my snippet of code in that line:
```
tmp_file = open('../build/' + me_filename + '.js','w')
``` | That happens because you have a local name identical to a global name. The local name takes precedence, of course, but it hides the global name, makes it inaccessible, and causes confusion to the reader.
**Solution**
Change the local name. Or maybe the global name, whatever makes more sense. But note that the global na... |
How to cast tuple into namedtuple? | 25,000,159 | 5 | 2014-07-28T16:48:56Z | 25,000,171 | 10 | 2014-07-28T16:49:39Z | [
"python",
"python-3.x",
"namedtuple"
] | I'd like to use namedtuples internally, but I want to preserve compatibility with users that feed me an ordinary tuple.
```
from collections import namedtuple
tuplePi=(1,3.14,"pi") #Normal tuple
Record=namedtuple("MyNamedTuple", ["ID", "Value", "Name"])
namedE=Record(2, 2.79, "e") #Named tuple
namedPi=Record(tuple... | You can use the `*args` call syntax:
```
namedPi = Record(*tuplePi)
```
This passes in each element of the `tuplePi` sequence as a separate argument.
You can also use the [`namedtuple._make()` class method](https://docs.python.org/2/library/collections.html#collections.somenamedtuple._make) to turn any sequence into... |
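Both conversion routes side by side, as a runnable sketch using the question's field names:

```python
from collections import namedtuple

Record = namedtuple("Record", ["ID", "Value", "Name"])
tuple_pi = (1, 3.14, "pi")

named_a = Record(*tuple_pi)       # unpack the tuple into positional args
named_b = Record._make(tuple_pi)  # classmethod accepting any iterable

print(named_a.Name)                    # pi
print(named_a == named_b == tuple_pi)  # True, namedtuples compare as tuples
```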
Numpy's 'linalg.solve' and 'linalg.lstsq' not giving same answer as Matlab's '\' or mldivide | 25,001,753 | 3 | 2014-07-28T18:23:11Z | 25,004,588 | 11 | 2014-07-28T21:22:56Z | [
"python",
"matlab",
"numpy"
] | I'm trying to implement the least squares curve fitting algorithm on Python, having already written it on Matlab. However, I'm having trouble getting the right transform matrix, and the problem seems to be happening at the solve step. (Edit: My transform matrix is incredibly accurate with Matlab, but completely off wit... | The interesting thing is that you will get quite different results with `np.linalg.lstsq` and `np.linalg.solve`.
```
x1 = np.linalg.lstsq(A_star, B_star)[0]
x2 = np.linalg.solve(A_star, B_star)
```
Both should offer a solution for the equation Ax = B. However, these give two quite different arrays:
```
In [37]: x1 ... |
rethrowing python exception. Which to catch? | 25,001,971 | 9 | 2014-07-28T18:36:31Z | 25,002,034 | 15 | 2014-07-28T18:40:22Z | [
"python",
"exception",
"throw",
"rethrow"
] | I'm learning to use python. I just came across this article:
<http://nedbatchelder.com/blog/200711/rethrowing_exceptions_in_python.html>
It describes rethrowing exceptions in python, like this:
```
try:
do_something_dangerous()
except:
do_something_to_apologize()
raise
```
Since you re-throw the exception... | Try it and see:
```
def failure():
raise ValueError, "Real error"
def apologize():
raise TypeError, "Apology error"
try:
failure()
except ValueError:
apologize()
raise
```
The result:
```
Traceback (most recent call last):
File "<pyshell#14>", line 10, in <module>
apologize()
File "<pys... |
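The answer above uses Python 2 `raise` syntax; the same experiment in Python 3 shows the apology's exception propagating while the original survives as `__context__` (chained automatically). A sketch:

```python
def failure():
    raise ValueError("real error")

def apologize():
    raise TypeError("apology error")

try:
    try:
        failure()
    except ValueError:
        apologize()  # raises here, so the bare `raise` below is never reached
        raise
except TypeError as caught:
    err = caught

print(type(err).__name__)              # TypeError
print(type(err.__context__).__name__)  # ValueError
```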
ArgumentError: relationship expects a class or mapper argument | 25,002,620 | 4 | 2014-07-28T19:15:28Z | 25,002,955 | 7 | 2014-07-28T19:36:05Z | [
"python",
"sqlalchemy",
"pyramid"
] | I am getting this strange error, and I'm saying strange because I made a change to an unrelated table and now I am getting this error.
I am trying to query my `tDevice` table which looks like this:
```
class TDevice(Base):
__tablename__ = 'tDevice'
ixDevice = Column(Integer, primary_key=True)
ixDeviceTyp... | not happy with myself since it's such a dumb mistake but here is my culprit:
```
report_type = relationship('tReportType',
uselist=False,
backref=backref('report'))
```
should be:
```
report_type = relationship('TReportType',
uselist=Fa... |
Python doesn't find MagickWand Libraries (despite correct location?) | 25,003,117 | 6 | 2014-07-28T19:46:08Z | 26,088,037 | 10 | 2014-09-28T18:18:42Z | [
"python",
"imagemagick",
"python-3.4",
"wand"
] | I wanted to install the Python ImageMagick API wand and followed this site:
<http://docs.wand-py.org/en/latest/guide/install.html#install-imagemagick-on-windows>
However, when running a very simple test:
```
from wand.image import Image
```
I get the following output:
> Traceback (most recent call last):
>
> File ... | I solved my own problem after THINKING about it ;)
I had the 32bit version of Python and the 64bit version of ImageMagick... after uninstalling ImageMagick and installing the 32bit version everything is fine.
Maybe this helps someone who runs into the same problem and comes here via Google ^^ |
How to run a specific test cases from a test suite using Robot Framework | 25,005,277 | 5 | 2014-07-28T22:20:25Z | 25,005,371 | 14 | 2014-07-28T22:28:33Z | [
"python",
"robotframework"
] | I am new to Robot and learning to write logic and test cases.
I have a test suite "mytestsuite.robot" which has a lot of test cases. I have a couple of errors in one of my test cases.
How do I run just that specific test case, since I don't want to run the whole test suite again?
mytestsuite.robot
```
testcase1
....
... | You want to use the option `-t` or `--test`, but the option goes *before* the name of the file rather than after. This should work:
```
robot -t testcase1 mytestsuite.robot
```
The order of the command line arguments is covered in the user guide under a section titled [Starting test execution](http://robotframework.o... |
Time python scripts using IPython magic | 25,005,594 | 9 | 2014-07-28T22:53:32Z | 25,005,695 | 17 | 2014-07-28T23:03:24Z | [
"python",
"profiling",
"ipython",
"ipython-magic"
] | How can I time the execution of a Python script using the iPython %time or %%timeit magic commands? For example, I have script.py and I'd like to know how long it takes to execute. Small nuance: script.py needs input parameter(s). The below doesn't seem to work.
```
%%time script.py input_param1 input_param2
``` | # Solution
Your can use:
```
%%timeit
%run script.py input_param1 input_param2
```
Beware that the script will be executed multiple times (the number is adaptive). To execute it only once (and get less accurate timing), change the first line to
```
%%timeit -n1 -r1
```
# Explanation
All the magic commands startin... |
How to plot ROC curve in Python | 25,009,284 | 6 | 2014-07-29T06:20:49Z | 25,009,504 | 8 | 2014-07-29T06:40:04Z | [
"python",
"matplotlib",
"plot",
"statistics",
"roc"
] | I am trying to plot a ROC curve to evaluate the accuracy of a prediction model I developed in Python using logistic regression packages. I have computed the true positive rate as well as the false positive rate; however, I am unable to figure out how to plot these correctly using `matplotlib` and calculate the AUC valu... | It is not at all clear what the problem is here, but if you have an array `true_positive_rate` and an array `false_positive_rate`, then plotting the ROC curve and getting the AUC is as simple as:
```
import matplotlib.pyplot as plt
import numpy as np
x = # false_positive_rate
y = # true_positive_rate
# This is the ... |
E731 do not assign a lambda expression, use a def | 25,010,167 | 61 | 2014-07-29T07:26:14Z | 25,010,243 | 65 | 2014-07-29T07:31:26Z | [
"python",
"lambda",
"pep"
] | I get this pep8 warning whenever I use lambda expressions. Are lambda expressions not recommended? If not why? | The recommendation in [PEP-8](http://www.python.org/dev/peps/pep-0008/#programming-recommendations) you are running into is:
> Always use a def statement instead of an assignment statement that
> binds a lambda expression directly to a name.
>
> Yes:
>
> ```
> def f(x): return 2*x
> ```
>
> No:
>
> ```
> f = lambda x:... |
E731 do not assign a lambda expression, use a def | 25,010,167 | 61 | 2014-07-29T07:26:14Z | 27,928,036 | 11 | 2015-01-13T17:32:28Z | [
"python",
"lambda",
"pep"
] | I get this pep8 warning whenever I use lambda expressions. Are lambda expressions not recommended? If not why? | Lattyware is absolutely right: Basically [PEP-8](http://legacy.python.org/dev/peps/pep-0008/#programming-recommendations) wants you to avoid things like
```
f = lambda x: 2 * x
```
and instead use
```
def f(x):
return 2 * x
```
However, as addressed in a recent [bugreport](https://github.com/jcrocholl/pep8/issu... |
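One practical payoff of the `def` form: the function gets a real name, which shows up in tracebacks, reprs, and profiler output. A quick comparison:

```python
f = lambda x: 2 * x      # the form E731 flags

def g(x):                # the preferred form
    return 2 * x

print(f.__name__)    # <lambda>  (every lambda shares this anonymous name)
print(g.__name__)    # g
print(f(3) == g(3))  # True, the behaviour is identical
```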
install scipy module on centos | 25,010,394 | 16 | 2014-07-29T07:40:59Z | 25,147,728 | 38 | 2014-08-05T20:27:44Z | [
"python",
"centos",
"scipy",
"install"
] | I'm using CentOS, and I installed Python 2.7 alongside the default Python. I'm able to install all the modules I want with pip, but I can't install scipy. When I run
```
sudo /usr/local/bin/pip2.7 install scipy
```
It's downloading it, but then I have this error message:
```
blas_mkl_info:
libraries mkl,vm... | Scipy requires two Linear Algebra libraries installed on your system. Make sure the following (or variants thereof) are installed,
```
sudo yum install blas-devel lapack-devel
``` |
How do I run parallel jobs in python3 with asyncio? | 25,012,574 | 5 | 2014-07-29T09:54:02Z | 25,027,955 | 7 | 2014-07-30T02:43:28Z | [
"python",
"python-asyncio"
] | Assuming, that I have a class like that:
```
class MyClass:
def __init__(self):
run_some_long_time_function()
```
How can I create many instances of this class in parallel using asyncio in python 3.4.1? | The asyncio event loop is single threaded, so nothing running on the event loop will run in parallel. You can however spawn a thread and wait for it to finish. The default executor should create a thread for you:
```
loop = asyncio.get_event_loop()
asyncio.async(loop.run_in_executor(None, lambda: MyClass()))
asyncio.... |
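`asyncio.async` has since been deprecated and removed; a present-day sketch of the same idea with `run_in_executor` (the `blocking_setup` function is an illustrative stand-in for the long-running `__init__` work):

```python
import asyncio
import time

def blocking_setup():
    # stand-in for the long-running work done in the constructor
    time.sleep(0.1)
    return "instance ready"

async def main():
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor, so the two
    # blocking calls run in worker threads, off the event loop
    return await asyncio.gather(
        loop.run_in_executor(None, blocking_setup),
        loop.run_in_executor(None, blocking_setup),
    )

print(asyncio.run(main()))  # ['instance ready', 'instance ready']
```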
Creating a defaultdict with empty numpy array | 25,014,298 | 3 | 2014-07-29T11:33:02Z | 25,014,320 | 7 | 2014-07-29T11:34:31Z | [
"python",
"numpy",
"collections",
"defaultdict"
] | I'm wondering if there's a more clever way to create a default dict from collections.
The dict should have an empty numpy ndarray as default value.
My best result so far is:
```
import collections
d = collections.defaultdict(lambda: numpy.ndarray(0))
```
However, I'm wondering if there's a possibility to skip the la... | You can use `functools.partial()` instead of a lambda:
```
from collections import defaultdict
from functools import partial
defaultdict(partial(numpy.ndarray, 0))
```
You *always* need a callable for `defaultdict()`, and `numpy.ndarray()` *always* needs at least one argument, so you cannot just pass in `numpy.ndarr... |
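`functools.partial` freezes the argument up front, so the factory becomes a zero-argument callable. Since NumPy may not be installed here, the sketch below uses an illustrative stand-in factory with the same shape as `numpy.ndarray(0)`:

```python
from collections import defaultdict
from functools import partial

def make_buffer(size):
    # stand-in for numpy.ndarray(size)
    return [0.0] * size

d = defaultdict(partial(make_buffer, 0))

print(d["missing"])  # []  (the zero-length default built on first access)
print(len(d))        # 1
```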
time data does not match format | 25,015,711 | 3 | 2014-07-29T12:48:19Z | 25,015,727 | 11 | 2014-07-29T12:49:07Z | [
"python",
"datetime"
] | I get the following error:
```
time data '07/28/2014 18:54:55.099000' does not match format '%d/%m/%Y %H:%M:%S.%f'
```
But I cannot see what parameter is wrong in `%d/%m/%Y %H:%M:%S.%f` ?
This is the code I use.
```
from datetime import datetime
time_value = datetime.strptime(csv_line[0] + '000', '%d/%m/%Y %H:%M:%S... | You have the month and day swapped:
```
'%m/%d/%Y %H:%M:%S.%f'
```
`28` will never fit in the range for the `%m` month parameter otherwise.
With `%m` and `%d` in the correct order parsing works:
```
>>> from datetime import datetime
>>> datetime.strptime('07/28/2014 18:54:55.099000', '%m/%d/%Y %H:%M:%S.%f')
datetim... |
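Trying both orderings side by side makes the failure obvious; a short sketch:

```python
from datetime import datetime

raw = '07/28/2014 18:54:55.099000'

# correct: month first
parsed = datetime.strptime(raw, '%m/%d/%Y %H:%M:%S.%f')
print(parsed.month, parsed.day, parsed.microsecond)  # 7 28 99000

# wrong order: 28 cannot be a month, so strptime raises ValueError
try:
    datetime.strptime(raw, '%d/%m/%Y %H:%M:%S.%f')
except ValueError as exc:
    print(exc)
```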
Why there are no quotes when using dictionary based string formatting in Python? | 25,019,341 | 3 | 2014-07-29T15:38:14Z | 25,019,394 | 10 | 2014-07-29T15:41:25Z | [
"python",
"python-2.7"
] | I may have a silly question:
```
x = {'a':1,'b':2,'c':3}
print "%(a)s" % x # works fine
print "%('a')s" % x # failed
```
Could someone tell me why? | `%(a)s` retrieves the value for the existing `a` key from `x`.
`%('a')s` attempts to get the value for non-existent `'a'` key (with quotes) and that causes the error to appear.
Simple code example:
```
>>> x = {"'a'": 'a within quotes', 'a': 'no quotes'}
>>> "%('a')s" % x
'a within quotes'
>>> "%(a)s" % x
'no quotes... |
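Whatever sits between `%(` and `)s` is used verbatim as the dictionary key, which the failing case makes explicit:

```python
x = {'a': 1, 'b': 2, 'c': 3}

print("%(a)s" % x)      # 1
try:
    "%('a')s" % x       # looks up the key 'a' *including the quotes*
except KeyError as exc:
    print("missing key:", exc)
```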
Django 1.7 where to put the code to add Groups programatically? | 25,024,795 | 11 | 2014-07-29T20:52:39Z | 25,803,284 | 15 | 2014-09-12T07:46:51Z | [
"python",
"django",
"django-models",
"django-admin"
] | I have been trying to find the answer in the Django Auth docs, but can not seem to find what i am looking for.
The problem I am having is, when I define the code for adding Groups (same as Groups in the admin page):
```
#read_only
group, created = Group.objects.get_or_create(name='read_only')
if created:
group... | I was recommended [this](https://docs.djangoproject.com/en/dev/topics/migrations/#data-migrations) way to do it:
Create a fake migration in the appropriate module:
```
python manage.py makemigrations --empty yourappname
```
Open up the file that was created, which should look like this:
```
# -*- coding: utf-8 -*-
... |
Max and Min date in pandas groupby | 25,024,797 | 10 | 2014-07-29T20:52:45Z | 25,025,065 | 12 | 2014-07-29T21:12:10Z | [
"python",
"pandas",
"dataframe"
] | I have a dataframe that looks like:
```
data = {'index': ['2014-06-22 10:46:00', '2014-06-24 19:52:00', '2014-06-25 17:02:00', '2014-06-25 17:55:00', '2014-07-02 11:36:00', '2014-07-06 12:40:00', '2014-07-05 12:46:00', '2014-07-27 15:12:00'],
'type': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'C'],
'sum_col': [1, 2, ... | You need to combine the functions that apply to the same column, like this:
```
In [116]: gb.agg({'sum_col' : np.sum,
...: 'date' : [np.min, np.max]})
Out[116]:
date sum_col
amin amax sum
type weekofyear
A ... |
Log scale using Bokeh's scatter function | 25,025,021 | 6 | 2014-07-29T21:08:44Z | 25,025,224 | 11 | 2014-07-29T21:23:04Z | [
"python",
"bokeh"
] | How do I get log scales when using Bokeh's `scatter` function? I'm looking for something like the following:
```
scatter(x, y, source=my_source, ylog=True)
```
or
```
scatter(x, y, source=my_source, yscale='log')
``` | Something along these lines will work:
```
import numpy as np
from bokeh.plotting import *
N = 100
x = np.linspace(0.1, 5, N)
output_file("logscatter.html", title="log axis scatter example")
figure(tools="pan,wheel_zoom,box_zoom,reset,previewsave",
y_axis_type="log", y_range=[0.1, 10**2], title="log axis sc... |
break the function after certain time | 25,027,122 | 4 | 2014-07-30T00:42:39Z | 25,027,182 | 7 | 2014-07-30T00:49:29Z | [
"python",
"subprocess"
] | In python, for a toy example:
```
for x in range(0, 3):
# call function A(x)
```
I want the for loop to continue if function A takes more than 5 seconds, by skipping it so I won't get stuck or waste time.
From some searching, I realized that subprocess or threading may help, but I have no idea how to implement it here.
Any h... | I think creating a new process may be overkill. If you're on Mac or a Unix-based system, you should be able to use signal.SIGALRM to forcibly time out functions that take too long. This will work on functions that are idling for network or other issues that you absolutely can't handle by modifying your function. I have... |
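A minimal, Unix-only sketch of the `SIGALRM` approach the answer describes (whole-second resolution, main thread only; the helper and exception names are illustrative):

```python
import signal
import time

class CallTimedOut(Exception):
    """Raised when the watched call exceeds its time budget."""

def _on_alarm(signum, frame):
    raise CallTimedOut()

def run_with_timeout(func, seconds):
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)          # schedule SIGALRM
    try:
        return func()
    finally:
        signal.alarm(0)            # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)

print(run_with_timeout(lambda: "fast", 5))  # fast
try:
    run_with_timeout(lambda: time.sleep(10), 1)
except CallTimedOut:
    print("skipped a slow call")
```

In the question's loop this would wrap the call to `A(x)` and `continue` on `CallTimedOut`.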
Using Selenium on Raspberry Pi headless | 25,027,385 | 9 | 2014-07-30T01:20:24Z | 25,726,038 | 18 | 2014-09-08T14:00:35Z | [
"python",
"selenium",
"raspberry-pi",
"iceweasel"
] | This is my first time trying to run Selenium on a raspberry pi using the Iceweasel browser.
I tried a simple test this evening
```
# selenium test for /mod2
# verify: posts, and page name
class TestMod2Selenium(unittest.TestCase):
def setUp(self):
self.driver = webdriver.Firefox()
def test_validate_p... | This works for me on Raspberry Pi headless:
Installation:
```
sudo apt-get install python-pip iceweasel xvfb
sudo pip install pyvirtualdisplay selenium
```
Code:
```
from selenium import webdriver
from pyvirtualdisplay import Display
display = Display(visible=0, size=(800, 600))
display.start()
driver = webdriver... |
use a css stylesheet on a jinja2 template | 25,034,812 | 6 | 2014-07-30T10:45:11Z | 25,036,606 | 11 | 2014-07-30T12:16:17Z | [
"python",
"html",
"css",
"flask",
"jinja2"
] | I am making a website using html, css, flask and jinja2.
I have a page working on a Flask server; the buttons, labels, etc. are displayed, but the CSS stylesheet I have is not loaded.
How would I link a stylesheet to a jinja2 template? I have looked around on the internet but cannot find out how.
Here is the cs... | All public files (the ones that are not processed, like templates or python files) should be placed into dedicated static folders. By default, Flask has one static folder called `static`.
That should fix your problem:
1. Move `/templates/styles.css` to `/static/styles.css`
2. Update your code with following code, t... |
Is it possible to limit Flask POST data size on a per-route basis? | 25,036,498 | 7 | 2014-07-30T12:10:43Z | 25,036,561 | 13 | 2014-07-30T12:14:01Z | [
"python",
"python-3.x",
"flask"
] | I am aware it is possible to [set an overall limit on request size](http://flask.pocoo.org/docs/patterns/fileuploads/) in Flask with:
```
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024
```
BUT I want to ensure that one specific route will not accept POST data over a certain size.
Is this possible? | You'll need to check this for the specific route itself; you can always test the content length; [`request.content_length`](http://werkzeug.pocoo.org/docs/wrappers/#werkzeug.wrappers.CommonRequestDescriptorsMixin.content_length) is either `None` or an integer value:
```
cl = request.content_length
if cl is not None an... |
python, x = """"""; lexed as triple quotes or 3 pairs of quotes | 25,038,433 | 2 | 2014-07-30T13:41:27Z | 25,038,626 | 8 | 2014-07-30T13:51:07Z | [
"python"
] | In python you can say this:
```
x = """""" # x = ''
```
Does the Python lexer see this as two triple quotes with nothing inside? I.e. along the lines of `x = """ """` (with no space)?
This was my immediate thought. However, this is possible in python:
```
>>> "4" "5"
'45'
>>> # and
>>> "4""5"
'45'
```
So I can see... | You can tell by using the [tokenizer](https://docs.python.org/2/library/tokenize.html):
```
>>> from StringIO import StringIO
>>> from tokenize import generate_tokens as gt
>>> from pprint import pprint as pp
>>> code = 'x=""""""'
>>> codeio = StringIO(code)
>>> tokens = list(gt(codeio.readline))
>>> pp(tokens)
[(1, '... |
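A Python 3 version of the same probe (the Python 2 `StringIO` import differs); the lexer does indeed produce a single empty triple-quoted string rather than three empty pairs:

```python
import io
import tokenize

code = 'x=""""""\n'
tokens = list(tokenize.generate_tokens(io.StringIO(code).readline))
strings = [tok.string for tok in tokens if tok.type == tokenize.STRING]

print(strings)         # ['""""""']  (one STRING token)
print(eval('""""""'))  # the empty string
```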
find numeric columns in pandas (python) | 25,039,626 | 15 | 2014-07-30T14:36:21Z | 28,155,580 | 17 | 2015-01-26T17:39:29Z | [
"python",
"types",
"pandas"
] | say df is a pandas DataFrame.
I would like to find all columns of numeric type.
something like:
```
isNumeric = is_numeric(df)
``` | You could use `select_dtypes` method of DataFrame. It includes two parameters include and exclude. So isNumeric would look like:
```
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
newdf = df.select_dtypes(include=numerics)
``` |
find numeric columns in pandas (python) | 25,039,626 | 15 | 2014-07-30T14:36:21Z | 34,530,065 | 8 | 2015-12-30T13:00:51Z | [
"python",
"types",
"pandas"
] | say df is a pandas DataFrame.
I would like to find all columns of numeric type.
something like:
```
isNumeric = is_numeric(df)
``` | You can use the following command to filter only numeric columns
```
df._get_numeric_data()
```
Example
```
In [32]: data
Out[32]:
A B
0 1 s
1 2 s
2 3 s
3 4 s
In [33]: data._get_numeric_data()
Out[33]:
A
0 1
1 2
2 3
3 4
``` |
Get a list of values from a list of dictionaries in python | 25,040,875 | 4 | 2014-07-30T15:30:10Z | 25,040,901 | 8 | 2014-07-30T15:31:23Z | [
"python",
"list",
"dictionary"
] | I have a list of dictionaries, and I need to get a list of the values for a given key from each dictionary (all the dictionaries have that same key).
For example, I have:
```
l = [ { "key": 1, "Val1": 'val1 from element 1', "Val2": 'val2 from element 1' },
{ "key": 2, "Val1": 'val1 from element 2', "Val2": 'v... | Using a simple [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) (if you're sure every dictionary has the key):
```
In [10]: [d['key'] for d in l]
Out[10]: [1, 2, 3]
```
Otherwise you'll need to check for existence first:
```
In [11]: [d['key'] for d in l if 'key' in d]... |
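When some dictionaries may lack the key, `dict.get` is an alternative that keeps a placeholder instead of dropping the entry; a small sketch (the keyless third dict is made up):

```python
l = [{"key": 1, "Val1": "val1 from element 1"},
     {"key": 2, "Val1": "val1 from element 2"},
     {"other": "no key here"}]

# dict.get returns None (or any chosen default) when the key is missing
values = [d.get("key") for d in l]
print(values)  # [1, 2, None]
```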
Matplotlib timelines | 25,041,905 | 5 | 2014-07-30T16:17:51Z | 25,042,264 | 8 | 2014-07-30T16:36:36Z | [
"python",
"matplotlib",
"pandas",
"timeline"
] | I'm looking to take a python DataFrame with a bunch of timelines in it and plot these in a single figure. The DataFrame indices are Timestamps and there's a specific column, we'll call "sequence", that contains strings like "A" and "B". So the DataFrame looks something like this:
```
+--------------------------+---+
|... | I would just map each category to a y-value using a dictionary.
```
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas
categories = list('ABCD')
# map categories to y-values
cat_dict = dict(zip(categories, range(1, len(categories)+1)))
# map y-values to categories
val_dict = dict(zip(ran... |
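Separate from the plotting itself, the category-to-y-value mapping at the core of this answer can be sketched on its own:

```python
categories = list("ABCD")

# Map each category string to a numeric y position (A -> 1, B -> 2, ...)
cat_dict = dict(zip(categories, range(1, len(categories) + 1)))

# A hypothetical "sequence" column then becomes plottable y values
ys = [cat_dict[c] for c in ["A", "B", "A", "D"]]
print(ys)  # [1, 2, 1, 4]
```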
sort a 2D list first by 1st column and then by 2nd column | 25,046,306 | 3 | 2014-07-30T20:27:50Z | 25,046,328 | 7 | 2014-07-30T20:29:16Z | [
"python",
"list",
"sorting"
] | I am trying to find a nice way to sort a 2D list, first by the 1st value, and then by the 2nd value.
I think an example will explain it best.
If I have a list
```
[[1,4],
[2,7],
[10,1],
[1,2],
[10,6]
[2,1]]
```
I want it to be sorted like this
```
[[1,2],
[1,4],
[2,1],
[2,7],
[10,1],
[10,6]]
``` | ```
l=[[1,4],
[2,7],
[10,1],
[1,2],
[10,6],
[2,1]]
print sorted(l,key=lambda x: (x[0],x[1])) # use lambda to sort by "x[0]"-> first element of the sublists or x[1] -> second element, if its a tie
[[1, 2], [1, 4], [2, 1], [2, 7], [10, 1], [10, 6]]
```
Or simply `sorted(l)` or `l.sort()`, as your elements sort naturally.... |
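`operator.itemgetter` builds the same composite sort key without a lambda, which is arguably clearer (and typically a little faster):

```python
from operator import itemgetter

l = [[1, 4], [2, 7], [10, 1], [1, 2], [10, 6], [2, 1]]

# itemgetter(0, 1) builds the same (first, second) key tuple as the lambda
result = sorted(l, key=itemgetter(0, 1))
print(result)  # [[1, 2], [1, 4], [2, 1], [2, 7], [10, 1], [10, 6]]
```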
assertRaises in python unit-test not catching the exception | 25,047,256 | 4 | 2014-07-30T21:25:16Z | 25,047,330 | 7 | 2014-07-30T21:29:43Z | [
"python",
"unit-testing",
"assertraises"
] | Can somebody tell me why the following unit-test is failing on the
ValueError in test\_bad, rather than catching it with assertRaises
and succeeding? I think I'm using the correct procedure and syntax,
but the ValueError is not getting caught.
I'm using Python 2.7.5 on a linux box.
Here is the code …
```
import un... | Unittest's [assertRaises](https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertRaises) takes a callable and arguments, so in your case, you'd call it like:
```
self.assertRaises(ValueError, self.isone.is_one, 2)
```
If you prefer, as of Python2.7, you could also use it as a context manager like:
`... |
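A sketch of the context-manager form; the `IsOne` class is made up here since the question's code is cut off:

```python
import unittest

class IsOne(object):
    def is_one(self, n):
        if n != 1:
            raise ValueError("%r is not one" % n)

class TestIsOne(unittest.TestCase):
    def setUp(self):
        self.isone = IsOne()

    def test_bad(self):
        # Context-manager form: the call happens inside the with block
        with self.assertRaises(ValueError):
            self.isone.is_one(2)

suite = unittest.TestLoader().loadTestsFromTestCase(TestIsOne)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```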
Finding a sum in nested list using a lambda function | 25,047,561 | 7 | 2014-07-30T21:47:20Z | 25,047,602 | 10 | 2014-07-30T21:51:14Z | [
"python",
"lambda",
"functional-programming",
"sum"
] | I have a data structure similar to this
```
table = [
("marley", "5"),
("bob", "99"),
("another name", "3")
]
```
What I would like to do is get the sum of the 2nd column (5 + 99 + 3) functionally, like this:
```
total = sum(table, lambda tup : int(tup[1]))
```
That would be similar syntax to the python... | One approach is to use a [generator expression](https://docs.python.org/2/tutorial/classes.html#generator-expressions):
```
total = sum(int(v) for name,v in table)
``` |
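If you want something closer to the key-function style the question asks for, `map` with a lambda also works:

```python
table = [("marley", "5"), ("bob", "99"), ("another name", "3")]

# Apply the key function to every tuple, then sum the results
total = sum(map(lambda tup: int(tup[1]), table))
print(total)  # 107
```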
Split a string by backslash in python | 25,047,976 | 8 | 2014-07-30T22:23:42Z | 25,047,988 | 15 | 2014-07-30T22:25:33Z | [
"python"
] | Simple question, but I've been struggling with it for too long. Basically I want to split a string by \ (backslash).
```
a = "1\2\3\4"
```
Tried to escape the backslash but it doesn't seem to work:
```
print(a.split('\'))
print(a.split('"\"'))
print(a.split('\\'))
print(a.split('"\\"'))
```
I want to get thi... | You have the right idea with escaping the backslashes, but despite how it looks, your input string doesn't actually have any backslashes in it. You need to escape them in the input, too!
```
>>> a = "1\\2\\3\\4" # Note the doubled backslashes here!
>>> print(a.split('\\')) # Split on '\\'
['1', '2', '3', '4']
```
Y... |
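A raw string literal is the usual way to get real backslashes into the input in the first place; note what the un-prefixed literal from the question actually contains:

```python
# The r prefix stops Python from treating \2, \3, \4 as octal escapes
a = r"1\2\3\4"
print(a.split("\\"))  # ['1', '2', '3', '4']

# Without it, each \N collapses to a single control character
plain = "1\2\3\4"
print(len(plain))  # 4 -- '1' plus three control chars, and no backslashes
```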
Failed to catch syntax error python | 25,049,498 | 8 | 2014-07-31T01:18:50Z | 25,049,535 | 13 | 2014-07-31T01:25:12Z | [
"python",
"error-handling"
] | ```
try:
x===x
except SyntaxError:
print "You cannot do that"
```
outputs
```
x===x
^
SyntaxError: invalid syntax
```
this does not catch it either
```
try:
x===x
except:
print "You cannot do that"
```
Other errors like NameError, ValueError, are catchable.
Thoughts?
System specs:
```... | You can only catch `SyntaxError` if it's thrown out of an `eval` or `exec` operation.
```
>>> try:
... eval('x === x')
... except SyntaxError:
... print "You cannot do that"
...
You cannot do that
```
This is because, normally, the interpreter parses the *entire file* before executing any of it, so it detects ... |
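`compile()` parses source without executing it, so it raises a catchable `SyntaxError` the same way `eval`/`exec` do; a small sketch:

```python
code = "x === x"

# compile() parses without running anything, so the error is catchable
try:
    compile(code, "<string>", "exec")
    caught = False
except SyntaxError:
    caught = True

print(caught)  # True
```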
Failed to catch syntax error python | 25,049,498 | 8 | 2014-07-31T01:18:50Z | 25,049,539 | 8 | 2014-07-31T01:25:25Z | [
"python",
"error-handling"
] | ```
try:
x===x
except SyntaxError:
print "You cannot do that"
```
outputs
```
x===x
^
SyntaxError: invalid syntax
```
this does not catch it either
```
try:
x===x
except:
print "You cannot do that"
```
Other errors like NameError, ValueError, are catchable.
Thoughts?
System specs:
```... | `SyntaxError`s get raised when the file/code is [**parsed**](https://docs.python.org/2/library/exceptions.html#exceptions.SyntaxError), not when that line of code is executed. The reason for this is simple -- If the syntax is wrong at a single point in the code, the parser can't continue so all code after that line is ... |
Is 'encoding is an invalid keyword' error inevitable in python 2.x? | 25,049,962 | 6 | 2014-07-31T02:34:34Z | 25,050,323 | 10 | 2014-07-31T03:22:07Z | [
"python",
"python-2.7",
"python-3.x",
"encoding",
"utf-8"
] | [Ansi to UTF-8 using python causing error](http://stackoverflow.com/questions/24893173/ansi-to-utf-8-using-python-causing-error)
I tried the answer there to convert ansi to utf-8.
```
import io
with io.open(file_path_ansi, encoding='latin-1', errors='ignore') as source:
with open(file_path_utf8, mode='w', encodi... | For Python 2.7, use `io.open()` in both locations.
```
import io
import shutil
with io.open('/etc/passwd', encoding='latin-1', errors='ignore') as source:
with io.open('/tmp/goof', mode='w', encoding='utf-8') as target:
shutil.copyfileobj(source, target)
```
The above program runs without errors on my PC. |
How to filter in NaN (pandas)? | 25,050,141 | 10 | 2014-07-31T02:57:26Z | 25,050,179 | 11 | 2014-07-31T03:02:10Z | [
"python",
"pandas",
"null"
] | I have a pandas dataframe (df), and I want to do something like:
```
newdf = df[(df.var1 == 'a') & (df.var2 == NaN)]
```
I've tried replacing NaN with `np.NaN`, or `'NaN'` or `'nan'` etc, but nothing evaluates to True. There's no `pd.NaN`.
I can use `df.fillna(np.nan)` before evaluating the above expression but that... | This doesn't work because `NaN` isn't equal to anything, including `NaN`. Use `pd.isnull(df.var2)` instead. |
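Applied to the question's expression, the fix would look roughly like this (the frame here is made up):

```python
import numpy as np
import pandas as pd

# Made-up frame standing in for the question's df
df = pd.DataFrame({"var1": ["a", "a", "b"], "var2": [np.nan, 1.0, np.nan]})

# NaN != NaN, so equality always fails; pd.isnull tests for it correctly
newdf = df[(df.var1 == "a") & (pd.isnull(df.var2))]
print(len(newdf))  # 1
```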
Extract first item of each sublist in python | 25,050,311 | 18 | 2014-07-31T03:21:13Z | 25,050,328 | 28 | 2014-07-31T03:22:27Z | [
"python",
"list"
] | I am wondering what is the best way to extract the first item of each sublist in a list of lists and append it to a new list. So if I have:
```
lst = [[a,b,c], [1,2,3], [x,y,z]]
```
and I want to pull out a, 1 and x and create a separate list from those.
I tried:
```
lst2.append(x[0] for x in lst)
``` | Using [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
>>> lst = [['a','b','c'], [1,2,3], ['x','y','z']]
>>> lst2 = [item[0] for item in lst]
>>> lst2
['a', 1, 'x']
``` |
Extract first item of each sublist in python | 25,050,311 | 18 | 2014-07-31T03:21:13Z | 25,050,572 | 13 | 2014-07-31T03:51:27Z | [
"python",
"list"
] | I am wondering what is the best way to extract the first item of each sublist in a list of lists and append it to a new list. So if I have:
```
lst = [[a,b,c], [1,2,3], [x,y,z]]
```
and I want to pull out a, 1 and x and create a separate list from those.
I tried:
```
lst2.append(x[0] for x in lst)
``` | You could use zip:
```
>>> lst=[[1,2,3],[11,12,13],[21,22,23]]
>>> zip(*lst)[0]
(1, 11, 21)
```
Or (my favorite), use numpy:
```
>>> import numpy as np
>>> a=np.array([[1,2,3],[11,12,13],[21,22,23]])
>>> a
array([[ 1, 2, 3],
[11, 12, 13],
[21, 22, 23]])
>>> a[:,0]
array([ 1, 11, 21])
``` |
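One caveat: on Python 3, `zip` returns an iterator, so the subscript needs a `list()` (or `next()`) first:

```python
lst = [[1, 2, 3], [11, 12, 13], [21, 22, 23]]

# zip(*lst)[0] raises TypeError on Python 3; materialize the iterator first
first = list(zip(*lst))[0]
print(first)  # (1, 11, 21)
```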
OrderedDict vs Dict in python | 25,056,387 | 13 | 2014-07-31T10:18:49Z | 25,057,250 | 8 | 2014-07-31T11:01:36Z | [
"python",
"dictionary",
"ordereddictionary"
] | In [Tim Peter's answer](http://stackoverflow.com/a/18951209/1860929) to "Are there any reasons not to use an ordered dictionary", he says
> OrderedDict is a subclass of dict.
>
> It's not a lot slower, but at least doubles the memory over using a plain dict.
Now, while going through a [particular question](http://sta... | I think the problem with size is due to the fact that there's no `__sizeof__` method defined in Python 2.X [implementation of `OrderedDict`](http://hg.python.org/cpython/file/2.7/Lib/collections.py#l26), so it simply falls back to dict's `__sizeof__` method.
To prove this, I've created a class `A` which exten... |
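The fallback can be observed with `sys.getsizeof` (exact numbers vary by Python version and platform):

```python
import sys
from collections import OrderedDict

d = dict.fromkeys(range(100))
od = OrderedDict.fromkeys(range(100))

# On Python 2.7 both report the same size (OrderedDict inherits
# dict.__sizeof__); Python 3 reports the OrderedDict's true, larger footprint.
print(sys.getsizeof(d), sys.getsizeof(od))
```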
Get the mean across multiple Pandas DataFrames | 25,057,835 | 8 | 2014-07-31T11:32:09Z | 25,058,102 | 9 | 2014-07-31T11:44:59Z | [
"python",
"numpy",
"pandas"
] | I'm generating a number of dataframes with the same shape, and I want to compare them to one another. I want to be able to get the mean and median across the dataframes.
```
Source.0 Source.1 Source.2 Source.3
cluster
0 0.001182 0.184535 0.814230 0.000054
1... | Assuming the two dataframes have the same columns, you could just concatenate them and compute your summary stats on the concatenated frames:
```
import numpy as np
import pandas as pd
# some random data frames
df1 = pd.DataFrame(dict(x=np.random.randn(100), y=np.random.randint(0, 5, 100)))
df2 = pd.DataFrame(dict(x=... |
Get the mean across multiple Pandas DataFrames | 25,057,835 | 8 | 2014-07-31T11:32:09Z | 25,059,620 | 9 | 2014-07-31T13:04:46Z | [
"python",
"numpy",
"pandas"
] | I'm generating a number of dataframes with the same shape, and I want to compare them to one another. I want to be able to get the mean and median across the dataframes.
```
Source.0 Source.1 Source.2 Source.3
cluster
0 0.001182 0.184535 0.814230 0.000054
1... | I take a similar approach to @ali\_m, but since you want one mean per row-column combination, I conclude differently:
```
df1 = pd.DataFrame(dict(x=np.random.randn(100), y=np.random.randint(0, 5, 100)))
df2 = pd.DataFrame(dict(x=np.random.randn(100), y=np.random.randint(0, 5, 100)))
df = pd.concat([df1, df2])
foo = df.groupby(leve... |
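Spelled out in full, the concat-then-groupby idea looks roughly like this (the small frames are made up and share an index):

```python
import pandas as pd

# Two same-shaped, made-up frames sharing an index
df1 = pd.DataFrame({"x": [1.0, 2.0], "y": [3.0, 4.0]})
df2 = pd.DataFrame({"x": [3.0, 4.0], "y": [5.0, 6.0]})

df = pd.concat([df1, df2])

# Grouping on the index level averages matching cells across the frames
means = df.groupby(level=0).mean()
print(means.loc[0, "x"], means.loc[1, "y"])  # 2.0 5.0
```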
Correct Regex-Syntax for re.sub replacement in Python | 25,058,613 | 3 | 2014-07-31T12:14:14Z | 25,058,657 | 7 | 2014-07-31T12:16:44Z | [
"python",
"regex"
] | I'm currently looking for the correct regex to replace something like this in Python:
old\_string contains:
```
some text
some text
<!-V
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat
V->
some text
```
What I've tried to replace... | By default, `.` doesn't match a line break character. If you want `.` to match those too, use the [`DOTALL`](https://docs.python.org/2/library/re.html#re.DOTALL) flag:
```
new_string = re.sub(r"<!-V(.*?)V->", '', old_string, flags=re.DOTALL)
```
There's also the option of replacing `.` with a character class that mat... |
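For completeness, the character-class variant mentioned above: `[\s\S]` matches any character, newlines included, with no flag needed:

```python
import re

old_string = "some text\n<!-V\nLorem ipsum\nV->\nsome text"

# [\s\S] matches absolutely any character, newlines included, no flag needed
new_string = re.sub(r"<!-V[\s\S]*?V->", "", old_string)
print("Lorem" in new_string)  # False
```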
Make a Python class throw an error when creating a new property | 25,061,792 | 3 | 2014-07-31T14:40:24Z | 25,061,917 | 8 | 2014-07-31T14:45:04Z | [
"python",
"class",
"python-2.7",
"attributes"
] | Let's say this is my class:
```
class A:
def __init__(self):
self.good_attr = None
self.really_good_attr = None
self.another_good_attr = None
```
Then a caller can set the values on those variables:
```
a = A()
a.good_attr = 'a value'
a.really_good_attr = 'better value'
a.another_good_att... | You can hook into attribute setting with the `__setattr__` method. This method is called for *all* attribute setting, so take into account it'll be called for your 'correct' attributes too:
```
class A(object):
good_attr = None
really_good_attr = None
another_good_attr = None
def __setattr__(self, nam... |
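A completed sketch of the idea; the method body is cut off above, so this rejection logic is an assumption:

```python
class A(object):
    good_attr = None
    really_good_attr = None
    another_good_attr = None

    def __setattr__(self, name, value):
        # Reject names that don't already exist on the class
        if not hasattr(self, name):
            raise AttributeError("%s is not a valid attribute of A" % name)
        super(A, self).__setattr__(name, value)

a = A()
a.good_attr = "a value"  # accepted: matches a declared attribute
try:
    a.goood_attr = "oops"  # typo: rejected
    rejected = False
except AttributeError:
    rejected = True

print(a.good_attr, rejected)  # a value True
```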
UnicodeDecodeError when using Python 2.x unicodecsv | 25,062,937 | 6 | 2014-07-31T15:26:02Z | 25,062,993 | 7 | 2014-07-31T15:28:14Z | [
"python",
"unicode",
"python-unicode"
] | I'm trying to write out a csv file with Unicode characters, so I'm using the unicodecsv package. Unfortunately, I'm still getting UnicodeDecodeErrors:
```
# -*- coding: utf-8 -*-
import codecs
import unicodecsv
raw_contents = 'He observes an “Oversized Gorilla” near Ashford'
encoded_contents = unicode(raw_conten... | You should *not* use `codecs.open()` for your file. `unicodecsv` wraps the `csv` module, which always writes a *byte string* to the open file object. In order to write that byte string to a Unicode-aware file object such as returned by `codecs.open()`, it is implicitly *decoded*; this is where your `UnicodeDecodeError`... |