title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
TypeError: can't multiply sequence by non-int of type 'float' 3.3 | 18,668,203 | 3 | 2013-09-06T23:49:29Z | 18,668,250 | 7 | 2013-09-06T23:56:13Z | [
"python",
"python-3.x"
] | OK, I have edited the code to where it would hopefully work, but I get the TypeError: can't multiply sequence by non-int of type 'float'.
Here's the code that I have:
```
uTemp = input("Enter Temperature Variable: ")
cOrF = input("Do you want C for celcius, or F for Farehnheit?: ")
if cOrF:
F = 1.8 * uTemp + 32
``... | The error is telling you that you can't multiply `uTemp`, a string, by a floating-point number (`1.8`). Which makes perfect sense, right? What is eight tenths of a string? Convert `uTemp` to a float:
```
uTemp = float(input("Enter Temperature Variable: "))
```
Your next problem is that `cOrF` is treated as a Boolean ... |
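A runnable sketch of the corrected program from the answer above; the function name and the pass-through behaviour for Celsius input are illustrative assumptions, not part of the original answer:

```python
def convert_temperature(temp, scale):
    """Convert a Celsius reading to Fahrenheit when scale is 'F'."""
    temp = float(temp)          # str * float raises TypeError, so cast first
    if scale.upper() == "F":
        return 1.8 * temp + 32  # Celsius -> Fahrenheit
    return temp                 # assumed: leave Celsius values unchanged

print(convert_temperature("100", "F"))  # 212.0
```

In the real program, `temp` would come from `input()`, which always returns a string; the `float()` call is the actual fix.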
How do I set sys.argv so I can unit test it? | 18,668,947 | 9 | 2013-09-07T02:03:35Z | 27,765,993 | 10 | 2015-01-04T13:29:54Z | [
"python"
] | I would like to set
```
sys.argv
```
so I can unit test passing in different combinations. The following doesn't work:
```
#!/usr/bin/env python
import argparse, sys
def test_parse_args():
global sys.argv
sys.argv = ["prog", "-f", "/home/fenton/project/setup.py"]
setup = get_setup_file()
assert setup... | Changing sys.argv at runtime is a pretty fragile way of testing. You should use [mock](http://www.voidspace.org.uk/python/mock/index.html)'s [patch](http://www.voidspace.org.uk/python/mock/patch.html) functionality, which can be used as a context manager to substitute one object (or attribute, method, function, etc.) w... |
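The `patch` approach from the answer can be sketched as follows; `get_setup_file` and its `-f` option are reconstructed from the question, so treat the details as illustrative:

```python
import argparse
from unittest import mock  # mock has been in the stdlib since Python 3.3

def get_setup_file():
    parser = argparse.ArgumentParser()
    parser.add_argument("-f", "--file")
    args = parser.parse_args()  # reads sys.argv[1:] by default
    return args.file

def test_parse_args():
    # sys.argv is swapped only inside the with-block, then restored
    with mock.patch("sys.argv", ["prog", "-f", "/tmp/setup.py"]):
        assert get_setup_file() == "/tmp/setup.py"

test_parse_args()
```

Because `patch` restores the original `sys.argv` on exit, tests stay isolated even if one of them fails midway.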
Is it possible to write single line return statement with if statement? | 18,669,836 | 7 | 2013-09-07T04:49:58Z | 18,669,843 | 19 | 2013-09-07T04:50:56Z | [
"python"
] | Is it possible to return from a method in a single line in Python?
Looking for something like this
```
return None if x is None
```
Tried above, and it is invalid syntax
I could easily do:
```
if x is None:
return None
```
But I'm just curious whether I can combine the above if statement into a single line. | Yes, it's called a [conditional expression](http://docs.python.org/2/reference/expressions.html#conditional-expressions):
```
return None if x is None else something_else
```
You need an `else something` in a conditional for it to work. |
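A small self-contained illustration of the answer's point that a conditional expression must produce a value on both branches (the function is invented for demonstration):

```python
def double_or_none(x):
    # A conditional expression is a value, so both the "if" and the
    # "else" branch must yield something -- hence the mandatory else.
    return None if x is None else x * 2

assert double_or_none(None) is None
assert double_or_none(3) == 6
```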
Is it possible to write single line return statement with if statement? | 18,669,836 | 7 | 2013-09-07T04:49:58Z | 18,669,891 | 20 | 2013-09-07T04:58:18Z | [
"python"
] | Is it possible to return from a method in a single line in Python?
Looking for something like this
```
return None if x is None
```
Tried above, and it is invalid syntax
I could easily do:
```
if x is None:
return None
```
But I'm just curious whether I can combine the above if statement into a single line. | It is possible to write a standard "if" statement on a single line:
```
if x is None: return None
```
However, the [PEP 8 style guide](http://www.python.org/dev/peps/pep-0008/) recommends against doing this:
> Compound statements (multiple statements on the same line) are generally discouraged |
Launching an app in heroku? What is procfile? 'web:' command? | 18,670,186 | 6 | 2013-09-07T05:46:01Z | 18,670,597 | 18 | 2013-09-07T06:47:38Z | [
"python",
"linux",
"heroku",
"web",
"flask"
] | I was referring to this site as I am learning Python/Flask and trying to use Heroku.
<http://ryaneshea.com/lightweight-python-apps-with-flask-twitter-bootstrap-and-heroku>
Let me explain everything I did, so that anyone who is stuck like me can get the picture.
I am using the BackTrack Linux command line.
1. I started vi... | The Procfile tells Heroku which commands should be run (<https://devcenter.heroku.com/articles/procfile>).
You are able to define different process types, such as web (the only one which will autostart by default), workers, etc...
So basically a Procfile containing
```
web: python app.py
```
is telling Heroku to st... |
How to suppress noisy factory started/stopped log messages from Twisted? | 18,670,252 | 3 | 2013-09-07T05:56:08Z | 18,673,435 | 9 | 2013-09-07T12:42:47Z | [
"python",
"logging",
"twisted"
] | I am using twisted.web.client.Agent which in turn is using HTTP11ClientProtocol. The innumerable factory started / stopped messages are obscuring the messages that I am actually interested in. So, I am looking for a way to suppress them. Is there any?
```
2013-09-07 11:03:15+0530 [-] Starting factory <twisted.web.clie... | If you want every `_HTTP11ClientFactory` in the application to hush up, you could just set the `noisy` property at the class level:
```
from twisted.web import client
client._HTTP11ClientFactory.noisy = False
```
A more flexible approach would be to create a custom `HTTPConnectionPool` that uses quiet factories and p... |
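The class-level override works because instances look up attributes they don't own on their class; a toy illustration of the same pattern without Twisted:

```python
class Factory:
    noisy = True  # class-level default, like Twisted's client factories

existing = Factory()
Factory.noisy = False  # flips the default for every instance at once

assert existing.noisy is False   # instances without their own copy...
assert Factory().noisy is False  # ...all see the updated class attribute
```

The trade-off named in the answer applies here too: patching the class silences every user of it, which is why a dedicated quiet subclass is the more surgical option.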
How can I use Homebrew to install both Python 2 and 3 on Mac? | 18,671,253 | 63 | 2013-09-07T08:11:22Z | 18,671,336 | 115 | 2013-09-07T08:20:23Z | [
"python",
"homebrew"
] | I need to be able to switch back and forth between Python 2 and 3. How do I do that using Homebrew, as I don't want to mess with my PATH and get into trouble?
Right now I have 2.7 installed through Homebrew. | I would use [pyenv](https://github.com/yyuu/pyenv). You can install it with:
```
$ brew install pyenv
```
Once you have it, you can install different versions of python and choose which one you can use. Example:
```
$ pyenv install 2.7.5
```
You can check the versions you have installed with:
```
$ pyenv versions
```
A... |
How can I use Homebrew to install both Python 2 and 3 on Mac? | 18,671,253 | 63 | 2013-09-07T08:11:22Z | 26,609,911 | 29 | 2014-10-28T13:42:16Z | [
"python",
"homebrew"
] | I need to be able to switch back and forth between Python 2 and 3. How do I do that using Homebrew, as I don't want to mess with my PATH and get into trouble?
Right now I have 2.7 installed through Homebrew. | You can have both versions installed at the same time.
For Python 2.x:
```
brew install python
```
For Python 3.x:
```
brew install python3
```
Now, you will have both versions installed on your machine. When you want to use version 2, use the `python` executable. When you want to use version 3, use the `pytho...
ProcessPoolExecutor from concurrent.futures way slower than multiprocessing.Pool | 18,671,528 | 16 | 2013-09-07T08:45:17Z | 18,672,200 | 25 | 2013-09-07T10:09:37Z | [
"python",
"concurrency",
"multiprocessing",
"future",
"concurrent.futures"
] | I was experimenting with the new shiny [concurrent.futures](http://docs.python.org/3.3/library/concurrent.futures.html) module introduced in Python 3.2, and I've noticed that, almost with identical code, using the Pool from concurrent.futures is *way* slower than using [multiprocessing.Pool](http://docs.python.org/3.3/... | When using `map` from `concurrent.futures`, each element from the iterable [is submitted](http://hg.python.org/cpython/file/3.3/Lib/concurrent/futures/_base.py#l538) separately to the executor, which creates a `Future` object for each call. It then returns an iterator which yields the results returned by the futures. ... |
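Since Python 3.5, `ProcessPoolExecutor.map` also accepts a `chunksize` argument that batches submissions much like `multiprocessing.Pool.map` does. The batching idea itself can be shown without any process pool:

```python
from itertools import islice

def chunked(iterable, chunksize):
    """Yield lists of up to chunksize items -- the batching trick that
    multiprocessing.Pool.map uses to cut per-task IPC overhead."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, chunksize))
        if not chunk:
            return
        yield chunk

assert list(chunked(range(7), 3)) == [[0, 1, 2], [3, 4, 5], [6]]
```

Sending one pickled chunk of N items costs far less than N separate submissions, which is the gap the question observed.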
JSON - Generating a json in a loop in python | 18,673,952 | 2 | 2013-09-07T13:45:15Z | 18,673,979 | 8 | 2013-09-07T13:48:09Z | [
"python",
"json"
] | I have some difficulties generating a specific JSON object in python.
I need it to be in this format:
```
[
{"id":0 , "attributeName_1":"value" , "attributeName_2":"value" , .... },
{"id":1 , "attributeName_2":"value" , "attributeName_3":"value" , .... },
.
.
.
]
```
In Python, I'm getting the ids, att... | The problem is that you are appending to `data` multiple times in the loop: first `{"id":feature.pk}`, then `{attribute.attribute.name : attribute.value}` in the inner loop.
Instead, you need to define a dictionary inside the loop, fill it with `id` item and attributes and only then append:
```
data=[]
for feature in... |
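A Django-free sketch of the fixed loop, with plain dicts standing in for the ORM objects the answer works with:

```python
import json

# Illustrative stand-ins for feature.pk and the attribute objects
features = [{"pk": 0, "attrs": {"colour": "red"}},
            {"pk": 1, "attrs": {"size": "large"}}]

data = []
for feature in features:
    row = {"id": feature["pk"]}   # one fresh dict per iteration
    row.update(feature["attrs"])  # merge attributes into the same dict
    data.append(row)              # append once, after the row is complete

print(json.dumps(data))
```

The key change is building each row completely before appending, so every list element is one object with an `id` plus its attributes.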
how do I insert a column at a specific column index in pandas? | 18,674,064 | 42 | 2013-09-07T13:59:01Z | 18,674,915 | 75 | 2013-09-07T15:32:55Z | [
"python",
"indexing",
"pandas"
] | Can I insert a column at a specific column index in pandas?
```
import pandas as pd
df = pd.DataFrame({'l':['a','b','c','d'], 'v':[1,2,1,2]})
df['n'] = 0
```
This will put column `n` as the last column of `df`, but isn't there a way to tell `df` to put `n` at the beginning? | see docs: <http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion>
Using `idx = 0` will insert the column at the beginning:
```
df.insert(idx, col_name, value)
``` |
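A quick check of `DataFrame.insert` on the question's own frame (assuming pandas is available):

```python
import pandas as pd

df = pd.DataFrame({'l': ['a', 'b', 'c', 'd'], 'v': [1, 2, 1, 2]})
df.insert(0, 'n', 0)  # loc=0 makes 'n' the first column, in place

print(list(df.columns))  # ['n', 'l', 'v']
```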
Dynamically Generating SQL Queries with Python and SQLite3 | 18,674,389 | 2 | 2013-09-07T14:35:10Z | 18,674,464 | 7 | 2013-09-07T14:45:24Z | [
"python",
"sql",
"sqlite3"
] | Below is a generalisation of my problem:
Consider the table
```
ID A B C
r1 1 1 0 1
. . . . .
. . . . .
. . . . .
rN N 1 1 0
```
Where the columns `A,B,C` contain either `0` or `1`. I am trying to write a python function that takes a list of perm... | ```
def permCount(permList):
condition = ' OR '.join(['(A=? AND B=? AND C=?)'
for row in permList])
sql = "SELECT Count(*) FROM Table WHERE {c}".format(
c=condition)
args = sum(permList, [])
cursor.execute(sql, args)
```
Use [parametrized SQL](http://www.codinghorr... |
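A self-contained version of the answer's approach against an in-memory SQLite database; the table contents are invented for the demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (A INTEGER, B INTEGER, C INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 0, 1), (1, 1, 0), (0, 1, 1)])

def perm_count(perm_list):
    # One (A=? AND B=? AND C=?) group per requested permutation, OR-ed
    # together; the values travel separately as bind parameters.
    condition = " OR ".join(["(A=? AND B=? AND C=?)"] * len(perm_list))
    args = [v for row in perm_list for v in row]
    cur = conn.execute(f"SELECT COUNT(*) FROM t WHERE {condition}", args)
    return cur.fetchone()[0]

print(perm_count([[1, 0, 1], [0, 1, 1]]))  # 2
```

Only the column structure is interpolated into the SQL string; the data always goes through `?` placeholders, which is the point of the parametrized style.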
Load data from Python pickle file in a loop? | 18,675,863 | 4 | 2013-09-07T17:13:56Z | 18,675,864 | 7 | 2013-09-07T17:13:56Z | [
"python",
"pickle"
] | In a small data-acquisition project we use the Python's `pickle` to store recorded data, i.e. for each "event" we add it to the output file `f` with
```
pkl.dump(event, f, pkl.HIGHEST_PROTOCOL)
```
where `import cPickle as pkl`.
In the analysis of the data we read each event, but in contrast to a normal file where p... | Yes, indeed. Use this generator below to make the events readable in a loop:
```
def pickleLoader(pklFile):
try:
while True:
yield pkl.load(pklFile)
except EOFError:
pass
```
Now you can simply write:
```
with open(filename) as f:
for event in pickleLoader(f):
do_somet... |
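The generator can be exercised without a data file by pickling into an in-memory buffer:

```python
import io
import pickle as pkl

def pickle_loader(pkl_file):
    """Yield successive pickled objects until the file runs out."""
    try:
        while True:
            yield pkl.load(pkl_file)
    except EOFError:
        pass

buf = io.BytesIO()
for event in [1, "two", [3.0]]:
    pkl.dump(event, buf, pkl.HIGHEST_PROTOCOL)
buf.seek(0)

events = list(pickle_loader(buf))
print(events)  # [1, 'two', [3.0]]
```

With a real file the same generator works unchanged, as long as the file is opened in binary mode (`open(filename, 'rb')`).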
How to run Conda? | 18,675,907 | 27 | 2013-09-07T17:20:17Z | 18,675,970 | 30 | 2013-09-07T17:26:28Z | [
"python",
"anaconda",
"conda"
] | How do I run conda on a linux machine?
I installed Anaconda, and can run Python, so I assume that I installed it correctly.
Following this guide: <http://docs.continuum.io/conda/intro.html>, I am trying to install Python33, so I am copying and pasting the following line into my console:
```
conda create -n py33 py... | It turns out that I had not set the path.
To do so, I first had to edit .bash\_profile (I downloaded it to my local desktop to do that; I do not know how to edit a text file from Linux).
Then add this to .bash\_profile:
```
PATH=$PATH:$HOME/anaconda/bin
``` |
How to run Conda? | 18,675,907 | 27 | 2013-09-07T17:20:17Z | 26,525,045 | 13 | 2014-10-23T09:25:53Z | [
"python",
"anaconda",
"conda"
] | How do I run conda on a linux machine?
I installed Anaconda, and can run Python, so I assume that I installed it correctly.
Following this guide: <http://docs.continuum.io/conda/intro.html>, I am trying to install Python33, so I am copying and pasting the following line into my console:
```
conda create -n py33 py... | If you have installed Anaconda but are not able to load the correct versions of python and ipython, or if you see conda: command not found when trying to use conda, this may be an issue with your PATH environment variable. At the prompt, type:
```
export PATH=~/anaconda/bin:$PATH
```
[For this example](http://docs.co... |
How to run Conda? | 18,675,907 | 27 | 2013-09-07T17:20:17Z | 35,717,840 | 27 | 2016-03-01T08:22:30Z | [
"python",
"anaconda",
"conda"
] | How do I run conda on a linux machine?
I installed Anaconda, and can run Python, so I assume that I installed it correctly.
Following this guide: <http://docs.continuum.io/conda/intro.html>, I am trying to install Python33, so I am copying and pasting the following line into my console:
```
conda create -n py33 py... | With the most recent version of anaconda, you might want to try this:
```
export PATH=~/anaconda2/bin:$PATH
```
and then
```
conda --version
```
to confirm that it worked. |
matplotlib.pyplot - fix only one axis limit, set other to auto | 18,676,022 | 9 | 2013-09-07T17:31:54Z | 18,676,882 | 17 | 2013-09-07T19:11:13Z | [
"python",
"numpy",
"matplotlib"
] | The following `MWE` produces a simple scatter plot:
```
import numpy as np
import matplotlib.pyplot as plt
# Generate some random two-dimensional data:
m1 = np.random.normal(size=100)
m2 = np.random.normal(scale=0.5, size=100)
# Plot data with 1.0 max limit in y.
plt.figure()
# Set x axis limit.
plt.xlim(0., 1.0)
# ... | If the concern is simply that a lot of data is being plotted, why not retrieve the plot's lower y-limit and use that when setting the limits?
```
plt.ylim(plt.ylim()[0], 1.0)
```
Or analogously for a particular axis. A bit ugly, but I see no reason why it shouldn't work.
---
The issue actually resides in the fact t... |
How to properly use the "choices" field option in Django | 18,676,156 | 5 | 2013-09-07T17:47:20Z | 18,676,319 | 7 | 2013-09-07T18:03:53Z | [
"python",
"django",
"django-models"
] | I'm reading the tutorial here: <https://docs.djangoproject.com/en/1.5/ref/models/fields/#choices>
and I'm trying to create a box where the user can select the month he was born in. What I tried was
```
MONTH_CHOICES = (
(JANUARY, "January"),
(FEBRUARY, "February"),
(MARCH, "March"),
....
(DECEMBER... | According to the [documentation](https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.Field.choices):
> Field.choices
>
> An iterable (e.g., a list or tuple) consisting itself of
> iterables of exactly two items (e.g. [(A, B), (A, B) ...]) to use as
> choices for this field. If this is given, the ... |
How to properly use the "choices" field option in Django | 18,676,156 | 5 | 2013-09-07T17:47:20Z | 32,657,683 | 15 | 2015-09-18T17:19:32Z | [
"python",
"django",
"django-models"
] | I'm reading the tutorial here: <https://docs.djangoproject.com/en/1.5/ref/models/fields/#choices>
and I'm trying to create a box where the user can select the month he was born in. What I tried was
```
MONTH_CHOICES = (
(JANUARY, "January"),
(FEBRUARY, "February"),
(MARCH, "March"),
....
(DECEMBER... | I think no one has actually answered the first question:
> Why did they create those variables?
Those variables aren't strictly necessary. It's true. You can perfectly do something like this:
```
MONTH_CHOICES = (
("JANUARY", "January"),
("FEBRUARY", "February"),
("MARCH", "March"),
# ....
("D... |
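The `(database_value, human_label)` convention can be demonstrated without Django; this toy version only shows how the pairs are used, not how model fields consume them:

```python
MONTH_CHOICES = (
    ("JANUARY", "January"),
    ("FEBRUARY", "February"),
    ("MARCH", "March"),
)

# First element of each pair is what gets stored in the database;
# the second is what users see in forms and in get_FOO_display().
stored = [db_value for db_value, label in MONTH_CHOICES]
labels = dict(MONTH_CHOICES)

assert stored[0] == "JANUARY"
assert labels["MARCH"] == "March"
```

Module-level constants like `JANUARY = "JANUARY"` are merely a convenience so the stored values can be referenced by name elsewhere in the code.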
How to configure color when convert numpy array to QImage? | 18,676,888 | 2 | 2013-09-07T19:11:54Z | 18,681,103 | 7 | 2013-09-08T06:29:24Z | [
"python",
"qt",
"opencv",
"numpy",
"pyqt"
] | The program is based on pyqt and opencv. I plan to read and process image with opencv while using QT as GUI.
When I open a gray image, the result is OK, but the colors change when opening a color image. I guess it's because I made a mistake when converting the numpy array to QImage, but I can't figure ou... | You need to convert the image data from BGR to RGB. You also need to swap width and height (see below) -- your code only works for images with the same width and height.
```
self.cv_img = cv2.imread(cvfilename)
if self.cv_img is not None:  # '!= None' is ambiguous for numpy arrays
# Notice the dimensions.
height, width, bytesPerComponent = cv_img.shape
by... |
Grouping daily data by month in python/pandas and then normalising | 18,677,271 | 9 | 2013-09-07T19:50:29Z | 18,677,517 | 25 | 2013-09-07T20:16:13Z | [
"python",
"pandas"
] | Hi I have the table below in a Pandas dataframe:
```
q_string q_visits q_date
0 nucleus 1790 2012-10-02 00:00:00
1 neuron 364 2012-10-02 00:00:00
2 current 280 2012-10-02 00:00:00
3 molecular 259 2012-10-02 00:00:00
4 stem 201... | If I understand you correctly:
For (1) do this:
Make some fake data by sampling from the values you gave and some random dates and # of visits:
```
In [179]: string = Series(np.random.choice(df.string.values, size=100), name='string')
In [180]: visits = Series(poisson(1000, size=100), name='date')
In [181]: date =... |
Add an offset to part of a list in Python? | 18,680,784 | 2 | 2013-09-08T05:40:09Z | 18,680,801 | 8 | 2013-09-08T05:42:32Z | [
"python",
"arrays",
"list",
"numpy",
"scipy"
] | I have a list `a = [1, 2, 3, 4, 5]`, and now I wish to add a `1` to every element from `index 2` onwards, i.e. `a[2] + 1`, `a[3] + 1`, `a[4] + 1`.
That is I want `a = [1, 2, 4, 5, 6]` in the end.
What is **the most Pythonic way** of doing so? | ```
>>> a = [1, 2, 3, 4, 5]
>>> a[2:] = [x+1 for x in a[2:]]
>>> a
[1, 2, 4, 5, 6]
```
For numpy array:
```
>>> a = np.array([1,2,3,4,5])
>>> a[2:] += 1
>>> a
array([1, 2, 4, 5, 6])
``` |
Python - Remove Last Line From String | 18,682,965 | 5 | 2013-09-08T11:07:44Z | 18,683,105 | 8 | 2013-09-08T11:25:28Z | [
"python",
"telnetlib"
] | I'm trying to capture and manipulate data within a Telnet session using telnetlib, things are going fairly well, however my newbness with Python is causing me some headache.
My issue is pretty straightforward: I am able to capture and display the data I want (so far), however I just seem to be cycling through errors w... | You can remove the last line of a string like this:
```
def remove_last_line_from_string(s):
return s[:s.rfind('\n')]
string = "String with\nsome\nnewlines to answer\non StackOverflow.\nThis line gets removed"
print string
string = remove_last_line_from_string(string)
print '\n\n'+string
```
The output will be:... |
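A condensed, testable version of the answer. One caveat the answer doesn't mention: when the string contains no newline at all, `rfind` returns -1 and the slice drops the last character; `s.rsplit('\n', 1)[0]` avoids that edge case.

```python
def remove_last_line(s):
    # rfind gives the index of the last '\n'; slicing up to it drops
    # that newline and everything after it.
    return s[: s.rfind("\n")]

text = "String with\nsome\nnewlines.\nThis line gets removed"
print(remove_last_line(text))
```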
Docker, Supervisord and logging - how to consolidate logs in docker logs? | 18,683,810 | 10 | 2013-09-08T12:44:20Z | 21,371,113 | 26 | 2014-01-27T00:17:33Z | [
"python",
"django",
"logging",
"supervisord",
"docker"
] | So, experimenting with Docker + Supervisord + Django app via uWSGI. I have the whole stack working fine, but need to tidy up the logging.
If I launch supervisor in non-daemon mode,
```
/usr/bin/supervisord -n
```
Then I get the logging output for supervisor played into the docker logs stdout. However, if supervisord... | I accomplished this using supervisor-stdout.
Install [supervisor-stdout](https://pypi.python.org/pypi/supervisor-stdout) in your Docker image:
```
RUN apt-get install -y python-pip && pip install supervisor-stdout
```
### Supervisord Configuration
Edit your `supervisord.conf` look like so:
```
[program:myprogram]
command=/what/eve... |
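For reference, the complete shape of such a configuration (as described in the supervisor-stdout documentation; the program name and command are placeholders) looks roughly like:

```ini
[program:myprogram]
command = /what/ever/command
stdout_events_enabled = true
stderr_events_enabled = true

[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
```

The event listener receives every `PROCESS_LOG` event and echoes it to supervisord's own stdout, which is what `docker logs` captures.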
Docker, Supervisord and logging - how to consolidate logs in docker logs? | 18,683,810 | 10 | 2013-09-08T12:44:20Z | 28,644,680 | 8 | 2015-02-21T10:25:29Z | [
"python",
"django",
"logging",
"supervisord",
"docker"
] | So, experimenting with Docker + Supervisord + Django app via uWSGI. I have the whole stack working fine, but need to tidy up the logging.
If I launch supervisor in non-daemon mode,
```
/usr/bin/supervisord -n
```
Then I get the logging output for supervisor played into the docker logs stdout. However, if supervisord... | A Docker container is like a Kleenex: you use it, then you drop it. To stay "alive", Docker needs something running in the foreground (whereas daemons run in the background); that's why you are using Supervisord.
So you need to "redirect/add/merge" the process output (access and error) to the Supervisord output you see when running your c...
Generating random correlated x and y points using Numpy | 18,683,821 | 5 | 2013-09-08T12:44:58Z | 18,684,433 | 7 | 2013-09-08T13:49:49Z | [
"python",
"random",
"numpy",
"correlation",
"normal-distribution"
] | I'd like to generate correlated arrays of x and y coordinates, in order to test various matplotlib plotting approaches, but I'm failing somewhere, because I can't get `numpy.random.multivariate_normal` to give me the samples I want. Ideally, I want my x values between -0.51, and 51.2, and my y values between 0.33 and 5... | As the name implies `numpy.random.multivariate_normal` generates normal distributions, this means that there is a non-null probability of finding points outside of any given interval. You can generate correlated uniform distributions but this a little more convoluted. Take a look [here](http://www.acooke.org/random.pdf... |
How to create a list of date string in 'yyyymmdd' format with Python Pandas? | 18,684,076 | 4 | 2013-09-08T13:14:31Z | 18,684,173 | 11 | 2013-09-08T13:24:07Z | [
"python",
"pandas",
"datetime-format"
] | I want a list of dates in which each element is a `'yyyymmdd'`-format string, such as: `['20130226','20130227','20130228','20130301','20130302']`.
I can use pandas to do so:
```
>>> pandas.date_range('20130226','20130302')
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-02-26 00:00:00, ..., 2013-03-02 00:00:0... | Using `format`:
```
>>> r = pandas.date_range('20130226','20130302')
>>> r.format(formatter=lambda x: x.strftime('%Y%m%d'))
['20130226', '20130227', '20130228', '20130301', '20130302']
```
or using `map`:
```
>>> r.map(lambda x: x.strftime('%Y%m%d'))
array(['20130226', '20130227', '20130228', '20130301', '20130302']... |
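The same list can be produced with only the standard library, in case pandas is overkill for the task:

```python
from datetime import date, timedelta

def date_strings(start, end):
    """Inclusive list of yyyymmdd strings, stdlib-only."""
    days = (end - start).days
    return [(start + timedelta(n)).strftime("%Y%m%d")
            for n in range(days + 1)]

print(date_strings(date(2013, 2, 26), date(2013, 3, 2)))
# ['20130226', '20130227', '20130228', '20130301', '20130302']
```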
How to create datetime object from "16SEP2012" in python | 18,684,397 | 7 | 2013-09-08T13:46:11Z | 18,684,426 | 20 | 2013-09-08T13:48:55Z | [
"python",
"string",
"datetime"
] | I can create datetime objects in python this way:
```
import datetime
new_date= datetime.datetime(2012,09,16)
```
How can I create same `datetime` object from a string in this format: `"16SEP2012"` ? | Use [`datetime.datetime.strptime`](http://docs.python.org/2/library/datetime.html#datetime.datetime.strptime):
```
>>> datetime.datetime.strptime('16Sep2012', '%d%b%Y')
datetime.datetime(2012, 9, 16, 0, 0)
``` |
Pep8 E501: line too long error | 18,685,184 | 4 | 2013-09-08T15:03:42Z | 18,685,279 | 9 | 2013-09-08T15:13:28Z | [
"python",
"twitter",
"pep8"
] | I get the error `E501: line too long` from this code:
```
header, response = client.request('https://api.twitter.com/1.1/statuses /user_timeline.json?include_entities=true&screen_name='+username+'&count=1')
```
but if I write this way or another way:
```
header, response = client.request('\
https://api.... | The whitespaces at the beginning of the lines become part of your string if you break it like this.
Try this:
```
header, response = client.request(
'https://api.twitter.com/1.1/statuses/user_timeline.'
'json?include_entities=true&screen_name=' + username + '&count=1')
```
The strings will [automatically be co... |
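The mechanism is implicit concatenation of adjacent string literals, which happens at compile time and introduces no whitespace; a minimal check (the screen name is a placeholder):

```python
# Adjacent literals are fused into one string before the code runs,
# so a long URL can be split across lines without breaking it.
url = ('https://api.twitter.com/1.1/statuses/user_timeline.'
       'json?include_entities=true&screen_name=example&count=1')

assert ' ' not in url
assert 'user_timeline.json' in url
```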
How to export Django model data into CSV file | 18,685,223 | 2 | 2013-09-08T15:08:02Z | 18,689,442 | 7 | 2013-09-08T22:51:49Z | [
"python",
"django",
"csv"
] | I want to export all my model data into a CSV file:
models.py
```
import ast
import uuid
import base64
from django.db import models
from django.contrib import admin
from qlu.settings import HOST_NAME,STATS_URI
from django.core.validators import URLValidator
#------------------------------------------------------------... | I usually prefer an action for this in the admin. This is the [snippet](http://djangosnippets.org/snippets/2690/):
```
def download_csv(modeladmin, request, queryset):
if not request.user.is_staff:
raise PermissionDenied
opts = queryset.model._meta
model = queryset.model
response = HttpResponse... |
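Under the hood such an admin action just feeds rows to Python's `csv` module; a Django-free sketch of the row-writing part, with invented data:

```python
import csv
import io

rows = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

buf = io.StringIO()  # in the admin action this would be the HttpResponse
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(rows)

print(buf.getvalue())
```

In the real action, `rows` comes from the queryset and `fieldnames` from `queryset.model._meta`, as the snippet above shows.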
scrapy User timeout caused connection failure | 18,686,251 | 5 | 2013-09-08T16:54:23Z | 18,691,163 | 7 | 2013-09-09T03:34:54Z | [
"python",
"scrapy"
] | I am using scrapy to download images but got a timeout error:
```
Retrying <GET http://www/***.jpg> (failed 1 times): User timeout caused connection failure
```
However, I am able to download the image with wget **instantly**. DOWNLOAD\_TIMEOUT (scrapy parameter) is set to default 180 sec, so this should not be the roo... | If you are scraping multiple images (especially from multiple domains), then downloads will happen concurrently and each download may take longer when compared to downloading a single image from the command line. Try decreasing the [CONCURRENT\_REQUESTS](http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-r... |
Reverse a string in python without using reversed or [::-1] | 18,686,860 | 18 | 2013-09-08T17:56:45Z | 18,686,882 | 40 | 2013-09-08T17:59:05Z | [
"python",
"string",
"function",
"for-loop",
"reverse"
] | I came across a strange Codecademy exercise that required a function that would take a string as input and return it in reverse order. The only problem was you could not use the reversed method or the common answer here on stackoverflow, `[::-1]`.
Obviously in the real world of programming, one would most likely go wi... | You can also do it with recursion:
```
def reverse(text):
if len(text) <= 1:
return text
return reverse(text[1:]) + text[0]
```
And a simple example for the string `hello`:
```
reverse(hello)
= reverse(ello) + h # The recursive step
= reverse(llo) + e + h
= reverse(lo) + l + e + h
=... |
Reverse a string in python without using reversed or [::-1] | 18,686,860 | 18 | 2013-09-08T17:56:45Z | 18,686,885 | 8 | 2013-09-08T17:59:27Z | [
"python",
"string",
"function",
"for-loop",
"reverse"
] | I came across a strange Codecademy exercise that required a function that would take a string as input and return it in reverse order. The only problem was you could not use the reversed method or the common answer here on stackoverflow, `[::-1]`.
Obviously in the real world of programming, one would most likely go wi... | Use reversed `range`:
```
def reverse(strs):
for i in xrange(len(strs)-1, -1, -1):
yield strs[i]
...
>>> ''.join(reverse('hello'))
'olleh'
```
`xrange` or `range` with -1 step would return items in reversed order, so we need to iterate from `len(string)-1` to `-1`(exclusive) and fetch items from ... |
Reverse a string in python without using reversed or [::-1] | 18,686,860 | 18 | 2013-09-08T17:56:45Z | 18,686,993 | 12 | 2013-09-08T18:08:54Z | [
"python",
"string",
"function",
"for-loop",
"reverse"
] | I came across a strange Codecademy exercise that required a function that would take a string as input and return it in reverse order. The only problem was you could not use the reversed method or the common answer here on stackoverflow, `[::-1]`.
Obviously in the real world of programming, one would most likely go wi... | Just another option:
```
from collections import deque
def reverse(iterable):
d = deque()
d.extendleft(iterable)
return ''.join(d)
``` |
dict.keys()[0] on Python 3 | 18,686,903 | 10 | 2013-09-08T18:00:52Z | 18,686,943 | 20 | 2013-09-08T18:04:07Z | [
"python",
"python-3.x",
"dictionary"
] | I have this statement:
```
def Ciudad(prob):
numero = random.random()
ciudad = prob.keys()[0]
for i in prob.keys():
if(numero > prob[i]):
if(prob[i] > prob[ciudad]):
ciudad = i
else:
if(prob[i] > prob[ciudad]):
ciudad = i
return ci... | `dict.keys()` is a dictionary view. Just use `list()` directly on the dictionary instead if you need a list of keys; item 0 will be the first key in the (arbitrary) dictionary order:
```
list(prob)[0]
```
or better still just use:
```
next(iter(prob))
```
Either method works in both Python 2 *and* 3 and the `next()... |
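A quick demonstration of both forms (the "first" key shown here relies on insertion order, which is only guaranteed from Python 3.7 on; in older versions the order is arbitrary but the calls still work):

```python
prob = {"a": 0.2, "b": 0.5, "c": 0.3}

first_key = next(iter(prob))  # no intermediate list is built
assert first_key == list(prob)[0]
assert first_key == "a"       # insertion order, CPython 3.7+
```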
Why is there different behavior with short-hand assignment and NaN? | 18,688,178 | 4 | 2013-09-08T20:09:56Z | 18,688,308 | 7 | 2013-09-08T20:23:56Z | [
"python",
"numpy"
] | I see this in python 2.7.3, with both pylab and numpy. Why is this:
```
>>> x = pylab.arange(5)
>>> x = x + pylab.nan
>>> print x
[ nan nan nan nan nan]
```
different than this:
```
>>> x = pylab.arange(5)
>>> x += pylab.nan
__main__:1: RuntimeWarning: invalid value encountered in add
>>> print x
[-9223372036854... | It's because `arange(5)` returns an array of integers, but `nan` is a float value. When you use regular assignment, this is okay, because `x + nan` transparently converts `x` to float to do the addition and returns a float result. But with `+=`, it tries to put this float result back into the original `x`, which is an ...
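The dtype behaviour is easy to verify (assuming NumPy is available):

```python
import numpy as np

x = np.arange(5)   # integer dtype
y = x + np.nan     # ordinary assignment: a brand-new float array
assert y.dtype.kind == 'f'
assert np.isnan(y).all()

# In-place += must write floats back into the integer buffer, which
# cannot represent NaN -- hence the RuntimeWarning and garbage values.
xf = np.arange(5).astype(float)
xf += np.nan       # safe once the buffer itself holds floats
assert np.isnan(xf).all()
```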
Correct use of $ne or $not in pymongo (unsupported projection option) | 18,688,297 | 2 | 2013-09-08T20:22:32Z | 18,689,920 | 7 | 2013-09-09T00:09:40Z | [
"python",
"mongodb",
"pymongo"
] | I would like to code the following query to Mongo:
"Get all rows where the field equals var1 but/and not var2"
So I have this:
```
db["mydb"].find({"field":var1},{"field":{"$ne":query2}})
```
But it yields the error that $ne is an "unsupported projection option"
I have been googling but I can't find something similar... | You can use the `$and` operator to combine requirements like this:
```
db["mydb"].find({"$and": [{"field": var1}, {"field": {"$ne": var2}}]})
``` |
What's the efficiency difference between these (almost identical) conditions | 18,688,617 | 5 | 2013-09-08T21:01:28Z | 18,688,670 | 13 | 2013-09-08T21:08:15Z | [
"python"
] | I was given a challenge by a friend to build an efficient Fibonacci function in python. So I started testing around different ways of doing the recursion (I do not have high math skills to think of a complex algorithm, and please **do not show me an efficient Fibonacci function**, that is not the question).
Then I tri... | `(not n > 1)` is `(n <= 1)`, re-run the second code with `<=` and you will see that you get similar timings:
```
In [1]: def fibo(n):
....: if n <= 1:
....: return 1
....: return fibo(n-1)+fibo(n-2)
....:
In [2]: %timeit map(fibo, range(10))
10000 loops, best of 3: 29.2 us per loop
In [3]: ... |
What's the efficiency difference between these (almost identical) conditions | 18,688,617 | 5 | 2013-09-08T21:01:28Z | 18,688,676 | 8 | 2013-09-08T21:09:21Z | [
"python"
] | I was given a challenge by a friend to build an efficient Fibonacci function in python. So I started testing around different ways of doing the recursion (I do not have high math skills to think of a complex algorithm, and please **do not show me an efficient Fibonacci function**, that is not the question).
Then I tri... | > Aren't those essentially the same?
The second function is calculating a higher Fibonacci number, so naturally it takes longer:
```
>>> def fibo(n):
... if n > 1:
... return fibo(n-1)+fibo(n-2)
... return 1
...
>>> fibo(10)
89
>>> def fibo(n):
... if n < 1:
... return 1
... return fi... |
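The answer's point, made concrete: the `<` variant reaches its base case one step later, so it computes the next Fibonacci number and does correspondingly more recursive work:

```python
def fibo_le(n):
    if n <= 1:          # same as the original "if not n > 1"
        return 1
    return fibo_le(n - 1) + fibo_le(n - 2)

def fibo_lt(n):
    if n < 1:           # base case reached one step later...
        return 1
    return fibo_lt(n - 1) + fibo_lt(n - 2)

assert fibo_le(10) == 89
assert fibo_lt(10) == 144   # ...so this is actually fibo_le(11)
```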
numpy array: replace nan values with average of columns | 18,689,235 | 10 | 2013-09-08T22:24:58Z | 18,689,440 | 19 | 2013-09-08T22:51:06Z | [
"python",
"arrays",
"numpy",
null
] | I've got a numpy array filled mostly with real numbers, but there are a few `nan` values in it as well.
How can I replace the `nan`s with averages of columns where they are? | No loops required:
```
import scipy.stats as stats
print a
[[ 0.93230948 nan 0.47773439 0.76998063]
[ 0.94460779 0.87882456 0.79615838 0.56282885]
[ 0.94272934 0.48615268 0.06196785 nan]
[ 0.64940216 0.74414127 nan nan]]
#Obtain mean of columns as you need, nanmean is just ... |
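The full recipe can be sketched with `np.nanmean` (NumPy 1.8+), which in modern NumPy provides the column-mean-ignoring-NaNs step that the answer takes from `scipy.stats`:

```python
import numpy as np

a = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan],
              [7.0, 8.0, 9.0]])

col_means = np.nanmean(a, axis=0)      # per-column mean, NaNs ignored
inds = np.where(np.isnan(a))           # (row_idx, col_idx) of the NaNs
a[inds] = np.take(col_means, inds[1])  # fill each NaN with its column mean

print(a[0, 1])  # 6.5, the mean of [5.0, 8.0]
```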
How is this sorting code working? | 18,689,293 | 6 | 2013-09-08T22:33:09Z | 18,689,327 | 7 | 2013-09-08T22:37:27Z | [
"python"
] | How is this sorting code working? I cannot understand how the values returned by the iterator are being used to sort the list.
```
mylist=["zero","two","one"]
list1=[3,1,2]
it = iter(list1)
sorted(mylist, key=lambda x: next(it))
```
Output:
```
['two', 'one', 'zero']
``` | It works like this - the `key=lambda x: next(it)` part is stating: assign an order value of `3`, then `1` then `2` to each of the elements in `mylist`. So `two` comes first, then `one` then `zero`:
```
["zero", "two", "one"] # original list
[ 3, 1, 2 ] # assign this order to each element
```
Now, after sor... |
group multi-index pandas dataframe | 18,689,474 | 7 | 2013-09-08T22:57:55Z | 18,689,514 | 11 | 2013-09-08T23:05:39Z | [
"python",
"pandas"
] | Is it possible to group a multi-index (2 levels) pandas dataframe by one of the multi-index levels?
The only way I know of doing it is to reset\_index on a multiindex and then set index again. I am sure there is a better way to do it, and I want to know how. | Yes, use the `level` parameter. Take a look [here](http://pandas.pydata.org/pandas-docs/dev/groupby.html#groupby-with-multiindex). Example:
```
In [26]: s
first second third
bar doo one 0.404705
two 0.577046
baz bee one -1.715002
two -1.039268
foo bop... |
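A self-contained sketch of the `level` keyword with invented numbers (same index shape as the docs excerpt above):

```python
import pandas as pd

# a two-level MultiIndex Series, values chosen for illustration
idx = pd.MultiIndex.from_tuples(
    [('bar', 'one'), ('bar', 'two'), ('baz', 'one'), ('baz', 'two')],
    names=['first', 'second'])
s = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)

# group directly on an index level -- no reset_index/set_index dance
by_second = s.groupby(level='second').sum()
```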
Efficiently checking if arbitrary object is NaN in Python / numpy / pandas? | 18,689,512 | 44 | 2013-09-08T23:05:11Z | 18,689,589 | 8 | 2013-09-08T23:15:16Z | [
"python",
"numpy",
"pandas"
] | My numpy arrays use `np.nan` to designate missing values. As I iterate over the data set, I need to detect such missing values and handle them in special ways.
Naively I used `numpy.isnan(val)`, which works well unless `val` isn't among the subset of types supported by `numpy.isnan()`. For example, missing data can oc... | Is your type really arbitrary? If you know it is just going to be a int float or string you could just do
```
if val.dtype == float and np.isnan(val):
```
assuming it is wrapped in numpy , it will always have a dtype and only float and complex can be NaN |
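A related idiom, not from the answer above but useful in the same situation: NaN is the only common value that compares unequal to itself, so `val != val` detects it regardless of whether `val` is a plain float or a NumPy scalar (objects with unusual `__ne__` implementations excepted):

```python
import numpy as np

def is_nan(val):
    # NaN != NaN is True; every other ordinary value equals itself
    return val != val

assert is_nan(float('nan'))
assert is_nan(np.nan)
assert not is_nan('apple')
assert not is_nan(3)
```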
Efficiently checking if arbitrary object is NaN in Python / numpy / pandas? | 18,689,512 | 44 | 2013-09-08T23:05:11Z | 18,689,712 | 78 | 2013-09-08T23:33:44Z | [
"python",
"numpy",
"pandas"
] | My numpy arrays use `np.nan` to designate missing values. As I iterate over the data set, I need to detect such missing values and handle them in special ways.
Naively I used `numpy.isnan(val)`, which works well unless `val` isn't among the subset of types supported by `numpy.isnan()`. For example, missing data can oc... | `pandas.isnull()` checks for missing values in both numeric and string/object arrays. From the documentation, it checks for:
> NaN in numeric arrays, None/NaN in object arrays
Quick example:
```
import pandas as pd
import numpy as np
s = pd.Series(['apple', np.nan, 'banana'])
pd.isnull(s)
Out[9]:
0 False
1 T... |
pandas DataFrame: replace nan values with average of columns | 18,689,823 | 14 | 2013-09-08T23:54:05Z | 18,691,949 | 33 | 2013-09-09T05:27:50Z | [
"python",
"pandas",
null
] | I've got a pandas DataFrame filled mostly with real numbers, but there are a few `nan` values in it as well.
How can I replace the `nan`s with averages of columns where they are?
This question is very similar to this one: [numpy array: replace nan values with average of columns](http://stackoverflow.com/questions/1868... | You can simply use [`DataFrame.fillna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html#pandas.DataFrame.fillna) to fill the `nan`'s directly:
```
In [27]: df
Out[27]:
A B C
0 -0.166919 0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540... |
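A minimal sketch of the `fillna` call with invented data; `df.mean()` skips NaNs by default, so each hole gets the mean of its own column:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [np.nan, 5.0, 7.0]})

filled = df.fillna(df.mean())   # column-wise means: A -> 2.0, B -> 6.0
```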
What does -1 mean in numpy reshape? | 18,691,084 | 21 | 2013-09-09T03:25:02Z | 18,691,098 | 19 | 2013-09-09T03:27:07Z | [
"python",
"numpy"
] | A numpy matrix can be reshaped into a vector using reshape function with parameter -1. But I don't know what -1 means here.
For example:
```
a = numpy.matrix([[1, 2, 3, 4], [5, 6, 7, 8]])
b = numpy.reshape(a, -1)
```
The result of `b` is: `matrix([[1, 2, 3, 4, 5, 6, 7, 8]])`
Does anyone know what -1 means here?
And... | According to [`the documentation`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html#numpy-reshape):
> newshape : int or tuple of ints
>
> The new shape should be compatible with the original shape. If an
> integer, then the result will be a 1-D array of that length. One shape
> dimension can be *... |
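A few concrete cases of the inferred dimension (a sketch; any array of 12 elements behaves the same way):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

flat = a.reshape(-1)          # 1-D, length inferred as 12
two_rows = a.reshape(2, -1)   # second dim inferred as 12 / 2 = 6
three_cols = a.reshape(-1, 3) # first dim inferred as 12 / 3 = 4
```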
Python Logic - Simplify/more efficient multiple if elif statements | 18,692,666 | 5 | 2013-09-09T06:35:33Z | 18,692,721 | 8 | 2013-09-09T06:39:41Z | [
"python",
"if-statement"
] | [I'm taking an Intro to Python course online](http://cscircles.cemc.uwaterloo.ca/9-else-and-or-not/) and would like to solve this problem more efficiently.
> The words 1st, 2nd, 3rd, 4th, 5th, 6th, 7th, 8th, 9th are called ordinal adjectives. Write a program which reads an integer x between 1 and 9 from input. The pro... | ```
x = int(input())
if x == 1:
print("1st")
elif x == 2:
print("2nd")
elif x == 3:
print("3rd")
else:
print(str(x)+"th")
``` |
Python Logic - Simplify/more efficient multiple if elif statements | 18,692,666 | 5 | 2013-09-09T06:35:33Z | 18,692,723 | 7 | 2013-09-09T06:39:50Z | [
"python",
"if-statement"
] | [I'm taking an Intro to Python course online](http://cscircles.cemc.uwaterloo.ca/9-else-and-or-not/) and would like to solve this problem more efficiently.
> The words 1st, 2nd, 3rd, 4th, 5th, 6th, 7th, 8th, 9th are called ordinal adjectives. Write a program which reads an integer x between 1 and 9 from input. The pro... | Just use string formatting. And account for different suffixes between 1, 2, 3, and the rest of the numbers
```
x = int(raw_input())
if x == 1:
suffix = "st"
elif x == 2:
suffix = "nd"
elif x == 3:
suffix = "rd"
else:
suffix = "th"
print "{number}{suffix}".format(number=x,suffix=suffix)
```
You could ... |
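One way the truncated "You could ..." might continue is a lookup table with a default. A hedged sketch for the 1-9 range the exercise guarantees (the general case would also need special handling for 11th-13th):

```python
def ordinal(x):
    # only valid for 1..9, as the exercise guarantees
    suffixes = {1: 'st', 2: 'nd', 3: 'rd'}
    return '{0}{1}'.format(x, suffixes.get(x, 'th'))
```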
Cutting string after x chars at whitespace in python | 18,693,641 | 12 | 2013-09-09T07:47:23Z | 18,693,671 | 17 | 2013-09-09T07:49:08Z | [
"python"
] | I want to cut a long text after x characters, but I don't want to cut a word in the middle, I want to cut at the last whitespace before x chars:
```
'This is a sample text'[:20]
```
gives me
```
'This is a sample tex'
```
but I want
```
'This is a sample'
```
Another example:
```
'And another sample sentence'[:1... | ```
import textwrap
lines = textwrap.wrap(text, 20)
# then use either
lines[0]
# or
'\n'.join(lines)
``` |
Test if string ONLY contains given characters | 18,694,971 | 5 | 2013-09-09T09:12:26Z | 18,695,080 | 8 | 2013-09-09T09:19:09Z | [
"python",
"string"
] | What's the easiest way to check if a string only contains certain specified characters in Python? (Without using RegEx or anything, of course)
Specifically, I have a list of strings, and I want to filter out all of them except the words that are ONLY made up of ANY of the letters in another string. For example, filteri... | You can make use of [sets](http://docs.python.org/2/library/sets.html#set-objects):
```
>>> l = ['aba', 'acba', 'caz']
>>> s = set('abc')
>>> [item for item in l if not set(item).difference(s)]
['aba', 'acba']
``` |
Test if string ONLY contains given characters | 18,694,971 | 5 | 2013-09-09T09:12:26Z | 18,695,150 | 11 | 2013-09-09T09:22:51Z | [
"python",
"string"
] | What's the easiest way to check if a string only contains certain specified characters in Python? (Without using RegEx or anything, of course)
Specifically, I have a list of strings, and I want to filter out all of them except the words that are ONLY made up of ANY of the letters in another string. For example, filteri... | Assuming the discrepancy in your example is a typo, then this should work:
```
my_list = ['aba', 'acba', 'caz']
result = [s for s in my_list if not s.strip('abc')]
```
results in `['aba', 'acba']`. [string.strip(characters)](https://docs.python.org/2/library/stdtypes.html#str.strip) will return an empty string if the... |
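Yet another set-based spelling of the same test, using subset comparison (`<=` on sets):

```python
words = ['aba', 'acba', 'caz']
allowed = set('abc')

# set(w) <= allowed is True only when every character of w is allowed
kept = [w for w in words if set(w) <= allowed]
```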
python pandas dataframe to dictionary | 18,695,605 | 27 | 2013-09-09T09:49:56Z | 18,695,700 | 42 | 2013-09-09T09:55:13Z | [
"python",
"dictionary",
"pandas"
] | I have a two-column dataframe and intend to convert it to a Python dictionary - the first column will be the key and the second the value. Thank you in advance.
Dataframe:
```
id value
0 0 10.2
1 1 5.7
2 2 7.4
``` | See the docs for [`to_dict`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html). You can use it like this:
```
df.set_index('id').to_dict()
```
And if you have only one column, to avoid the column name is also a level in the dict (actually, in this case you use the `Series.to_dict()`... |
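A sketch of the single-column case the truncated sentence is heading toward: select the one remaining column as a `Series` and call its `to_dict()` (data taken from the question):

```python
import pandas as pd

df = pd.DataFrame({'id': [0, 1, 2], 'value': [10.2, 5.7, 7.4]})

# Series.to_dict() gives the flat {id: value} mapping directly,
# without the column name appearing as an extra dict level
mapping = df.set_index('id')['value'].to_dict()
```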
python pandas dataframe to dictionary | 18,695,605 | 27 | 2013-09-09T09:49:56Z | 24,368,660 | 10 | 2014-06-23T14:35:48Z | [
"python",
"dictionary",
"pandas"
] | I have a two-column dataframe and intend to convert it to a Python dictionary - the first column will be the key and the second the value. Thank you in advance.
Dataframe:
```
id value
0 0 10.2
1 1 5.7
2 2 7.4
``` | The answers by joris in this thread and by punchagan in the [duplicated thread](http://stackoverflow.com/questions/18012505/python-pandas-dataframe-columns-convert-to-dict-key-and-value) are very elegant, however they will not give correct results if the column used for the keys contains any duplicated value.
For exam... |
python pandas dataframe to dictionary | 18,695,605 | 27 | 2013-09-09T09:49:56Z | 24,370,510 | 16 | 2014-06-23T16:08:36Z | [
"python",
"dictionary",
"pandas"
] | I have a two-column dataframe and intend to convert it to a Python dictionary - the first column will be the key and the second the value. Thank you in advance.
Dataframe:
```
id value
0 0 10.2
1 1 5.7
2 2 7.4
``` | If you want a simple way to preserve duplicates, you could use `groupby`:
```
>>> ptest = pd.DataFrame([['a',1],['a',2],['b',3]], columns=['id', 'value'])
>>> ptest
id value
0 a 1
1 a 2
2 b 3
>>> {k: g["value"].tolist() for k,g in ptest.groupby("id")}
{'a': [1, 2], 'b': [3]}
``` |
Change values on matplotlib imshow() graph axis | 18,696,122 | 25 | 2013-09-09T10:23:20Z | 18,696,354 | 59 | 2013-09-09T10:38:00Z | [
"python",
"numpy",
"matplotlib"
] | Say I have some input data:
```
data = np.random.normal(loc=100,scale=10,size=(500,1,32))
hist = np.ones((32,20)) # initialise hist
for z in range(32):
hist[z],edges = np.histogram(data[:,0,z],bins=np.arange(80,122,2))
```
I can plot it using `imshow()`:
```
plt.imshow(hist,cmap='Reds')
```
getting:
![enter im... | I would try to avoid changing the `xticklabels` if possible, otherwise it can get very confusing if you for example overplot your histogram with additional data.
Defining the range of your grid is probably the best and with `imshow` it can be done by adding the `extent` keyword. This way the axes gets adjusted automat... |
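A hedged sketch of the `extent` keyword with dummy data. `extent=[left, right, bottom, top]` is given in data coordinates, and `aspect='auto'` stops the image from being squeezed to square pixels:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

hist = np.random.rand(32, 20)  # stand-in for the histogram above

fig, ax = plt.subplots()
ax.imshow(hist, cmap='Reds', extent=[80, 120, 32, 0], aspect='auto')
# the x axis now runs 80..120 without touching any tick labels
```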
Not plotting 'zero' in matplotlib or change zero to None [Python] | 18,697,417 | 3 | 2013-09-09T11:39:41Z | 18,697,529 | 7 | 2013-09-09T11:46:38Z | [
"python",
"numpy",
"matplotlib",
"zero"
] | I have the code below and I would like to convert all zeros in the data to `None`s (as I do not want to plot the data here in matplotlib). However, the code is not working and `0.` is still being printed
```
sd_rel_track_sum=np.sum(sd_rel_track, axis=1)
for i in sd_rel_track_sum:
print i
if i==0:
i=None
... | ```
values = [3, 5, 0, 3, 5, 1, 4, 0, 9]
def zero_to_nan(values):
"""Replace every 0 with 'nan' and return a copy."""
return [float('nan') if x==0 else x for x in values]
print(zero_to_nan(values))
```
gives you:
```
[3, 5, nan, 3, 5, 1, 4, nan, 9]
```
Matplotlib won't plot `nan` (not a number) values. |
Not plotting 'zero' in matplotlib or change zero to None [Python] | 18,697,417 | 3 | 2013-09-09T11:39:41Z | 18,700,034 | 12 | 2013-09-09T14:01:24Z | [
"python",
"numpy",
"matplotlib",
"zero"
] | I have the code below and I would like to convert all zeros in the data to `None`s (as I do not want to plot the data here in matplotlib). However, the code is not working and `0.` is still being printed
```
sd_rel_track_sum=np.sum(sd_rel_track, axis=1)
for i in sd_rel_track_sum:
print i
if i==0:
i=None
... | Why not use numpy for this?
```
>>> values = np.array([3, 5, 0, 3, 5, 1, 4, 0, 9], dtype=np.double)
>>> values[ values==0 ] = np.nan
>>> values
array([ 3., 5., nan, 3., 5., 1., 4., nan, 9.])
```
It should be noted that values cannot be an integer type array. |
Python argparse integer condition (>=12) | 18,700,634 | 10 | 2013-09-09T14:31:09Z | 18,700,817 | 12 | 2013-09-09T14:40:32Z | [
"python",
"argparse"
] | I need to request that an argument is >= 12 using `argparse`.
I cannot find a way to obtain this result using `argparse`; it seems there's no way to set rules for a given value, only full sets of accepted values like choices=['rock', 'paper', 'scissors'].
My code is:
```
import sys, argparse
parser = argparse.Arg... | One way is to use a custom type.
```
def bandwidth_type(x):
x = int(x)
if x < 12:
raise argparse.ArgumentTypeError("Minimum bandwidth is 12")
return x
parser.add_argument("-b", "--bandwidth", type=bandwidth_type, help="target bandwidth >= 12")
```
Note: I think `ArgumentTypeError` is a more corre... |
Why does len("".split(" ")) give 1? python | 18,701,216 | 7 | 2013-09-09T15:00:16Z | 18,701,281 | 20 | 2013-09-09T15:03:11Z | [
"python",
"string"
] | What is the pythonic explanation for `len("".split(" ")) == 1` showing True?
Why does `"".split(" ")` yield `['']`?
```
>>> len("".split(" "))
1
>>> "".split(" ")
['']
``` | `str.split(sep)` returns *at least* one element. If *sep* was not found in the text, that one element is the original, unsplit text.
For an empty string, the *sep* delimiter will of course never be found, and is specifically called out in the documentation:
> Splitting an empty string with a specified separator retur... |
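The two behaviours side by side; only the no-argument form returns an empty list for empty input:

```python
# explicit separator: always at least one element, empty strings kept
assert "".split(" ") == ['']
assert "a b ".split(" ") == ['a', 'b', '']

# no separator: runs of whitespace collapse, empty input gives []
assert "".split() == []
assert "a  b ".split() == ['a', 'b']
```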
pandas: DataFrame.mean() very slow. How can I calculate means of columns faster? | 18,701,569 | 3 | 2013-09-09T15:17:38Z | 18,702,365 | 7 | 2013-09-09T16:02:21Z | [
"python",
"performance",
"pandas",
"dataframe"
] | I have a rather large CSV file, it contains 9917530 rows (without the header), and 54 columns. Columns are real or integer, only one contains dates. There are a few NULL values in the file, which are translated to `nan` after I load it into a pandas `DataFrame`, which I do like this:
```
import pandas as pd
data = pd.read_... | Here's a similarly sized frame, but without an object column:
```
In [10]: nrows = 10000000
In [11]: df = pd.concat([DataFrame(randn(int(nrows),34),columns=[ 'f%s' % i for i in range(34) ]),DataFrame(randint(0,10,size=int(nrows*19)).reshape(int(nrows),19),columns=[ 'i%s' % i for i in range(19) ])],axis=1)
In [12]: df.i... |
Parameters to numpy's fromfunction | 18,702,105 | 11 | 2013-09-09T15:47:02Z | 18,709,458 | 10 | 2013-09-10T01:31:14Z | [
"python",
"arrays",
"numpy"
] | I haven't grokked the key concepts in `numpy` yet.
I would like to create a 3-dimensional array and populate each cell with the result of a function call - i.e. the function would be called many times with different indices and return different values.
I could create it with zeros (or empty), and then overwrite every... | I obviously didn't make myself clear. I am getting responses that `fromfunc` actually works as my test code demonstrates, which I already knew because my test code demonstrated it.
The answer I was looking for seems to be in two parts:
---
The `fromfunc` documentation is misleading. It works to populate the entire a... |
Parameters to numpy's fromfunction | 18,702,105 | 11 | 2013-09-09T15:47:02Z | 24,900,335 | 7 | 2014-07-23T01:11:15Z | [
"python",
"arrays",
"numpy"
] | I haven't grokked the key concepts in `numpy` yet.
I would like to create a 3-dimensional array and populate each cell with the result of a function call - i.e. the function would be called many times with different indices and return different values.
I could create it with zeros (or empty), and then overwrite every... | The documentation is *very* misleading in that respect. It's just as you note: instead of performing `f(0,0), f(0,1), f(1,0), f(1,1)`, numpy performs
```
f([[0., 0.], [1., 1.]], [[0., 1.], [0., 1.]])
```
Using ndarrays rather than the promised integer coordinates is quite frustrating when you try and use something li... |
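A sketch of both styles: a function whose arithmetic already broadcasts over the index arrays, and a scalar function wrapped with `np.vectorize` so it is applied element-wise (note that `fromfunction` passes float indices by default):

```python
import numpy as np

# array-aware function: plain arithmetic broadcasts over index arrays
a = np.fromfunction(lambda i, j, k: i * 100 + j * 10 + k, (2, 2, 2))

# scalar-style function: wrap it so it is applied per element
g = np.vectorize(lambda i, j: 1.0 if i == j else 0.0)
eye = np.fromfunction(g, (3, 3))
```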
Return results from multiple models with Django REST Framework | 18,702,300 | 29 | 2013-09-09T15:58:46Z | 18,703,837 | 18 | 2013-09-09T17:35:11Z | [
"python",
"django",
"django-models",
"django-views",
"django-rest-framework"
] | I have three models: articles, authors and tweets. I'm ultimately needing to use Django REST Framework to construct a feed that aggregates all the objects using the Article and Tweet models into one reverse chronological feed.
Any idea how I'd do that? I get the feeling I need to create a new serializer, but I'm re... | It looks pretty close to me. I haven't used ViewSets in DRF personally, but I think if you change your code to this you should get somewhere (sorry - not tested either of these):
```
class TimelineViewSet(viewsets.ModelViewSet):
"""
API endpoint that lists all tweet/article objects in rev-chrono.
"""
d... |
AttributeError: 'str' object has no attribute 'write' | 18,703,525 | 7 | 2013-09-09T17:16:14Z | 18,703,543 | 12 | 2013-09-09T17:17:26Z | [
"python"
] | I'm working on Python and have defined a variable called "\_headers" as shown below
```
_headers = ('id',
'recipient_address_1',
'recipient_address_2',
'recipient_address_3',
'recipient_address_4',
'recipient_address_5',
'r... | You want `f.write`, not `outfile.write`...
`outfile` is the name of the file as a string. `f` is the file object.
As noted in the comments, `file.write` expects a string, not a sequence. If you wanted to write data from a sequence, you could use `file.writelines`. e.g. `f.writelines(self._headers)`. But beware, this ... |
Correcting matplotlib colorbar ticks | 18,704,353 | 13 | 2013-09-09T18:11:04Z | 18,707,445 | 15 | 2013-09-09T21:38:42Z | [
"python",
"matplotlib",
"color-mapping"
] | I've placed a color bar alongside a choropleth map. Because the data being plotted are discrete rather than continuous values, I've used a LinearSegmentedColormap (using [the recipe from the scipy cookbook](http://wiki.scipy.org/Cookbook/Matplotlib/ColormapTransformations)), which I've initialised with my max counted v... | You are suffering from an off-by-one error. You have 10 ticklabels spread among 11 colors. You might be able to correct the error by using `np.linspace` instead of `np.arange`. Using `np.linspace` the third argument is the number of values desired. This reduces the amount of mental gymnastics needed to avoid the off-by... |
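The difference in one line: `arange`'s stop is exclusive, while `linspace`'s third argument is a count that includes both endpoints, which is what lines 11 tick positions up with 11 colors:

```python
import numpy as np

ticks_arange = np.arange(0, 1, 0.1)      # 10 values, 1.0 excluded
ticks_linspace = np.linspace(0, 1, 11)   # 11 values, 0.0 and 1.0 included
```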
Error running PhantomJS with Selenium RemoteWebDriver | 18,704,626 | 8 | 2013-09-09T18:31:27Z | 18,808,329 | 11 | 2013-09-15T02:21:52Z | [
"python",
"selenium",
"selenium-webdriver",
"phantomjs",
"selenium-grid"
] | I start a selenium grid hub with it's default startup command,
```
java -jar selenium-server-standalone-2.33.0.jar -role hub
```
And I start up PhantomJS in it's webdriver mode, on the same machine, like,
```
phantomjs --webdriver=8080 --webdriver-selenium-grid-hub=http://127.0.0.1:4444
```
When PhantomJS starts up... | Turns out that this is a bug feature in the 1.9.2 version of phantomjs. They updated to the latest ghostdriver (v1.4) library and I imagine that is where it was introduced. Ghostdriver is adhering to the new Selenium protocol using posts for creating sessions and not utilizing a location header. Selenium now grabs the ... |
Ruby equivalent for python try | 18,705,373 | 4 | 2013-09-09T19:19:34Z | 18,705,417 | 10 | 2013-09-09T19:22:52Z | [
"python",
"ruby",
"try-catch"
] | I am a major noob at coding. I started to learn Python, but wasn't a fan and switched to Ruby, which I love. Now I'm trying to convert some Python code into Ruby, and I am stuck at Python's "try". Is there a Ruby equivalent to Python's "try"? | Use this as an example:
```
begin # "try" block
puts 'I am before the raise.'
raise 'An error has occured.' # optionally: `raise Exception, "message"`
puts 'I am after the raise.' # won't be executed
rescue # optionally: `rescue Exception => ex`
puts 'I am rescued.'
ensure # will always get execute... |
Django: Access given field's choices tuple | 18,706,098 | 16 | 2013-09-09T20:03:36Z | 18,706,191 | 22 | 2013-09-09T20:08:42Z | [
"python",
"django",
"django-models",
"views"
] | I would like to get the named values of a choices field for a choice that is not currently selected. Is this possible?
For instance: models.py
```
FILE_STATUS_CHOICES = (
('P', 'Pending'),
('A', 'Approved'),
('R', 'Rejected'),
)
class File(models.Model):
status = models.CharField(max_length=1, defaul... | It is perfectly fine to import your choice mapping `FILE_STATUS_CHOICES` from models and use it to get `Pending` by `P`:
```
from my_app.models import FILE_STATUS_CHOICES
print dict(FILE_STATUS_CHOICES).get('P')
```
`get_FIELD_display()` method on your model is doing essentially the same thing:
```
def _get_FIELD... |
Received error "Not Authorized to access this resource/api" when trying to use Google Directory API and Service Account Authentication | 18,706,339 | 4 | 2013-09-09T20:17:40Z | 18,706,858 | 16 | 2013-09-09T20:54:28Z | [
"python",
"google-apps",
"google-admin-sdk"
] | I'm really struggling with trying to use Service Account authentication to use the Google Directory API (Admin SDK).
Using client based three legged OAuth this works (tested here - <https://developers.google.com/admin-sdk/directory/v1/reference/members/insert>) but there's a problem with the permission delegation to t... | Even though you're using a Service Account you still need to act on behalf of a Google Apps user in the instance that has the proper admin permissions. Try doing:
```
credentials = SignedJwtAssertionCredentials(
'<KEY>@developer.gserviceaccount.com',
'<KEY DATA>',
scope='https://www.googleapis.com/auth/apps.grou... |
bulk insert list values with SQLAlchemy Core | 18,708,050 | 6 | 2013-09-09T22:27:30Z | 18,708,086 | 14 | 2013-09-09T22:30:47Z | [
"python",
"mysql",
"database",
"sqlalchemy"
] | I'd like to bulk insert a list of strings into a MySQL Database with SQLAlchemy Core.
```
engine = create_engine("mysql+mysqlconnector://....
meta = MetaData()
meta.bind = engine
```
My table layout looks like this - together with two currently unused columns (irrelevant1/2):
```
MyTabe = Table('MyTable', meta,
Colu... | Here's one way to do it:
```
MyTable.__table__.insert().execute([{'color': 'blue'},
{'color': 'red'},
{'color': 'green'}])
```
Or, using `connection.execute()`:
```
conn.execute(MyTable.insert(), [{'color': 'blue'},
... |
Tkinter: How to set ttk.Radiobutton activated and get its value? | 18,708,172 | 3 | 2013-09-09T22:39:50Z | 18,979,105 | 8 | 2013-09-24T10:28:32Z | [
"python",
"python-2.7",
"tkinter",
"ttk"
] | 1) I need to set one of my three ttk.Radiobuttons activated by default
when I start my gui app.
How do I do it?
2) I also need to check if one of my ttk.Radiobuttons was
activated/clicked by the user.
How do I do it?
```
rb1 = ttk.Radiobutton(self.frame, text='5', variable=self.my_var, value=5)
rb2 = ttk.Radiobut... | use `self.my_var.set(1)` to set the radiobutton with `text='5'` as the default RadioButton.
To get the selected one you have to call a function
```
rb1 = ttk.Radiobutton(self.frame, text='5', variable=self.my_var, value=5,command=self.selected)
rb2 = ttk.Radiobutton(self.frame, text='10', variable=self.my_var, value=... |
Python Floating Point Formatting | 18,709,000 | 3 | 2013-09-10T00:23:31Z | 18,709,044 | 7 | 2013-09-10T00:29:49Z | [
"python",
"floating-point",
"format"
] | I've seen a few questions about this already, but none that I read helped me actually understand why what I am trying to do is failing.
So I have a bunch of floating point values, and they have different precisions. Some are 0.1 others are 1.759374, etc. And I want to format them so they are ALL in the form of "+0.000... | ```
>>> '{:.7e}'.format(0.00000000000000365913456789)
'3.6591346e-15'
``` |
Flask session not persisting | 18,709,213 | 3 | 2013-09-10T00:53:40Z | 18,709,356 | 16 | 2013-09-10T01:15:44Z | [
"python",
"session",
"flask"
] | I'm running Python 2.7 and Apache + mod\_wsgi on CentOS 6.3.
Things work fine when I am on localhost. However, when I run the code on a vm in Azure, I do not see the session information being persisted across pages.
Basically in my views, I have something like:
```
@frontend.route('/')
def index():
session['foo']... | Don't use `app.secret_key = os.urandom(24)`!
You're supposed to enter a static value here, not read from `os.urandom` each time. You've probably misunderstood the example in the [docs](http://flask.pocoo.org/docs/quickstart/#sessions), it shows you how you can read random data from `os.urandom`, but it also clearly st... |
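One hedged way to follow that advice: generate the key once, offline, and paste the printed literal into the code as a constant (the variable names here are illustrative, not from the answer):

```python
import os

# run this ONCE on your machine, not at application startup
generated = os.urandom(24)
print(repr(generated))

# then, in the Flask app, hard-code the printed value:
#   app.secret_key = b'\x8a\x1f...'   # hypothetical pasted literal
```

Because the key is now identical across restarts and worker processes, session cookies stay decryptable.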
how to split a unicode string into list | 18,711,384 | 4 | 2013-09-10T05:37:43Z | 18,711,432 | 11 | 2013-09-10T05:42:11Z | [
"python",
"string",
"unicode",
"utf-8",
"unicode-string"
] | I have the following code:
```
stru = "۰۱۲۳۴۵۶۷۸۹"
strlist = stru.decode("utf-8").split()
print strlist[0]
```
my output is:
```
۰۱۲۳۴۵۶۷۸۹
```
But when I use:
```
print strlist[1]
```
I get the following `traceback`:
```
IndexError: list index out of range
```
**My question** is, how can I... | 1. You don't need to.
```
>>> print u"۰۱۲۳۴۵۶۷۸۹"[1]
۱
```
2. If you still *want* to...
```
>>> list(u"۰۱۲۳۴۵۶۷۸۹")
[u'\u06f0', u'\u06f1', u'\u06f2', u'\u06f3', u'\u06f4', u'\u06f5', u'\u06f6', u'\u06f7', u'\u06f8', u'\u06f9']
``` |
virtualenv won't activate on windows | 18,713,086 | 3 | 2013-09-10T07:35:30Z | 18,713,789 | 8 | 2013-09-10T08:15:30Z | [
"python",
"virtualenv"
] | Essentially I cannot seem to activate my virtualenv environment which I create.
I'm doing this inside Windows PowerShell using
```
scripts\activate
```
but get an error message
> "cannot be loaded because the execution of scripts is disabled on this
> system".
Could this be because I don't carry admin p... | Moving comment to answers section :)
According to [Microsoft Tech Support](http://social.technet.microsoft.com/Forums/windowsserver/en-US/964636ad-347e-4b23-8f7a-f36a558115dd/error-file-cannot-be-loaded-because-the-execution-of-scripts-is-disabled-on-this-system) it might be a problem with Execution Policy Settings. T... |
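The usual fix sketched as configuration commands (run PowerShell as Administrator; `RemoteSigned` is a common choice, and `-Scope CurrentUser` avoids needing admin rights at all):

```shell
# allow locally created scripts such as activate.ps1 to run
Set-ExecutionPolicy RemoteSigned

# or restrict the change to the current user only (no admin needed)
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
```

After relaxing the policy, `scripts\activate` should load without the "execution of scripts is disabled" error.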
Element-wise Addition of 2 Lists in Python? | 18,713,321 | 84 | 2013-09-10T07:49:11Z | 18,713,344 | 166 | 2013-09-10T07:50:36Z | [
"python",
"list",
"elementwise-operations"
] | I have now:
```
list1=[1, 2, 3]
list2=[4, 5, 6]
```
I wish to have:
```
[1, 2, 3]
+ + +
[4, 5, 6]
||
[5, 7, 9]
```
Simply an element-wise addition of two lists.
I can surely iterate the two lists, but I don't want do that.
What is **the most Pythonic way** of doing so? | Use [`map`](https://docs.python.org/2/library/functions.html#map) with [`operator.add`](https://docs.python.org/2/library/operator.html#operator.add):
```
>>> from operator import add
>>> map(add, list1, list2)
[5, 7, 9]
```
or [`zip`](https://docs.python.org/2/library/functions.html#zip) with a list comprehension:
... |
Element-wise Addition of 2 Lists in Python? | 18,713,321 | 84 | 2013-09-10T07:49:11Z | 18,713,367 | 37 | 2013-09-10T07:51:57Z | [
"python",
"list",
"elementwise-operations"
] | I have now:
```
list1=[1, 2, 3]
list2=[4, 5, 6]
```
I wish to have:
```
[1, 2, 3]
+ + +
[4, 5, 6]
||
[5, 7, 9]
```
Simply an element-wise addition of two lists.
I can surely iterate the two lists, but I don't want do that.
What is **the most Pythonic way** of doing so? | ```
[a + b for a, b in zip(list1, list2)]
``` |
Element-wise Addition of 2 Lists in Python? | 18,713,321 | 84 | 2013-09-10T07:49:11Z | 18,713,494 | 53 | 2013-09-10T07:59:11Z | [
"python",
"list",
"elementwise-operations"
] | I have now:
```
list1=[1, 2, 3]
list2=[4, 5, 6]
```
I wish to have:
```
[1, 2, 3]
+ + +
[4, 5, 6]
||
[5, 7, 9]
```
Simply an element-wise addition of two lists.
I can surely iterate the two lists, but I don't want do that.
What is **the most Pythonic way** of doing so? | The others gave examples how to do this in pure python. If you want to do this with arrays with 100.000 elements, you should use numpy:
```
In [1]: import numpy as np
In [2]: vector1 = np.array([1, 2, 3])
In [3]: vector2 = np.array([4, 5, 6])
```
Doing the element-wise addition is now as trivial as
```
In [4]: sum_v... |
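The truncated cell presumably just adds the two vectors; with NumPy arrays `+` is already element-wise (the variable name below is illustrative, not the truncated one above):

```python
import numpy as np

vector1 = np.array([1, 2, 3])
vector2 = np.array([4, 5, 6])

total = vector1 + vector2   # element-wise, no explicit loop
```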
Subsample pandas dataframe | 18,713,929 | 10 | 2013-09-10T08:22:06Z | 18,714,509 | 14 | 2013-09-10T08:54:05Z | [
"python",
"numpy",
"pandas",
"subsampling"
] | I have a DataFrame loaded from a tsv file. I wanted to generate some exploratory plots. The problem is that the data set is large (~1 million rows), so there are too many points on the plot to see a trend. Plus, it is taking a while to plot.
I wanted to sub-sample 10000 randomly distributed rows. Also, this should ... | You can select random elements from your index with [`np.random.choice`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html). E.g. to select 5 random rows:
```
df = pd.DataFrame(np.random.rand(10))
df.loc[np.random.choice(df.index, 5, replace=False)]
```
This function is new in 1.7. If you wan... |
Subsample pandas dataframe | 18,713,929 | 10 | 2013-09-10T08:22:06Z | 18,714,869 | 14 | 2013-09-10T09:11:13Z | [
"python",
"numpy",
"pandas",
"subsampling"
] | I have a DataFrame loaded from a tsv file. I wanted to generate some exploratory plots. The problem is that the data set is large (~1 million rows), so there are too many points on the plot to see a trend. Plus, it is taking a while to plot.
I wanted to sub-sample 10000 randomly distributed rows. Also, this should ... | Unfortunately `np.random.choice` appears to be quite slow for small samples (less than 10% of all rows), you may be better off using plain ol' sample:
```
from random import sample
df.loc[sample(df.index, 1000)]
```
For large DataFrame (a million rows), we see small samples:
```
In [11]: %timeit df.loc[sample(df.ind... |
How to Calculate Centroid in python | 18,714,587 | 4 | 2013-09-10T08:57:36Z | 18,721,175 | 8 | 2013-09-10T14:08:12Z | [
"python",
"math",
"numpy",
"shape"
] | I'm a beginner at Python coding. I'm working with structural coordinates. I have a PDB structure which has xyz coordinate information (last three columns)
```
ATOM 1 N SER A 1 27.130 7.770 34.390
ATOM 2 1H SER A 1 27.990 7.760 34.930
ATOM 3 2H SER A 1 27.160 6.960 ... | First of all, an easier way to read your file is with numpy's `genfromtxt` function. You don't need to import string, and you don't need to loop through all the lines and append text or count the characters.
```
from __future__ import division
import numpy as nx
data = nx.genfromtxt('file.pdb')
```
Then, the last th... |
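Once the coordinates are in an (n, 3) array, the unweighted centroid is just the column-wise mean. A sketch using the two complete rows from the question's PDB snippet (in practice this would be the last three columns of the `genfromtxt` result):

```python
import numpy as np

# the two fully quoted (x, y, z) rows from the question
coords = np.array([[27.130, 7.770, 34.390],
                   [27.990, 7.760, 34.930]])

centroid = coords.mean(axis=0)   # column-wise mean -> one (x, y, z) triple
```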
Find common substring between two strings | 18,715,688 | 13 | 2013-09-10T09:50:24Z | 18,715,879 | 16 | 2013-09-10T09:59:58Z | [
"python",
"string",
"python-2.7"
] | I'd like to compare 2 strings and keep the matched part, splitting off where the comparison fails.
So if I have 2 strings -
```
string1 = apples
string2 = appleses
answer = apples
```
Another example, as the string could have more than one word.
```
string1 = apple pie available
string2 = apple pies
answer = apple pie... | ```
def common_start(sa, sb):
""" returns the longest common substring from the beginning of sa and sb """
def _iter():
for a, b in zip(sa, sb):
if a == b:
yield a
else:
return
return ''.join(_iter())
```
```
>>> common_start("apple pie avail... |
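For the common-prefix reading of the question, the standard library already ships this: `os.path.commonprefix` compares character by character and is not limited to filesystem paths:

```python
import os.path

# works on arbitrary strings, exactly as in the question's examples
assert os.path.commonprefix(['apples', 'appleses']) == 'apples'
assert os.path.commonprefix(['apple pie available', 'apple pies']) == 'apple pie'
```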
Find common substring between two strings | 18,715,688 | 13 | 2013-09-10T09:50:24Z | 18,717,762 | 9 | 2013-09-10T11:28:39Z | [
"python",
"string",
"python-2.7"
] | I'd like to compare 2 strings and keep the matched part, splitting off where the comparison fails.
So if I have 2 strings -
```
string1 = apples
string2 = appleses
answer = apples
```
Another example, as the string could have more than one word.
```
string1 = apple pie available
string2 = apple pies
answer = apple pie... | It's called the Longest Common Substring problem. Here I present a simple, easy-to-understand but inefficient solution. It will take a long time to produce correct output for large strings, as the complexity of this algorithm is O(N^2).
```
def longestSubstringFinder(string1, string2):
answer = ""
len1, len2 = len(... |
select closest values from two different arrays | 18,716,655 | 4 | 2013-09-10T10:36:29Z | 18,716,883 | 7 | 2013-09-10T10:47:50Z | [
"python",
"numpy"
] | Suppose I have a numpy array
```
A = [[1 2 3]
[2 3 3]
[1 2 3]]
```
and another array
```
B = [[3 2 3]
[1 2 3]
[4 6 3]]
```
and an array of true values:
```
C = [[1 4 3]
[8 7 3]
[4 10 3]]
```
Now I want to create an array D, the elements of which are derived from either A or B, the con... | ```
>>> K = abs(A - C) < abs(B - C) # create array of bool
[[True, False, False],
[True, True, False],
[False, False, False]]
>>> D = where(K, A, B) # get elements of A and B respectively
``` |
Pip on Mac OS X ImportError: cannot import name walk_packages | 18,717,043 | 5 | 2013-09-10T10:55:37Z | 18,722,729 | 8 | 2013-09-10T15:15:41Z | [
"python",
"osx",
"python-2.7",
"pip"
] | I have found several pip problems on Stack Overflow, but unfortunately it seems like I am experiencing one that I can't find covered.
In particular, I am getting the following error message whenever I try to use pip:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/bin/pip", line 9, ... | Somehow solved by the command `sudo easy_install -U pip` |
"yield" in Python | 18,717,834 | 3 | 2013-09-10T11:32:28Z | 18,717,913 | 13 | 2013-09-10T11:35:54Z | [
"python",
"iteration",
"generator",
"yield"
] | I have a function called **x** that produces a generator like this:
```
a = 5
def x():
    global a
    if a == 3:
        raise Exception("Stop")
    a = a - 1
    yield a
```
Then in the python shell I call that function like this:
```
>>> print x().next()
>>> 4
>>> print x().next()
>>> 3
>>> print x().next()
>>> ... | In your first example, you were creating a **new** generator each time:
```
x().next()
```
This starts the generator *from the top*, so the first statement. When `a == 3`, the exception is raised, otherwise the generator just yields and *pauses*.
When you assigned your generator later on, the global `a` started at `... |
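The distinction can be seen side by side: each `x()` call below builds a fresh generator that starts from the top, while reusing one generator object exhausts it after its single yield (a Python 3 sketch, so `next(gen)` replaces the `gen.next()` used in the question):

```python
a = 5

def x():
    global a
    if a == 3:
        raise Exception("Stop")
    a = a - 1
    yield a

# A new generator each call: each one runs from the top and decrements `a`.
first = next(x())    # a goes 5 -> 4, yields 4
second = next(x())   # a goes 4 -> 3, yields 3

# Reusing ONE generator: it yields once, then is exhausted.
a = 5
g = x()
value = next(g)      # a goes 5 -> 4, yields 4
try:
    next(g)          # resumes after the yield and falls off the end
except StopIteration:
    exhausted = True
```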
prevent plot from showing in ipython notebook | 18,717,877 | 11 | 2013-09-10T11:34:07Z | 18,718,162 | 13 | 2013-09-10T11:47:43Z | [
"python",
"matplotlib",
"ipython",
"ipython-notebook",
"figures"
] | How can I prevent a specific plot from being shown in an IPython notebook? I have several plots in a notebook, but I want a subset of them to be saved to a file and not shown in the notebook, as displaying them slows it down considerably.
A minimal working example for an iPython notebook is:
```
%matplotlib inline
from numpy.random import randn... | Perhaps just clear the axis, for example:
```
fig = plt.figure()
plt.plot(range(10))
fig.savefig("save_file_name.pdf")
plt.close()
```
will not plot the output in `inline` mode. I can't work out if it is really clearing the data, though. |
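A self-contained sketch of the save-without-showing idea (assuming matplotlib is installed): forcing the non-interactive Agg backend renders entirely off-screen, so the figure is written out but never displayed:

```python
import io

import matplotlib
matplotlib.use("Agg")            # off-screen backend; set before importing pyplot
import matplotlib.pyplot as plt

fig = plt.figure()
plt.plot(range(10))

buf = io.BytesIO()
fig.savefig(buf, format="png")   # render to a buffer (or a filename) instead of the screen
plt.close(fig)                   # free the figure and suppress any later display
```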
python_x64 + C library compiled with mingw_x64 on Windows7 Py_InitModule4 | 18,717,996 | 7 | 2013-09-10T11:39:36Z | 19,287,431 | 7 | 2013-10-10T05:01:45Z | [
"python",
"c",
"gcc",
"windows-7-x64",
"mingw-w64"
] | I'm trying to compile C library for python on Windows7 (64-bit) using mingw-x64.
It all worked like a charm with 32-bit versions.
I used to compile my library with
`gcc -shared -IC:\Python27\include -LC:\Python27\libs myModule.c -lpython27 -o myModule.pyd`
and it worked with 32-bit versions. The same procedure is worki... | Copying the answer from the comments in order to remove this question from the "Unanswered" filter:
> I hate to answer my own questions, but... adding -DMS\_WIN64 is actually enough. Remaining problems were due to gcc parameters ( for some reason -lpython27 should go right before -o myModule.pyd), which were not in co... |
Using struct pack in python | 18,718,709 | 18 | 2013-09-10T12:14:22Z | 18,718,768 | 30 | 2013-09-10T12:17:44Z | [
"python"
] | I have a number in integer form which I need to convert into 4 bytes and store in a list. I am trying to use the struct module in Python but am unable to get it to work:
```
struct.pack("i",34);
```
This returns 0 when I am expecting the binary equivalent to be printed.
Expected Output:
```
[0x00 0x00 0x00 0x22]... | The output is returned as a *byte string*, and Python will print such strings as ASCII characters whenever possible:
```
>>> import struct
>>> struct.pack("i",34)
'"\x00\x00\x00'
```
Note the quote at the start, that's ASCII codepoint 34:
```
>>> ord('"')
34
>>> hex(ord('"'))
'0x22'
```
If you expected the ordering... |
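Continuing the ordering point: the `>` prefix forces big-endian packing, which produces exactly the byte list from the question; in Python 3, iterating over the packed `bytes` yields plain integers:

```python
import struct

packed = struct.pack(">i", 34)        # big-endian signed 32-bit integer
print(list(packed))                   # [0, 0, 0, 34]
print([hex(b) for b in packed])       # ['0x0', '0x0', '0x0', '0x22']
```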
Proxy Selenium Python Firefox | 18,719,980 | 4 | 2013-09-10T13:15:54Z | 21,470,469 | 8 | 2014-01-31T00:36:38Z | [
"python",
"firefox",
"selenium",
"proxy"
] | How can I redirect the traffic of Firefox launched by Selenium in Python to a proxy? I have used the solutions suggested on the web, but they don't work!
I have tried:
```
profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)
profile.set_preference("network.proxy.http", "54.213.66.208... | You need to import the following:
```
from selenium.webdriver.common.proxy import *
```
Then setup the proxies:
```
myProxy = "xx.xx.xx.xx:xxxx"
proxy = Proxy({
    'proxyType': ProxyType.MANUAL,
    'httpProxy': myProxy,
    'ftpProxy': myProxy,
    'sslProxy': myProxy,
    'noProxy': '' # set this value as desire... |
OpenCV2 Python createBackgroundSubtractor module not found | 18,721,552 | 13 | 2013-09-10T14:25:17Z | 20,847,540 | 21 | 2013-12-30T21:03:59Z | [
"python",
"opencv",
"background-subtraction"
] | I am trying to use cv2.createBackgroundSubtractorMOG2 () method in Python. I have tried both on my Mac and on my Raspberry Pi, and get the same error when running the following line of code:
```
fgbg = cv2.createBackgroundSubtractorMOG2()
```
The code I am using is taken from <https://github.com/abidrahmank/OpenCV2-P... | Replace the create.... with
`fgbg = cv2.BackgroundSubtractorMOG()` |
OpenCV2 Python createBackgroundSubtractor module not found | 18,721,552 | 13 | 2013-09-10T14:25:17Z | 24,248,472 | 8 | 2014-06-16T16:44:53Z | [
"python",
"opencv",
"background-subtraction"
] | I am trying to use cv2.createBackgroundSubtractorMOG2 () method in Python. I have tried both on my Mac and on my Raspberry Pi, and get the same error when running the following line of code:
```
fgbg = cv2.createBackgroundSubtractorMOG2()
```
The code I am using is taken from <https://github.com/abidrahmank/OpenCV2-P... | `cv2.createBackgroundSubtractorMOG2()` works in OpenCV 3.0; for 2.4.x use `cv2.BackgroundSubtractorMOG()` |
How to set UTC offset for datetime? | 18,722,196 | 5 | 2013-09-10T14:52:51Z | 18,722,887 | 9 | 2013-09-10T15:22:39Z | [
"python",
"datetime",
"timezone"
] | My Python-based web server needs to perform some date manipulation using the client's timezone, represented by its UTC offset. How do I construct a datetime object with the specified UTC offset as timezone? | Using [`dateutil`](http://labix.org/python-dateutil):
```
>>> import datetime
>>> import dateutil.tz
>>> datetime.datetime(2013, 9, 11, 0, 17, tzinfo=dateutil.tz.tzoffset(None, 9*60*60))
datetime.datetime(2013, 9, 11, 0, 17, tzinfo=tzoffset(None, 32400))
>>> datetime.datetime(2013, 9, 11, 0, 17, tzinfo=dateutil.tz.tzo... |
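Since Python 3.2 the standard library can do this without dateutil: `datetime.timezone` models exactly this kind of fixed UTC offset:

```python
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=9))             # fixed UTC+9 offset
dt = datetime(2013, 9, 11, 0, 17, tzinfo=tz)

print(dt.isoformat())    # 2013-09-11T00:17:00+09:00
print(dt.utcoffset())    # 9:00:00
```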
python : import some_module through other_module | 18,722,531 | 3 | 2013-09-10T15:07:32Z | 18,722,561 | 10 | 2013-09-10T15:09:01Z | [
"python",
"python-import"
] | Why do people do
```
import os
import sys
print sys.version
```
If they can do
```
import os
print os.sys.version
```
Why double-import some basic modules (random, sys ... lots of those), if you already know that the same modules are imported by other modules you are already using?
Are such calls somehow deprecated to use ... | Because you should not rely on the implementation details of another module. If the other module *stops* using `sys`, then your first module is now broken.
Importing merely creates a *reference* in the current namespace. You are not loading the module into memory twice when using the `import`, so importing a module in... |
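The single-load behaviour is easy to demonstrate: in CPython, `os` happens to import `sys` internally, and every name is bound to the very same cached module object, which is precisely why relying on `os.sys` is an implementation detail:

```python
import os
import sys

# The module is loaded once and cached in sys.modules; every import
# afterwards just binds another name to the same object.
print(os.sys is sys)               # True
print(sys is sys.modules['sys'])   # True
```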
Trying to generate a series of unique random numbers | 18,722,753 | 2 | 2013-09-10T15:16:15Z | 18,722,801 | 9 | 2013-09-10T15:18:46Z | [
"python",
"random"
] | Sorry if this is obvious, I'm pretty new.
Here is the code.
It should never print the same two things as I understand it, but it sometimes does. The point is that p1 being 1 should prevent p2 from being 1, and if p2 is 1, p2 should run again with the same p1 value, but should generate a new random number. It might be 1... | Instead of picking random integers, shuffle a *list* and pick the first two items:
```
import random
choices = ['Candy', 'Steak', 'Vegetables']
random.shuffle(choices)
item1, item2 = choices[:2]
```
Because we shuffled a list of *possible* choices first, then picked the first two, you can guarantee that `item1` and... |
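`random.sample` does the shuffle-and-slice in one step, returning distinct items while leaving the original list untouched:

```python
import random

choices = ['Candy', 'Steak', 'Vegetables']
item1, item2 = random.sample(choices, 2)   # two distinct picks; `choices` is unchanged
```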
python: rstrip one exact string, respecting order | 18,723,580 | 4 | 2013-09-10T15:53:05Z | 18,723,624 | 9 | 2013-09-10T15:54:58Z | [
"python",
"string",
"strip"
] | Is it possible to use the python command `rstrip` so that it does only remove one exact string and does not take all letters separately?
I was confused when this happened:
```
>>>"Boat.txt".rstrip(".txt")
>>>'Boa'
```
What I expected was:
```
>>>"Boat.txt".rstrip(".txt")
>>>'Boat'
```
Can I somehow use rstrip and ... | You're using wrong method. Use [`str.replace`](http://docs.python.org/2/library/stdtypes#std.replace) instead:
```
>>> "Boat.txt".replace(".txt", "")
'Boat'
```
**NOTE**: `str.replace` will replace anywhere in the string.
```
>>> "Boat.txt.txt".replace(".txt", "")
'Boat'
```
To remove the last trailing `.txt` only,... |
python: rstrip one exact string, respecting order | 18,723,580 | 4 | 2013-09-10T15:53:05Z | 18,723,694 | 8 | 2013-09-10T15:58:29Z | [
"python",
"string",
"strip"
] | Is it possible to use the python command `rstrip` so that it does only remove one exact string and does not take all letters separately?
I was confused when this happened:
```
>>>"Boat.txt".rstrip(".txt")
>>>'Boa'
```
What I expected was:
```
>>>"Boat.txt".rstrip(".txt")
>>>'Boat'
```
Can I somehow use rstrip and ... | Define a helper function:
```
def strip_suffix(s, suf):
    if s.endswith(suf):
        return s[:-len(suf)]
    return s

s = strip_suffix(s, '.txt')
```
or use regex:
```
import re
suffix = ".txt"
s = re.sub(re.escape(suffix) + '$', '', s)
``` |
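Completing the regex idea: anchoring the escaped suffix with `$` removes only a final occurrence, unlike `str.replace` (Python 3.9+ also offers `str.removesuffix` for the same job):

```python
import re

def strip_trailing(s, suffix):
    # `$` anchors the pattern to the end of the string, so only a
    # trailing occurrence is removed; re.escape protects the literal dot
    return re.sub(re.escape(suffix) + '$', '', s)

print(strip_trailing("Boat.txt", ".txt"))      # Boat
print(strip_trailing("Boat.txt.txt", ".txt"))  # Boat.txt
```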
client to server, socket in python many to one relationship | 18,724,126 | 3 | 2013-09-10T16:19:50Z | 18,724,673 | 7 | 2013-09-10T16:49:34Z | [
"python",
"sockets",
"multiprocessing",
"many-to-one"
] | Hi, I was able to connect my client to my server, but it has only a one-to-one relationship (1 client to server). My question is: what should I do so that I can connect many clients to my server? Does anyone have an idea about my situation? Any help will be appreciated, thanks in advance... Below is my code.
server.py
```... | When your server's socket receives a connect attempt from a client, s.accept() returns a new socket object and (ip, port) information. What you need to do is save this new socket object in a list and poll this list.
```
while True:
    c, addr = s.accept()
    clients.append(c)
    pressed = 0
```
Or better yet, use a d... |
Python date string formatting | 18,724,607 | 4 | 2013-09-10T16:45:47Z | 18,724,633 | 11 | 2013-09-10T16:47:20Z | [
"python",
"string",
"datetime"
] | I want to remove the padded zeroes from a string-formatted Python date:
```
formatted_date = my_date.strftime("%m/%d/%Y") # outputs something like: 01/01/2013
date_out = formatted_date.replace(r'/0', r'/').replace(r'^0', r'')
```
The second replace doesn't work; I get 01/1/2013. How do I match the zero only if it's n... | `.replace()` does *not* take regular expressions. You are trying to replace the literal text `^0`.
Use `str.format()` to create a date format without zero-padding instead:
```
'{0.month}/{0.day}/{0.year}'.format(my_date)
```
and avoid having to replace the zeros.
Demo:
```
>>> import datetime
>>> today = datetime.... |
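The format-spec route, checked against the question's example date (the strftime flags `%-m` / `%#m` that strip zero-padding do exist, but they are platform-specific, which is why this approach is more portable):

```python
import datetime

my_date = datetime.date(2013, 1, 1)
formatted = '{0.month}/{0.day}/{0.year}'.format(my_date)
print(formatted)   # 1/1/2013
```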
How to save big array so that it will take less memory in python? | 18,725,541 | 7 | 2013-09-10T17:41:50Z | 18,726,185 | 7 | 2013-09-10T18:18:45Z | [
"python",
"numpy"
] | I am new to Python. I have a big array, `a`, with dimensions such as `(43200, 4000)`, and I need to save this, as I need it for future processing. When I try to save it with `np.savetxt`, the txt file is too large and my program runs into a memory error, as I need to process 5 files of the same size. Is there any way to save... | Saving your data to a text file is hugely inefficient. Numpy has built-in saving commands [save](http://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html), and [savez/savez\_compressed](http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez) which would be much better suited to storin... |
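A minimal round-trip sketch with `np.save` / `np.load` (a binary `.npy` file preserves dtype and shape, and is far smaller and faster than `savetxt`; `np.savez_compressed` helps further when the data compresses well):

```python
import os
import tempfile

import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)   # small stand-in for the big array

path = os.path.join(tempfile.mkdtemp(), "a.npy")
np.save(path, a)             # writes a binary .npy file
b = np.load(path)            # round-trips dtype and shape exactly

print(np.array_equal(a, b))  # True
```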