| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
find_package() errors during installing package via pip | 29,477,456 | 7 | 2015-04-06T18:42:35Z | 29,477,720 | 10 | 2015-04-06T18:58:39Z | [
"python",
"pip"
] | I'm trying to install **django-dbsettings** with pip, but it causes the following error:
```
Downloading django-dbsettings-0.7.4.tar.gz
Running setup.py egg_info for package django-dbsettings
Traceback (most recent call last):
  File "<string>", line 16, in <module>
  File "/path/virtualenv/build/django-dbsettings/setup.py", line 23, in <module>
    packages=find_packages(include=['dbsettings']),
TypeError: find_packages() got an unexpected keyword argument 'include'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 16, in <module>
  File "/path/virtualenv/build/django-dbsettings/setup.py", line 23, in <module>
    packages=find_packages(include=['dbsettings']),
TypeError: find_packages() got an unexpected keyword argument 'include'
```
I'm also using pip 1.0 and Python 2.7.
How can I fix it? | There is no `include` keyword in `find_packages()` in older versions of setuptools; you need to upgrade:
```
pip install -U setuptools
```
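To confirm whether the local setuptools is new enough, one can introspect `find_packages` directly. This is a sketch (the answer doesn't name the exact setuptools release that introduced `include=`, so it checks for the keyword itself rather than a version number):

```python
# Sketch: check whether this environment's setuptools supports the
# include= keyword that the failing setup.py relies on.
import inspect
from setuptools import find_packages

params = inspect.signature(find_packages).parameters
print('include' in params)  # True on any setuptools recent enough for the install
```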
I would also update pip. |
There is no South database module 'south.db.postgresql_psycopg2' for your database | 29,478,400 | 14 | 2015-04-06T19:41:10Z | 30,435,113 | 57 | 2015-05-25T09:28:33Z | [
"python",
"django",
"django-south"
] | I'm new to Django and I'm getting this error from South, but I don't know what I'm missing. I searched for answers but couldn't find anything.
```
There is no South database module 'south.db.postgresql_psycopg2' for your database. Please either choose a supported database, check for SOUTH_DATABASE_ADAPTER[S] settings, or remove South from INSTALLED_APPS.
```
This is my base\_settings:
```
from unipath import Path
BASE_DIR = Path(__file__).ancestor(3)
SECRET_KEY = 'pp@iz7%bc7%+*11%usf7o@_e&)r2o&^3%zjse)n=6b&w^hem96'
DJANGO_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
)
THIRD_PARTY_APPS = (
    'south',
)
LOCAL_APPS = (
)
INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'misite.urls'
WSGI_APPLICATION = 'misite.wsgi.application'
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
```
local\_settings:
```
from .base import *
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
DEBUG = True
DEFAULT_FROM_EMAIL = '[email protected]'
# ----------------------------
# POSTGRESQL
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'misite',
        'USER': 'fernandoperez',
        'PASSWORD': 'admin',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
SOUTH_DATABASE_ADAPTERS = {
    'default': "south.db.postgresql_psycopg2"
}
STATIC_URL = '/static/'
```
Can someone help me? Thanks a lot. | Try `pip uninstall South`; then you will see whether your application is Django 1.8-ready. (Django 1.7+ ships its own migrations framework, so South is no longer needed.)
Docker Compose vs. Dockerfile - which is better? | 29,480,099 | 24 | 2015-04-06T21:36:43Z | 29,487,120 | 22 | 2015-04-07T08:28:57Z | [
"python",
"django",
"docker",
"boot2docker",
"docker-compose"
] | I have been reading up and learning about [Docker](https://docs.docker.com/), and am trying to choose the right Django setup to use. So far the options are:
[Docker Compose](https://docs.docker.com/compose/) or [Dockerfile](https://docs.docker.com/reference/builder/)
I understand that `Dockerfiles` are used in `Docker Compose`, but I am not sure whether it is good practice to put everything in one large Dockerfile with multiple `FROM` commands for the different images.
I want to use several different images that include:
```
uwsgi
nginx
postgres
redis
rabbitmq
celery with cron
```
Please advise on best practices for setting up this type of environment using *Docker*.
If it helps, I am on a Mac, so using [boot2docker](https://github.com/boot2docker).
Some Issues I've had:
1. Docker Compose is not compatible with Python3
2. I want to containerize my project, so if one large Dockerfile
is not ideal, then I feel I'd need to break it up using Docker Compose
3. I am OK with making the project Py2 & Py3 compatible, so am leaning towards docker-compose | The answer is neither.
Docker Compose (herein referred to as compose) will use the Dockerfile if you add the build command to your project's `docker-compose.yml`.
Your Docker workflow should be to build a suitable `Dockerfile` for each image you wish to create, then use compose to assemble the images using the `build` command.
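As a concrete sketch (service names and paths here are illustrative, not from the question), a `docker-compose.yml` in the Compose v1 syntax current at the time might look like:

```
web:
  build: ./web          # ./web/Dockerfile builds the Django + uWSGI image
  links:
    - db
    - redis
db:
  image: postgres
redis:
  image: redis
```

Each service either builds from its own `Dockerfile` (`build:`) or pulls a stock image (`image:`), which is why a single multi-`FROM` Dockerfile is unnecessary.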
You can specify the path to your individual Dockerfiles with `build: /path/to/dockerfiles/blah`, where `/path/to/dockerfiles/blah` is the directory containing blah's `Dockerfile`. |
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,481,228 | 173 | 2015-04-06T23:18:16Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for its entire length.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern and then checking the pattern against the rest of the string seems awfully slow. Multiply that by potentially hundreds of strings, and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | Here's a solution using regular expressions.
```
import re
REPEATER = re.compile(r"(.+?)\1+$")
def repeated(s):
    match = REPEATER.match(s)
    return match.group(1) if match else None
```
Iterating over the examples in the question:
```
examples = [
'0045662100456621004566210045662100456621',
'0072992700729927007299270072992700729927',
'001443001443001443001443001443001443001443',
'037037037037037037037037037037037037037037037',
'047619047619047619047619047619047619047619',
'002457002457002457002457002457002457002457',
'001221001221001221001221001221001221001221',
'001230012300123001230012300123001230012300123',
'0013947001394700139470013947001394700139470013947',
'001001001001001001001001001001001001001001001001001',
'001406469760900140646976090014064697609',
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
for e in examples:
    sub = repeated(e)
    if sub:
        print("%r: %r" % (e, sub))
    else:
        print("%r does not repeat." % e)
```
... produces this output:
```
'0045662100456621004566210045662100456621': '00456621'
'0072992700729927007299270072992700729927': '00729927'
'001443001443001443001443001443001443001443': '001443'
'037037037037037037037037037037037037037037037': '037'
'047619047619047619047619047619047619047619': '047619'
'002457002457002457002457002457002457002457': '002457'
'001221001221001221001221001221001221001221': '001221'
'001230012300123001230012300123001230012300123': '00123'
'0013947001394700139470013947001394700139470013947': '0013947'
'001001001001001001001001001001001001001001001001001': '001'
'001406469760900140646976090014064697609': '0014064697609'
'004608294930875576036866359447' does not repeat.
'00469483568075117370892018779342723' does not repeat.
'004739336492890995260663507109' does not repeat.
'001508295625942684766214177978883861236802413273' does not repeat.
'007518796992481203' does not repeat.
'0071942446043165467625899280575539568345323741' does not repeat.
'0434782608695652173913' does not repeat.
'0344827586206896551724137931' does not repeat.
'002481389578163771712158808933' does not repeat.
'002932551319648093841642228739' does not repeat.
'0035587188612099644128113879' does not repeat.
'003484320557491289198606271777' does not repeat.
'00115074798619102416570771' does not repeat.
```
The regular expression `(.+?)\1+$` is divided into three parts:
1. `(.+?)` is a matching group containing at least one (but as few as possible) of any character (because [`+?` is non-greedy](https://docs.python.org/3/library/re.html#regular-expression-syntax)).
2. `\1+` checks for at least one repetition of the matching group in the first part.
3. `$` checks for the end of the string, to ensure that there's no extra, non-repeating content after the repeated substrings (and using [`re.match()`](https://docs.python.org/3/library/re.html#re.match) ensures that there's no non-repeating text *before* the repeated substrings).
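The non-greediness is what makes the capture the *shortest* repeating unit. A quick sketch of the difference (hand-checked):

```python
import re

s = '001001001001'
# Non-greedy: the capture group grows only as much as needed, so the
# shortest unit that still satisfies \1+$ wins.
print(re.match(r'(.+?)\1+$', s).group(1))  # '001'
# Greedy: the capture group grabs as much as possible first, so the
# longest unit that still leaves at least one repetition wins.
print(re.match(r'(.+)\1+$', s).group(1))   # '001001'
```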
In Python 3.4 and later, you could drop the `$` and use [`re.fullmatch()`](https://docs.python.org/3/library/re.html#re.fullmatch) instead, or (in any Python at least as far back as 2.3) go the other way and use [`re.search()`](https://docs.python.org/3/library/re.html#re.search) with the regex `^(.+?)\1+$`, all of which are more down to personal taste than anything else. |
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,481,244 | 37 | 2015-04-06T23:20:38Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for its entire length.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern and then checking the pattern against the rest of the string seems awfully slow. Multiply that by potentially hundreds of strings, and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | Non-regex solution:
```
def repeat(string):
    for i in range(1, len(string)//2+1):
        if not len(string)%len(string[0:i]) and string[0:i]*(len(string)//len(string[0:i])) == string:
            return string[0:i]
```
Faster non-regex solution, thanks to @ThatWeirdo (see comments):
```
def repeat(string):
    l = len(string)
    for i in range(1, len(string)//2+1):
        if l%i: continue
        s = string[0:i]
        if s*(l//i) == string:
            return s
```
The above solution is very rarely slower than the original by a few percent, but it's usually a good bit faster - sometimes a whole lot faster. It's still not faster than davidism's for longer strings, and zero's regex solution is superior for short strings. It comes out to the fastest (according to davidism's test on github - see his answer) with strings of about 1000-1500 characters. Regardless, it's reliably second-fastest (or better) in all cases I tested. Thanks, ThatWeirdo.
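A minimal timing sketch of the two versions above (both redefined locally so the snippet stands alone; absolute numbers vary by machine):

```python
import timeit

def repeat_v1(string):
    # Original version: recomputes lengths and slices inside the condition.
    for i in range(1, len(string) // 2 + 1):
        if not len(string) % len(string[0:i]) and string[0:i] * (len(string) // len(string[0:i])) == string:
            return string[0:i]

def repeat_v2(string):
    # Faster version: hoists len(string) and skips non-divisor lengths early.
    l = len(string)
    for i in range(1, l // 2 + 1):
        if l % i:
            continue
        s = string[0:i]
        if s * (l // i) == string:
            return s

sample = '0013947' * 150  # a long repeating string, similar to the question's data
for fn in (repeat_v1, repeat_v2):
    print(fn.__name__, timeit.timeit(lambda: fn(sample), number=200))
```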
Test:
```
print(repeat('009009009'))
print(repeat('254725472547'))
print(repeat('abcdeabcdeabcdeabcde'))
print(repeat('abcdefg'))
print(repeat('09099099909999'))
print(repeat('02589675192'))
```
Results:
```
009
2547
abcde
None
None
None
``` |
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,481,262 | 89 | 2015-04-06T23:22:51Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for its entire length.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern and then checking the pattern against the rest of the string seems awfully slow. Multiply that by potentially hundreds of strings, and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | You can make the observation that for a string to be considered repeating, its length must be divisible by the length of its repeated sequence. Given that, here is a solution that generates the divisors of the length from `1` to `n // 2` inclusive, takes the prefix of each divisor's length, and tests whether repeating that prefix reproduces the original string:
```
from math import sqrt, floor

def divquot(n):
    if n > 1:
        yield 1, n
    swapped = []
    for d in range(2, int(floor(sqrt(n))) + 1):
        q, r = divmod(n, d)
        if r == 0:
            yield d, q
            swapped.append((q, d))
    while swapped:
        yield swapped.pop()

def repeats(s):
    n = len(s)
    for d, q in divquot(n):
        sl = s[0:d]
        if sl * q == s:
            return sl
    return None
```
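The ordering trick in `divquot` (yield the small divisors immediately, stash their large cofactors, then pop the stash) can be illustrated in isolation. This standalone sketch produces the same ascending order, and also skips the duplicate yield that `divquot` produces when `n` is a perfect square:

```python
from math import isqrt

def divisors_ascending(n):
    # All positive divisors of n except n itself, in ascending order:
    # small divisors are collected on the way up to sqrt(n), and their
    # cofactors are replayed in reverse to finish the ascent.
    small, large = [], []
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            small.append(d)
            q = n // d
            if q != d and q != n:
                large.append(q)
    return small + large[::-1]

print(divisors_ascending(12))  # [1, 2, 3, 4, 6]
print(divisors_ascending(9))   # [1, 3]
```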
**EDIT:** In Python 3, the `/` operator has changed to do float division by default. To get the `int` division from Python 2, you can use the `//` operator instead. Thank you to @TigerhawkT3 for bringing this to my attention.
The `//` operator performs integer division in both Python 2 and Python 3, so the updated code supports both versions.
**UPDATE:** In response to a change in the original question, the code has now been updated to return the smallest repeating substring if it exists and `None` if it does not. @godlygeek has suggested using `divmod` to reduce the number of iterations on the `divquot` generator, and the code has been updated to match that as well. It now returns all positive divisors of `n` in ascending order, exclusive of `n` itself.
**Further update for high performance:** After multiple tests, I've come to the conclusion that simply testing for string equality has the best performance out of any slicing or iterator solution in Python. Thus, I've taken a leaf out of @TigerhawkT3 's book and updated my solution. It's now over 6x as fast as before, noticably faster than Tigerhawk's solution but slower than David's. |
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,482,465 | 16 | 2015-04-07T01:55:31Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for its entire length.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern and then checking the pattern against the rest of the string seems awfully slow. Multiply that by potentially hundreds of strings, and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | Here's a straightforward solution, without regexes.
For substrings of `s` starting at index zero, with lengths 1 through `len(s)`, check whether that substring, `substr`, is the repeated pattern. The check concatenates `substr` with itself `ratio` times, so that the length of the resulting string equals the length of `s`. Hence `ratio = len(s) // len(substr)`.
Return when first such substring is found. This would provide the smallest possible substring, if one exists.
```
def check_repeat(s):
    for i in range(1, len(s)):
        substr = s[:i]
        ratio = len(s) // len(substr)
        if substr * ratio == s:
            print('Repeating on "%s"' % substr)
            return
    print('Non repeating')

>>> check_repeat('254725472547')
Repeating on "2547"
>>> check_repeat('abcdeabcdeabcdeabcde')
Repeating on "abcde"
``` |
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,482,830 | 23 | 2015-04-07T02:42:00Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for its entire length.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern and then checking the pattern against the rest of the string seems awfully slow. Multiply that by potentially hundreds of strings, and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | First, halve the string as long as it's a "2 part" duplicate. This reduces the search space if there is an even number of repeats. Then, working forwards to find the smallest repeating string, check whether splitting the full string by increasingly larger sub-strings results in only empty values. Only sub-strings up to `length // 2` need to be tested, since anything longer would have no repeats.
```
def shortest_repeat(orig_value):
    if not orig_value:
        return None
    value = orig_value
    while True:
        len_half = len(value) // 2
        first_half = value[:len_half]
        if first_half != value[len_half:]:
            break
        value = first_half
    len_value = len(value)
    split = value.split
    for i in (i for i in range(1, len_value // 2) if len_value % i == 0):
        if not any(split(value[:i])):
            return value[:i]
    return value if value != orig_value else None
```
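The halving step at the top of the function can be illustrated on its own (a sketch of just that phase, not the full factor search):

```python
def halve_while_doubled(value):
    # Repeatedly strip exact "2-part" duplication:
    # 'abcabcabcabc' -> 'abcabc' -> 'abc' (stops when the halves differ).
    while len(value) > 1:
        half = len(value) // 2
        if value[:half] != value[half:]:
            break
        value = value[:half]
    return value

print(halve_while_doubled('abcabcabcabc'))  # 'abc'
print(halve_while_doubled('001221001221'))  # '001221'
```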
This returns the shortest match or None if there is no match. |
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,482,936 | 81 | 2015-04-07T02:55:24Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for its entire length.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern and then checking the pattern against the rest of the string seems awfully slow. Multiply that by potentially hundreds of strings, and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | Here are some benchmarks for the various answers to this question. There were some surprising results, including wildly different performance depending on the string being tested.
Some functions were modified to work with Python 3 (mainly by replacing `/` with `//` to ensure integer division). If you see something wrong, want to add your function, or want to add another test string, ping @ZeroPiraeus in the [Python chatroom](http://chat.stackoverflow.com/rooms/6/python).
In summary: there's about a 50x difference between the best- and worst-performing solutions for the large set of example data supplied by OP [here](http://paste.ubuntu.com/10765231/) (via [this](http://stackoverflow.com/questions/29481088/how-can-i-tell-if-a-string-repeats-itself-in-python#comment47156601_29481088) comment). [David Zhang's solution](http://stackoverflow.com/a/29489919) is the clear winner, outperforming all others by around 5x for the large example set.
A couple of the answers are *very* slow in extremely large "no match" cases. Otherwise, the functions seem to be equally matched or clear winners depending on the test.
Here are the results, including plots made using matplotlib and seaborn to show the different distributions:
---
**Corpus 1 (supplied examples - small set)**
```
mean performance:
0.0003 david_zhang
0.0009 zero
0.0013 antti
0.0013 tigerhawk_2
0.0015 carpetpython
0.0029 tigerhawk_1
0.0031 davidism
0.0035 saksham
0.0046 shashank
0.0052 riad
0.0056 piotr
median performance:
0.0003 david_zhang
0.0008 zero
0.0013 antti
0.0013 tigerhawk_2
0.0014 carpetpython
0.0027 tigerhawk_1
0.0031 davidism
0.0038 saksham
0.0044 shashank
0.0054 riad
0.0058 piotr
```
![Corpus 1 benchmark distributions](http://i.stack.imgur.com/Xx34F.png)
---
**Corpus 2 (supplied examples - large set)**
```
mean performance:
0.0006 david_zhang
0.0036 tigerhawk_2
0.0036 antti
0.0037 zero
0.0039 carpetpython
0.0052 shashank
0.0056 piotr
0.0066 davidism
0.0120 tigerhawk_1
0.0177 riad
0.0283 saksham
median performance:
0.0004 david_zhang
0.0018 zero
0.0022 tigerhawk_2
0.0022 antti
0.0024 carpetpython
0.0043 davidism
0.0049 shashank
0.0055 piotr
0.0061 tigerhawk_1
0.0077 riad
0.0109 saksham
```
![Corpus 2 benchmark distributions](http://i.stack.imgur.com/KZgxr.png)
---
**Corpus 3 (edge cases)**
```
mean performance:
0.0123 shashank
0.0375 david_zhang
0.0376 piotr
0.0394 carpetpython
0.0479 antti
0.0488 tigerhawk_2
0.2269 tigerhawk_1
0.2336 davidism
0.7239 saksham
3.6265 zero
6.0111 riad
median performance:
0.0107 tigerhawk_2
0.0108 antti
0.0109 carpetpython
0.0135 david_zhang
0.0137 tigerhawk_1
0.0150 shashank
0.0229 saksham
0.0255 piotr
0.0721 davidism
0.1080 zero
1.8539 riad
```
![Corpus 3 benchmark distributions](http://i.stack.imgur.com/w0qqT.png)
---
The tests and raw results are available [here](https://bitbucket.org/snippets/schesis/nMnR/benchmarking-answers-to-http). |
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,484,962 | 16 | 2015-04-07T06:13:35Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for its entire length.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern and then checking the pattern against the rest of the string seems awfully slow. Multiply that by potentially hundreds of strings, and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | This version tries only those candidate sequence lengths that are factors of the string length, and uses the `*` operator to build a full-length string from the candidate sequence:
```
def get_shortest_repeat(string):
    length = len(string)
    for i in range(1, length // 2 + 1):
        if length % i:  # skip non-factors early
            continue
        candidate = string[:i]
        if string == candidate * (length // i):
            return candidate
    return None
```
Thanks to TigerhawkT3 for noticing that `length // 2` without `+ 1` would fail to match the `abab` case. |
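A quick sanity check of this approach; the function is restated here so the snippet runs standalone:

```python
def get_shortest_repeat(string):
    length = len(string)
    for i in range(1, length // 2 + 1):
        if length % i:  # only candidate lengths that divide the string length
            continue
        candidate = string[:i]
        if string == candidate * (length // i):
            return candidate
    return None

print(get_shortest_repeat('001001001'))   # '001'
print(get_shortest_repeat('abab'))        # 'ab'
print(get_shortest_repeat('0123456789'))  # None
```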
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,489,919 | 536 | 2015-04-07T10:58:10Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for the entire string or not.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern then checking the pattern vs the rest of the string seems awful slow. Multiply that by potentially hundreds of strings and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | Here's a concise solution which avoids regular expressions and slow in-Python loops:
```
def principal_period(s):
i = (s+s).find(s, 1, -1)
return None if i == -1 else s[:i]
```
See the [Community Wiki answer](http://stackoverflow.com/a/29482936/2447250) started by @davidism for benchmark results. In summary,
> David Zhang's solution is the clear winner, outperforming all others by at least 5x for the large example set.
(That answer's words, not mine.)
This is based on the observation that a string is periodic if and only if it is equal to a nontrivial rotation of itself. Kudos to @AleksiTorhamo for realizing that we can then recover the principal period from the index of the first occurrence of `s` in `(s+s)[1:-1]`, and for informing me of the optional `start` and `end` arguments of Python's `string.find`. |
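For reference, the rotation trick in action (restated so the example is self-contained):

```python
def principal_period(s):
    i = (s + s).find(s, 1, -1)
    return None if i == -1 else s[:i]

print(principal_period('037037037037037'))  # '037'
print(principal_period('abab'))             # 'ab'
print(principal_period('abcd'))             # None
```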
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,519,746 | 16 | 2015-04-08T16:04:38Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for the entire string or not.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern then checking the pattern vs the rest of the string seems awful slow. Multiply that by potentially hundreds of strings and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | The problem may also be solved in `O(n)` in worst case with prefix function.
Note that it may be slower in the general case (UPD: and in fact is, much slower) than the other solutions, which depend on the number of divisors of `n`, but their `find` usually fails sooner. I think one of the bad cases for them would be `aaa....aab`, where there are `n - 1 = 2 * 3 * 5 * 7 * ... * p_n - 1` `a`'s.
First of all you need to calculate prefix function
```
def prefix_function(s):
n = len(s)
pi = [0] * n
for i in xrange(1, n):
j = pi[i - 1]
while(j > 0 and s[i] != s[j]):
j = pi[j - 1]
if (s[i] == s[j]):
j += 1
pi[i] = j;
return pi
```
then either there's no answer or the shortest period is
```
k = len(s) - prefix_function(s)[-1]
```
and you just have to check whether `k != n and n % k == 0`; if that holds, the answer is `s[:k]`, otherwise there is no answer.
You may check the proof [here](http://e-maxx.ru/algo/prefix_function#header_12) (in Russian, but online translator will probably do the trick)
```
def riad(s):
n = len(s)
pi = [0] * n
for i in xrange(1, n):
j = pi[i - 1]
while(j > 0 and s[i] != s[j]):
j = pi[j - 1]
if (s[i] == s[j]):
j += 1
pi[i] = j;
k = n - pi[-1]
return s[:k] if (n != k and n % k == 0) else None
``` |
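The snippets above use `xrange`, so they are Python 2; a Python 3 sketch of the same idea (renamed `shortest_period` here) for quick checking:

```python
def shortest_period(s):
    n = len(s)
    pi = [0] * n
    for i in range(1, n):
        j = pi[i - 1]
        while j > 0 and s[i] != s[j]:
            j = pi[j - 1]
        if s[i] == s[j]:
            j += 1
        pi[i] = j
    k = n - pi[-1]  # candidate period from the last prefix-function value
    return s[:k] if (n != k and n % k == 0) else None

print(shortest_period('001221001221'))  # '001221'
print(shortest_period('aaab'))          # None
```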
How can I tell if a string repeats itself in Python? | 29,481,088 | 320 | 2015-04-06T23:02:09Z | 29,543,743 | 9 | 2015-04-09T16:15:01Z | [
"python",
"string",
"pattern-matching"
] | I'm looking for a way to test whether or not a given string repeats itself for the entire string or not.
Examples:
```
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
```
are strings which repeat themselves, and
```
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
```
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern then checking the pattern vs the rest of the string seems awful slow. Multiply that by potentially hundreds of strings and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is? | I started with more than eight solutions to this problem. Some were bases on regex (match, findall, split), some of string slicing and testing, and some with string methods (find, count, split). Each had benefits in code clarity, code size, speed and memory consumption. I was going to post my answer here when I noticed that execution speed was ranked as important, so I did more testing and improvement to arrive at this:
```
def repeating(s):
size = len(s)
incr = size % 2 + 1
for n in xrange(1, size//2+1, incr):
if size % n == 0:
if s[:n] * (size//n) == s:
return s[:n]
```
This answer seems similar to a few other answers here, but it has a few speed optimisations others have not used:
* `xrange` is a little faster in this application,
* if an input string is an odd length, do not check any even length substrings,
* by using `s[:n]` directly, we avoid creating a variable in each loop.
I would be interested to see how this performs in the standard tests with common hardware. I believe it will be well short of David Zhang's excellent algorithm in most tests, but should be quite fast otherwise.
I found this problem to be very counter-intuitive. The solutions I thought would be fast were slow. The solutions that looked slow were fast! It seems that Python's string creation with the multiply operator and string comparisons are highly optimised. |
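Under Python 3, where `xrange` is gone, the same function might look like this (a sketch, behavior otherwise unchanged):

```python
def repeating(s):
    size = len(s)
    incr = size % 2 + 1  # odd-length strings skip even candidate lengths
    for n in range(1, size // 2 + 1, incr):
        if size % n == 0 and s[:n] * (size // n) == s:
            return s[:n]
    return None

print(repeating('047619047619'))            # '047619'
print(repeating('0434782608695652173913'))  # None
```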
Unable to upgrade python six package in mac osx 10.10.2 | 29,485,741 | 7 | 2015-04-07T07:04:42Z | 29,498,533 | 10 | 2015-04-07T18:24:28Z | [
"python",
"osx",
"pip",
"six"
] | I am trying to install latest version of six python package but I have following issues. Can't get rid of six 1.4.1 in mac OSX 10.10.2
```
sudo pip install six --upgrade
Requirement already up-to-date: six in /Library/Python/2.7/site-packages
Cleaning up...
pip search six
six - Python 2 and 3 compatibility utilities
INSTALLED: 1.9.0 (latest)
python -c "import six; print six.version"
1.4.1
which -a python
/usr/bin/python
which -a pip
/usr/local/bin/pip
```
What is wrong here? Can't upgrade six! | Your `pip` binary belongs to `/usr/local/bin/python`, whereas `python` points to `/usr/bin/python`. As a consequence
```
pip install --upgrade six
```
will upgrade six for `/usr/local/bin/python`, not for the `/usr/bin/python` that `python` runs.
The command below will make sure that the right version of pip is used:
```
python -m pip install --upgrade six
``` |
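When several Python installations shadow each other like this, it can also help to ask the interpreter which copy of a module it would actually import. A small illustrative snippet, using `json` as a stand-in for `six`:

```python
import importlib.util

spec = importlib.util.find_spec("json")  # "json" stands in for "six" here
print(spec.origin)  # path of the file this interpreter would actually import
```

Running this with each of your interpreters (`/usr/bin/python`, `/usr/local/bin/python`) shows which site-packages directory wins.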
Unable to upgrade python six package in mac osx 10.10.2 | 29,485,741 | 7 | 2015-04-07T07:04:42Z | 33,956,146 | 8 | 2015-11-27T11:13:58Z | [
"python",
"osx",
"pip",
"six"
] | I am trying to install latest version of six python package but I have following issues. Can't get rid of six 1.4.1 in mac OSX 10.10.2
```
sudo pip install six --upgrade
Requirement already up-to-date: six in /Library/Python/2.7/site-packages
Cleaning up...
pip search six
six - Python 2 and 3 compatibility utilities
INSTALLED: 1.9.0 (latest)
python -c "import six; print six.version"
1.4.1
which -a python
/usr/bin/python
which -a pip
/usr/local/bin/pip
```
What is wrong here? Can't upgrade six! | For me, just using [homebrew](http://brew.sh/) fixed everything.
```
brew install python
``` |
Pypi: Not allowed to store or edit package information | 29,485,874 | 4 | 2015-04-07T07:13:11Z | 29,562,043 | 7 | 2015-04-10T12:54:50Z | [
"python",
"packages",
"setuptools",
"pypi"
Pypi problems: Not allowed to store or edit package information. I'm following [this tutorial](http://peterdowns.com/posts/first-time-with-pypi.html).
.pypirc
```
[distutils]
index-servers =
pypi
pypitest
[pypi]
respository: https://pypi.python.org/pypi
username: Redacted
password: Redacted
[pypitest]
respository: https://testpypi.python.org/pypi
username: Redacted
password: Redacted
```
setup.py
```
from setuptools import setup, find_packages
with open('README.rst') as f:
readme = f.read()
setup(
name = "quick",
version = "0.1",
packages = find_packages(),
install_requires = ['numba>=0.17.0',
'numpy>=1.9.1',],
url = 'https://github.com/David-OConnor/quick',
description = "Fast implementation of numerical functions using Numba",
long_description = readme,
license = "apache",
keywords = "fast, numba, numerical, optimized",
)
```
Command:
```
python setup.py register -r pypitest
```
Error:
```
Server response (403): You are not allowed to store 'quick' package information
```
I was able to successfully register using the form on pypi's test site, but when I upload using this:
```
python setup.py sdist upload -r pypitest
```
I get this, similiar, message:
```
error: HTTP Error 403: You are not allowed to edit 'quick' package information
```
I get the same error message when using Twine and Wheel, per [these instructions](https://python-packaging-user-guide.readthedocs.org/en/latest/distributing.html#requirements-for-packaging-and-distributing). This problem comes up several times here and elsewhere, and has been resolved by registering before uploading, and verifying the PyPi account via email. I'm running into something else. | From this list one can see all the packages on PyPi:
<https://pypi.python.org/simple/>
*quick* is there. The question author cannot register the *quick* package because he/she is not its owner on PyPI: somebody else created a package with that name before.
Bare words / new keywords in Python | 29,492,895 | 5 | 2015-04-07T13:31:10Z | 29,492,897 | 7 | 2015-04-07T13:31:10Z | [
"python",
"keyword",
"bareword"
] | I wanted to see if it was possible to define new keywords or, as they're called in [WAT's Destroy All Software talk](https://www.destroyallsoftware.com/talks/wat) when discussing Ruby, bare words, in Python.
I came up with an answer that I couldn't find elsewhere, so I decided to share it Q&A style on StackOverflow. | I've only tried this in the REPL, outside any block, so far. It may be possible to make it work elsewhere, too.
I put this in my python startup file:
```
def bareWordsHandler(type_, value, traceback_):
if isinstance(value, SyntaxError):
import traceback
# You can probably modify this next line so that it'll work within blocks, as well as outside them:
bareWords = traceback.format_exception(type_, value, traceback_)[1].split()
# At this point we have the raw string that was entered.
# Use whatever logic you want on it to decide what to do.
if bareWords[0] == 'Awesome':
print(' '.join(bareWords[1:]).upper() + '!')
return
bareWordsHandler.originalExceptHookFunction(type_, value, traceback_)
import sys
bareWordsHandler.originalExceptHookFunction = sys.excepthook
sys.excepthook = bareWordsHandler
```
Quick REPL session demonstration afterwards:
```
>>> Awesome bare words
BARE WORDS!
```
Use responsibly.
Edit: Here's a more useful example. I added in a `run` keyword.
```
if bareWords[0] == 'from' and bareWords[2] == 'run':
atPrompt.autoRun = ['from ' + bareWords[1] + ' import ' + bareWords[3].split('(')[0],
' '.join(bareWords[3:])]
return
```
`atPrompt.autoRun` is a list of variables that, when my prompt is displayed, will automatically be checked and fed back. So, for example, I can do this:
```
>>> from loadBalanceTester run loadBalancerTest(runJar = False)
```
And this gets interpreted as:
```
from loadBalancerTest import loadBalancerTest
loadBalancerTest(runJar = False)
```
It's kind of like a macro - it's common for me to want to do this kind of thing, so I decided to add in a keyword that lets me do it in fewer keystrokes. |
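The trick hinges on `sys.excepthook`, which receives every uncaught exception; a minimal sketch of swapping it out and back (invoking the hook by hand for demonstration):

```python
import sys

seen = []

def quiet_hook(type_, value, tb):
    seen.append(str(value))  # record the exception instead of printing a traceback

original = sys.excepthook
sys.excepthook = quiet_hook
sys.excepthook(ValueError, ValueError("boom"), None)  # invoke the hook by hand
sys.excepthook = original
print(seen)  # ['boom']
```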
How to compile OpenCV with OpenMP | 29,494,503 | 2 | 2015-04-07T14:49:05Z | 30,619,632 | 7 | 2015-06-03T12:05:21Z | [
"python",
"c++",
"opencv",
"raspberry-pi",
"raspberry-pi2"
] | A user in [this SOF post](http://stackoverflow.com/questions/28938644/opencv-multi-core-support) suggests building OpenCV with a `WITH_OPENMP` flag to enable (some) multi-core support. I have tried building OpenCV-2.4.10 with OpenMP but I am unable to then import cv2 in Python.
**Note:** I am able to build and use OpenCV-2.4.10 in Python. The problem is building with the `WITH_OPENMP` flag.
I am replacing lines 49-58 in `opencv-2.4.10/cmake/OpenCVFindLibsPerf.cmake`, as suggested in [this](http://answers.opencv.org/question/20955/enabling-openmp-while-building-opencv-libraries/) blog post, with the following:
```
# --- OpenMP ---
if(NOT HAVE_TBB AND NOT HAVE_CSTRIPES)
include (FindOpenMP) # --- since cmake version 2.6.3
if (OPENMP_FOUND)
set (HAVE_OPENMP TRUE)
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
else()
set ( HAVE_OPENMP FALSE)
endif()
else()
set(HAVE_OPENMP 0)
endif()
```
And then executing this command before building:
```
cmake -D WITH_OPENMP=ON -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_NEW_PYTHON_SUPPORT=ON \
-D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=ON ..
```
Is `WITH_OPENMP=ON` the correct flag to use here? Does the problem lie with using OpenCV-2.4.10? (I notice the blog post I referenced is with OpenCV-2.4.6.1) | I had a similiar question. cmake with -D WITH\_OPENMP=ON worked fine for me. |
Asyncio Making HTTP Requests Slower? | 29,495,598 | 5 | 2015-04-07T15:43:03Z | 29,498,115 | 7 | 2015-04-07T18:01:34Z | [
"python",
"performance",
"python-3.x",
"benchmarking",
"python-asyncio"
] | I'm using Asyncio and Requests to benchmark a series of HTTP requests.
For some reason, it's slightly slower to use Asyncio than just straight Requests. Any idea why? Am I using Asyncio incorrectly?
```
import asyncio
import functools
import requests
import time
ts = time.time()
for i in range(10):
@asyncio.coroutine
def do_checks():
loop = asyncio.get_event_loop()
req = loop.run_in_executor(None, functools.partial(requests.get, "http://google.com", timeout=3))
resp = yield from req
print(resp.status_code)
loop = asyncio.get_event_loop()
loop.run_until_complete(do_checks())
te = time.time()
print("Version A: " + str(te - ts))
ts = time.time()
for i in range(10):
r = requests.get("http://google.com", timeout=3)
print(r.status_code)
te = time.time()
print("Version B: " + str(te - ts))
```
Output:
Version A = Asyncio; Version B = Requests
```
200
200
200
200
200
200
200
200
200
200
Version A: 5.7215821743011475
200
200
200
200
200
200
200
200
200
200
Version B: 5.320340156555176
``` | You are waiting for each request to finish before you start the next one. So you have the overhead of the event loop with no benefits.
Try this:
```
import asyncio
import functools
import requests
import time
ts = time.time()
loop = asyncio.get_event_loop()
@asyncio.coroutine
def do_checks():
futures = []
for i in range(10):
futures.append(loop.run_in_executor(None, functools.partial(requests.get, "http://google.com", timeout=3)))
for req in asyncio.as_completed(futures):
resp = yield from req
print(resp.status_code)
loop.run_until_complete(do_checks())
te = time.time()
print("Version A: " + str(te - ts))
ts = time.time()
for i in range(10):
r = requests.get("http://google.com", timeout=3)
print(r.status_code)
te = time.time()
print("Version B: " + str(te - ts))
```
This is what I get when I run it:
```
$ python test.py
200
...
Version A: 0.43438172340393066
200
...
Version B: 1.6541109085083008
```
Much faster, but really this is just spawning threads and waiting for the http library to finish, you don't need `asyncio` to do that.
You might want to check out `aiohttp`, as it was built for use with `asyncio`. `requests` is a fabulous library, but it is not made for `asyncio`.
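The speedup here comes purely from overlapping the blocking calls in threads; a network-free sketch of the same effect, using `time.sleep` as a stand-in for the HTTP request:

```python
import concurrent.futures
import time

def fake_request(i):
    time.sleep(0.1)  # stands in for a blocking HTTP call
    return i

start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    # all ten "requests" run concurrently, so this takes ~0.1 s, not ~1 s
    results = list(pool.map(fake_request, range(10)))
elapsed = time.time() - start

print(results)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```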
Plot bar graph from Pandas DataFrame | 29,498,652 | 7 | 2015-04-07T18:30:59Z | 29,499,109 | 10 | 2015-04-07T18:56:00Z | [
"python",
"pandas",
"plot"
] | Assuming i have a `DataFrame` that looks like this:
```
Hour | V1 | V2 | A1 | A2
0 | 15 | 13 | 25 | 37
1 | 26 | 52 | 21 | 45
2 | 18 | 45 | 45 | 25
3 | 65 | 38 | 98 | 14
```
Im trying to create a bar plot to compare columns `V1` and `V2` by the `Hour`.
When I do:
```
import matplotlib.pyplot as plt
ax = df.plot(kind='bar', title ="V comp",figsize=(15,10),legend=True, fontsize=12)
ax.set_xlabel("Hour",fontsize=12)
ax.set_ylabel("V",fontsize=12)
```
I get a plot and a legend with all the columns' values and names. How can I modify my code so the plot and legend only displays the columns `V1` and `V2` | To plot just a selection of your columns you can select the columns of interest by passing a list to the subscript operator:
```
ax = df[['V1','V2']].plot(kind='bar', title ="V comp",figsize=(15,10),legend=True, fontsize=12)
```
What you tried was `df['V1','V2']`; this raises a `KeyError`, correctly, since no column exists with that label. Although it looks funny at first, you have to consider that you are passing a list, hence the double square brackets `[[]]`.
```
import matplotlib.pyplot as plt
ax = df[['V1','V2']].plot(kind='bar', title ="V comp",figsize=(15,10),legend=True, fontsize=12)
ax.set_xlabel("Hour",fontsize=12)
ax.set_ylabel("V",fontsize=12)
plt.show()
```
 |
How to check if there is only consecutive ones and zeros in bytestream | 29,499,638 | 2 | 2015-04-07T19:25:23Z | 29,500,110 | 9 | 2015-04-07T19:52:44Z | [
"python",
"algorithm",
"bit-manipulation"
] | I want to check if an int/long consists only of one set of consecutive ones or zero. For example `111100000`, `1100000` but not `101000`.
I have a basic implementation as follows:
```
def is_consecutive(val):
count = 0
while val %2 == 0:
count += 1
val = val >> 1
return (0xFFFFFFFF >> count ) & val
```
Is there any better way to achieve this? | For a number `n`, `n | (n - 1)` will be all ones *if and only if* it meets the pattern you describe.
A number `x` that is all ones is one less than a power of two. You can [check for a power of two by ANDing it with itself minus one](https://graphics.stanford.edu/~seander/bithacks.html#DetermineIfPowerOf2). Or in other words, `x` is all one bits if `x & (x + 1) == 0`.
```
def is_consecutive(n):
x = n | (n - 1)
return x & (x + 1) == 0
```
This test program checks numbers against both a regex and `is_consecutive`, printing asterisks when each of the two tests passes.
```
#!/usr/bin/env python3
import re
def is_consecutive(n):
x = n | (n - 1)
return x & (x + 1) == 0
for n in range(64):
print('*' if re.fullmatch('0b1*0*', bin(n)) else ' ',
'*' if is_consecutive(n) else ' ',
n, bin(n))
```
Empirical testing confirms this works up to at least 64. As you can see, the asterisks match up perfectly.
```
* * 0 0b0
* * 1 0b1
* * 2 0b10
* * 3 0b11
* * 4 0b100
5 0b101
* * 6 0b110
* * 7 0b111
* * 8 0b1000
9 0b1001
10 0b1010
11 0b1011
* * 12 0b1100
13 0b1101
* * 14 0b1110
* * 15 0b1111
* * 16 0b10000
17 0b10001
18 0b10010
19 0b10011
20 0b10100
21 0b10101
22 0b10110
23 0b10111
* * 24 0b11000
25 0b11001
26 0b11010
27 0b11011
* * 28 0b11100
29 0b11101
* * 30 0b11110
* * 31 0b11111
* * 32 0b100000
33 0b100001
34 0b100010
35 0b100011
36 0b100100
37 0b100101
38 0b100110
39 0b100111
40 0b101000
41 0b101001
42 0b101010
43 0b101011
44 0b101100
45 0b101101
46 0b101110
47 0b101111
* * 48 0b110000
49 0b110001
50 0b110010
51 0b110011
52 0b110100
53 0b110101
54 0b110110
55 0b110111
* * 56 0b111000
57 0b111001
58 0b111010
59 0b111011
* * 60 0b111100
61 0b111101
* * 62 0b111110
* * 63 0b111111
``` |
How to install numpy on windows using pip install? | 29,499,815 | 10 | 2015-04-07T19:35:27Z | 29,503,549 | 7 | 2015-04-08T00:01:09Z | [
"python",
"python-2.7",
"visual-c++",
"numpy",
"pip"
] | I want to install numpy using `pip install numpy` command but i get follwing error:
```
RuntimeError: Broken toolchain: cannot link a simple C program
```
I'm using windows 7 32bit, python 2.7.9, pip 6.1.1 and some MSVC compiler. I think it uses compiler from Visual C++ 2010 Express, but actually I'm not sure which one because I have several visual studio installations.
I know that there are prebuilt packages for windows but I want to figure out if there is some way to do it just by typing `pip install numpy`?
Edit:
I think that there could be other packages which must be compiled before usage, so it's not only about numpy. I want to solve the problem with my compiler so I could easily install any other similar package without necessity to search for prebuilt packages (and hope that there is some at all) | Installing extension modules can be an issue with pip. This is why conda exists. conda is an open-source BSD-licensed cross-platform package manager. It can easily install NumPy.
Two options:
* Install Anaconda [here](http://www.continuum.io/downloads)
* Install Miniconda [here](http://repo.continuum.io/miniconda/index.html) and then go to a command-line and type `conda install numpy` (make sure your PATH includes the location conda was installed to). |
How to install numpy on windows using pip install? | 29,499,815 | 10 | 2015-04-07T19:35:27Z | 34,587,391 | 8 | 2016-01-04T08:45:38Z | [
"python",
"python-2.7",
"visual-c++",
"numpy",
"pip"
] | I want to install numpy using `pip install numpy` command but i get follwing error:
```
RuntimeError: Broken toolchain: cannot link a simple C program
```
I'm using windows 7 32bit, python 2.7.9, pip 6.1.1 and some MSVC compiler. I think it uses compiler from Visual C++ 2010 Express, but actually I'm not sure which one because I have several visual studio installations.
I know that there are prebuilt packages for windows but I want to figure out if there is some way to do it just by typing `pip install numpy`?
Edit:
I think that there could be other packages which must be compiled before usage, so it's not only about numpy. I want to solve the problem with my compiler so I could easily install any other similar package without necessity to search for prebuilt packages (and hope that there is some at all) | Frustratingly the Numpy package published to PyPI won't install on most Windows computers <https://github.com/numpy/numpy/issues/5479>
Instead:
1. Download the Numpy wheel for your Python version from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
2. Install it from the command line `pip install numpy-1.10.2+mkl-cp35-none-win_amd64.whl` |
Bradley adaptive thresholding algorithm | 29,502,241 | 7 | 2015-04-07T22:08:23Z | 29,503,184 | 7 | 2015-04-07T23:28:27Z | [
"python",
"python-imaging-library",
"adaptive-threshold"
] | I am currently working on implementing a thresholding algorithm called `Bradley Adaptive Thresholding`.
I have been following mainly two links in order to work out how to implement this algorithm. I have also successfully been able to implement two other thresholding algorithms, mainly, [Otsu's Method](http://en.wikipedia.org/wiki/Otsu%27s_method) and [Balanced Histogram Thresholding](http://en.wikipedia.org/wiki/Balanced_histogram_thresholding).
Here are the two links that I have been following in order to create the `Bradley Adaptive Thresholding` algorithm.
<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.420.7883&rep=rep1&type=pdf>
[Bradley Adaptive Thresholding Github Example](https://github.com/rmtheis/bradley-adaptive-thresholding/blob/master/main.cpp)
Here is the section of my source code in `Python` where I am running the algorithm and saving the image. I use the `Python Imaging Library` and no other tools to accomplish what I want to do.
```
def get_bradley_binary(inp_im):
w, h = inp_im.size
s, t = (w / 8, 0.15)
int_im = Image.new('L', (w, h))
out_im = Image.new('L', (w, h))
for i in range(w):
summ = 0
for j in range(h):
index = j * w + i
summ += get_pixel_offs(inp_im, index)
if i == 0:
set_pixel_offs(int_im, index, summ)
else:
temp = get_pixel_offs(int_im, index - 1) + summ
set_pixel_offs(int_im, index, temp)
for i in range(w):
for j in range(h):
index = j * w + i
x1,x2,y1,y2 = (i-s/2, i+s/2, j-s/2, j+s/2)
x1 = 0 if x1 < 0 else x1
x2 = w - 1 if x2 >= w else x2
y1 = 0 if y1 < 0 else y1
y2 = h - 1 if y2 >= h else y2
count = (x2 - x1) * (y2 - y1)
a1 = get_pixel_offs(int_im, y2 * w + x2)
a2 = get_pixel_offs(int_im, y1 * w + x2)
a3 = get_pixel_offs(int_im, y2 * w + x1)
a4 = get_pixel_offs(int_im, y1 * w + x1)
summ = a1 - a2 - a3 + a4
temp = get_pixel_offs(inp_im, index)
if temp * count < summ * (1.0 - t):
set_pixel_offs(out_im, index, 0)
else:
set_pixel_offs(out_im, index, 255)
return out_im
```
Here is the section of my code that illustrates the implementation of these set and get methods that you have not seen before.
```
def get_offs(image, x, y):
return y * image.size[0] + x
def get_xy(image, offs):
return (offs % image.size[0], int(offs / image.size[0]))
def set_pixel_xy(image, x, y, data):
image.load()[x, y] = data
def set_pixel_offs(image, offs, data):
x, y = get_xy(image, offs)
image.load()[x, y] = data
def get_pixel_offs(image, offs):
return image.getdata()[offs]
def get_pixel_xy(image, x, y):
return image.getdata()[get_offs(image, x, y)]
```
And finally, here are the input and output images. These are the same images that are used in the original research paper in the first link that I provided you. Note: The output image is almost completely white and it may be hard to see, but I provided it anyway in case anyone really wanted to have it for reference.

 | You cannot create the integral image with PIL the way that you are doing it because the image that you are packing data into cannot accept values over 255. The values in the integral image get very large because they are the sums of the pixels above and to the left (see page 3 of your white paper). They will grow much much larger than 255, so you need 32 bits per pixel to store them.
You can test this by creating a PIL image in "L" mode and then setting a pixel to 1000000 or some large number. Then when you read back the value, it will return 255.
```
>>> from PIL import Image
>>> img = Image.new('L', (100,100))
>>> img.putpixel((0,0), 100000)
>>> print list(img.getdata())[0]
255
```
EDIT: After reading the PIL documentation, you may be able to use PIL if you create your integral image in "I" mode instead of "L" mode. This should provide 32 bits per pixel.
For that reason I recommend Numpy instead of PIL.
Below is a rewrite of your threshold function using Numpy instead of PIL, and I get the correct/expected result. Notice that I create my integral image using a uint32 array. I used the exact same C example on Github that you used for your translation:
```
import numpy as np
def adaptive_thresh(input_img):
h, w = input_img.shape
S = w/8
s2 = S/2
T = 15.0
#integral img
int_img = np.zeros_like(input_img, dtype=np.uint32)
for col in range(w):
for row in range(h):
int_img[row,col] = input_img[0:row,0:col].sum()
#output img
out_img = np.zeros_like(input_img)
for col in range(w):
for row in range(h):
#SxS region
y0 = max(row-s2, 0)
y1 = min(row+s2, h-1)
x0 = max(col-s2, 0)
x1 = min(col+s2, w-1)
count = (y1-y0)*(x1-x0)
sum_ = int_img[y1, x1]-int_img[y0, x1]-int_img[y1, x0]+int_img[y0, x0]
if input_img[row, col]*count < sum_*(100.-T)/100.:
out_img[row,col] = 0
else:
out_img[row,col] = 255
return out_img
```
 |
aiohttp - exception ignored message | 29,502,779 | 7 | 2015-04-07T22:52:09Z | 29,503,234 | 7 | 2015-04-07T23:33:30Z | [
"python",
"python-3.x",
"python-asyncio",
"aiohttp"
] | I'm running the following code which makes 5 requests via aiohttp:
```
import aiohttp
import asyncio
def fetch_page(url, idx):
try:
url = 'http://google.com'
response = yield from aiohttp.request('GET', url)
print(response.status)
except Exception as e:
print(e)
def main():
try:
url = 'http://google.com'
urls = [url] * 5
coros = []
for idx, url in enumerate(urls):
coros.append(asyncio.Task(fetch_page(url, idx)))
yield from asyncio.gather(*coros)
except Exception as e:
print(e)
if __name__ == '__main__':
try:
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
except Exception as e:
print(e)
```
Output:
```
200
200
200
200
200
Exception ignored in: Exception ignored in: Exception ignored in: Exception ignored in: Exception ignored in:
```
Note: There is no additional information as to what/where the exception is.
What's causing this and are there any tips to debug it? | I'm not exactly sure why, but it seems that leaving the `aiohttp.ClientResponse` object open is causing an unraisable exception to be thrown when the interpreter exits. On my system, this results in warnings like this, rather than "Exception ignored in" messages:
```
sys:1: ResourceWarning: unclosed <socket object at 0x7f44fce557a8>
sys:1: ResourceWarning: unclosed <socket object at 0x7f44fce55718>
sys:1: ResourceWarning: unclosed <socket object at 0x7f44fcc24a78>
sys:1: ResourceWarning: unclosed <socket object at 0x7f44fcc248c8>
sys:1: ResourceWarning: unclosed <socket object at 0x7f44fcc24958>
sys:1: ResourceWarning: unclosed <socket object at 0x7f44fcc249e8>
sys:1: ResourceWarning: unclosed <socket object at 0x7f44fcc24b08>
```
In any case, you can fix it by explicitly closing the `ClientResponse` object at the end of `fetch_page`, by calling `response.close()`. |
using the * (splat) operator with print | 29,503,331 | 2 | 2015-04-07T23:40:29Z | 29,503,361 | 8 | 2015-04-07T23:43:08Z | [
"python",
"python-2.7"
] | I often use Python's `print` statement to display data. Yes, I know about the `'%s %d' % ('abc', 123)` method, and the `'{} {}'.format('abc', 123)` method, and the `' '.join(('abc', str(123)))` method. I also know that the splat operator (`*`) can be used to expand an iterable into function arguments. However, I can't seem to do that with the `print` statement. Using a list:
```
>>> l = [1, 2, 3]
>>> l
[1, 2, 3]
>>> print l
[1, 2, 3]
>>> '{} {} {}'.format(*l)
'1 2 3'
>>> print *l
File "<stdin>", line 1
print *l
^
SyntaxError: invalid syntax
```
Using a tuple:
```
>>> t = (4, 5, 6)
>>> t
(4, 5, 6)
>>> print t
(4, 5, 6)
>>> '%d %d %d' % t
'4 5 6'
>>> '{} {} {}'.format(*t)
'4 5 6'
>>> print *t
File "<stdin>", line 1
print *t
^
SyntaxError: invalid syntax
```
Am I missing something? Is this simply not possible? What exactly are the things that follow `print`? The [documentation](https://docs.python.org/2/reference/simple_stmts.html#the-print-statement) says that a comma-separated list of expressions follow the `print` keyword, but I am guessing this is not the same as a list data type. I did a lot of digging in SO and on the web and did not find a clear explanation for this.
I am using Python 2.7.6. | `print` is a statement in Python 2.x and does not support the `*` syntax. You can see this from the grammar for `print` listed in the [documentation](https://docs.python.org/2/reference/simple_stmts.html#grammar-token-print_stmt):
```
print_stmt ::= "print" ([expression ("," expression)* [","]]
| ">>" expression [("," expression)+ [","]])
```
Notice how there is no option for using `*` after the `print` keyword.
---
However, the `*` syntax *is* supported inside function calls and it just so happens that [`print` is a function in Python 3.x](https://docs.python.org/3/library/functions.html#print). This means that you could import it from [`__future__`](https://docs.python.org/2/library/__future__.html):
```
from __future__ import print_function
```
and then use:
```
print(*l)
```
Demo:
```
>>> # Python 2.x interpreter
>>> from __future__ import print_function
>>> l = [1, 2, 3]
>>> print(*l)
1 2 3
>>>
``` |
How to get all values from python enum class? | 29,503,339 | 12 | 2015-04-07T23:41:10Z | 29,503,414 | 7 | 2015-04-07T23:47:55Z | [
"python",
"django",
"enums"
] | I'm using Enum4 library to create an enum class as follows:
```
class Color(Enum):
RED = 1
BLUE = 2
```
I want to print `[1, 2]` as a list somewhere. How can I achieve this? | You can use [IntEnum](https://docs.python.org/3/library/enum.html#intenum):
```
from enum import IntEnum
class Color(IntEnum):
RED = 1
BLUE = 2
print(int(Color.RED)) # prints 1
```
To get list of the ints:
```
enum_list = list(map(int, Color))
print(enum_list) # prints [1, 2]
``` |
How to get all values from python enum class? | 29,503,339 | 12 | 2015-04-07T23:41:10Z | 29,503,454 | 35 | 2015-04-07T23:51:35Z | [
"python",
"django",
"enums"
] | I'm using Enum4 library to create an enum class as follows:
```
class Color(Enum):
RED = 1
BLUE = 2
```
I want to print `[1, 2]` as a list somewhere. How can I achieve this? | You can do the following:
```
[e.value for e in Color]
``` |
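A runnable sketch combining both answers (assuming Python 3.4+, or the `enum34` backport on older versions):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    BLUE = 2

# Iterating an Enum yields members in definition order.
values = [e.value for e in Color]
print(values)  # [1, 2]

names = [e.name for e in Color]
print(names)   # ['RED', 'BLUE']
```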
Why do new style class and old style class have different behavior in this case? | 29,511,332 | 11 | 2015-04-08T09:50:53Z | 29,519,325 | 10 | 2015-04-08T15:45:23Z | [
"python",
"python-3.x",
"python-2.x",
"python-internals"
] | I found something interesting, here is a snippet of code:
```
class A(object):
def __init__(self):
print "A init"
def __del__(self):
print "A del"
class B(object):
a = A()
```
If I run this code, I will get:
```
A init
```
But if I change `class B(object)` to `class B()`, I will get:
```
A init
A del
```
I found a note in the [\_\_del\_\_ doc](https://docs.python.org/2/reference/datamodel.html#object.__del__):
> It is not guaranteed that **del**() methods are called for objects
> that still exist when the interpreter exits.
Then, I guess it's because `B.a` is still referenced (by class `B`) when the interpreter exits.
So, I manually added a `del B` before the interpreter exits, and then I found `a.__del__()` was called.
Now, I am a little confused about that. Why is `a.__del__()` called when using old style class? Why do new and old style classes have different behavior?
I found a similar question [here](http://bytes.com/topic/python/answers/467672-__del__-not-called), but I think the answers are not clear enough. | TL;DR: this is an [old issue](http://bugs.python.org/issue1545463) in CPython, that was finally fixed in [CPython 3.4](https://docs.python.org/3/whatsnew/3.4.html#pep-442-safe-object-finalization). Objects kept live by reference cycles that are referred to by module globals are not properly finalized on interpreter exit in CPython versions prior to 3.4. New-style classes have implicit cycles in their `type` instances; old-style classes (of type `classobj`) do not have implicit reference cycles.
Even though fixed in this case, the CPython 3.4 documentation still recommends to not depend on [`__del__`](https://docs.python.org/3.4/reference/datamodel.html#object.__del__) being called on interpreter exit - consider yourself warned.
---
New style classes have reference cycles in themselves: most notably
```
>>> class A(object):
... pass
>>> A.__mro__[0] is A
True
```
This means that they cannot be deleted instantly\*, but only when the garbage collector is run. Since a reference to them is being held by the main module, they will stay in memory until the interpreter shutdown. At the end, during the module clean-up, all the module global names in the main are set to point to `None`, and whichever objects had their reference counts decreased to zero (your old-style class for example) were also deleted. However, the new-style classes, having reference cycles, would not be released/finalized by this.
The cyclic garbage collector would not be run at interpreter exit (which is allowed by the [CPython documentation](https://docs.python.org/3.4/reference/datamodel.html#object.__del__)):
> It is not guaranteed that `__del__()` methods are called for objects that still exist when the interpreter exits.
---
Now, old-style classes in Python 2 do not have implicit cycles. When the CPython module cleanup/shutdown code sets the global variables to `None`, the only remaining reference to class `B` is dropped; then `B` is deleted, and the last reference to `a` is dropped, and `a` too is finalized.
---
To demonstrate the fact that the new-style classes have cycles and require a GC sweep, whereas the old-style classes do not, you can try the following program in CPython 2 (CPython 3 does not have old-style classes any more):
```
import gc
class A(object):
def __init__(self):
print("A init")
def __del__(self):
print("A del")
class B(object):
a = A()
del B
print("About to execute gc.collect()")
gc.collect()
```
With `B` as new-style class as above, the output is
```
A init
About to execute gc.collect()
A del
```
With `B` as old-style class (`class B:`), the output is
```
A init
A del
About to execute gc.collect()
```
That is, the new-style class was deleted only after `gc.collect()` even though the last outside reference to it was dropped already; but the old-style class was deleted instantly.
---
Much of this is already [fixed](http://bugs.python.org/issue1545463) in [Python 3.4](https://docs.python.org/3.4/whatsnew/3.4.html#pep-442-safe-object-finalization): thanks to [PEP 442](https://www.python.org/dev/peps/pep-0442/), which included the [*module shutdown procedure based on GC code*](http://bugs.python.org/issue812369). Now even on interpreter exit the module globals are finalized using the ordinary garbage collection. If you run your program under Python 3.4, the program will print
```
A init
A del
```
Whereas with Python <=3.3 it will print
```
A init
```
*(**Do note** that other implementations may or may not execute `__del__` at this point, regardless of whether their version is above, at, or below 3.4.)* |
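You can verify the self-reference that makes new-style classes depend on the cyclic collector (a quick check, shown here on Python 3, where all classes are new-style):

```python
import gc

class A(object):
    pass

# The class's method resolution order starts with the class itself,
# so the type object participates in a reference cycle.
print(A.__mro__[0] is A)  # True

# Consequently CPython's cyclic garbage collector has to track it.
print(gc.is_tracked(A))   # True
```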
Is there a way to rerun an Upgrade Step in Plone? | 29,513,201 | 3 | 2015-04-08T11:21:14Z | 29,515,070 | 8 | 2015-04-08T12:48:09Z | [
"python",
"python-2.7",
"plone",
"plone-4.x"
] | I've got a Plone 4.2.4 application and from time to time I need to create an Upgrade Step.
So, I register it in the `configure.zcml`, create the function to invoke and increase the profile version number in the `metadata.xml` file.
However, it might happen that something does not go quite as expected during the upgrade process, and one would like to rerun the upgrade with a corrected Upgrade Step.
Is there a way to rerun the Upgrade Step or do I always need to increase the version and create new Upgrade Step to fix the previous one? | Go to `portal_setup` (from ZMI), then:
* go to the "Upgrades" tab
* select your profile (the one where you defined the `metadata.xml`)
From here you can normally run upgrade steps that have not yet been run. In your case, click the "Show" button under "Show old upgrades". |
Get request body as string in Django | 29,514,077 | 10 | 2015-04-08T12:02:53Z | 29,514,222 | 16 | 2015-04-08T12:10:00Z | [
"python",
"django"
] | I'm sending a POST request with JSON body to a Django server (fairly standard). On the server I need to decode this using `json.loads()`.
The problem is how do I get the body of the request in a string format?
I have the following code currently:
```
body_data = {}
if request.META.get('CONTENT_TYPE', '').lower() == 'application/json' and len(request.body) > 0:
try:
body_data = json.loads(request.body)
except Exception as e:
return HttpResponseBadRequest(json.dumps({'error': 'Invalid request: {0}'.format(str(e))}), content_type="application/json")
```
However, this gives an error `the JSON object must be str, not 'bytes'`.
How do I retrieve the body of the request as a string, with the correct encoding applied? | The request body, `request.body`, is a byte string. In Python 3, `json.loads()` will only accept a unicode string, so you must decode `request.body` before passing it to `json.loads()`.
```
body_unicode = request.body.decode('utf-8')
body_data = json.loads(body_unicode)
```
In Python 2, `json.loads` will accept a unicode string or a byte string, so the decode step is not necessary.
When decoding the string, I think you're safe to assume 'utf-8' - I can't find a definitive source for this, but see the quote below from the [jQuery docs](http://api.jquery.com/jquery.ajax/):
> Note: The W3C XMLHttpRequest specification dictates that the charset is always UTF-8; specifying another charset will not force the browser to change the encoding. |
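A self-contained illustration of the decode step (standard library only; the `body` bytes below stand in for `request.body`):

```python
import json

# Django hands you the raw POST payload as bytes:
body = b'{"name": "abc", "value": 123}'

# Python 3's json.loads expects str, so decode first:
body_unicode = body.decode('utf-8')
data = json.loads(body_unicode)
print(data['name'], data['value'])  # abc 123
```

Note that on Python 3.6+, `json.loads` also accepts `bytes` directly, but decoding explicitly keeps the intent clear and works on all Python 3 versions.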
"sudo pip install Django" => sudo: pip: command not found | 29,514,136 | 4 | 2015-04-08T12:06:02Z | 29,514,178 | 10 | 2015-04-08T12:08:00Z | [
"python",
"django",
"terminal",
"install"
] | This what my terminal is saying when trying to install Django.
```
MacBook-XXXX:~ Stephane$ sudo pip install Django
sudo: pip: command not found
```
I have tested in idle shell if pip is installed:
```
>>> import easy_install
>>> import pip
>>>
```
What am I doing wrong? | you need to install pip
```
sudo easy_install pip
``` |
Filling WTForms FormField FieldList with data results in HTML in fields | 29,514,798 | 5 | 2015-04-08T12:35:27Z | 29,517,799 | 8 | 2015-04-08T14:39:08Z | [
"python",
"flask",
"wtforms",
"flask-wtforms",
"fieldlist"
] | I have a Flask app in which I can populate form data by uploading a CSV file which is then read. I want to populate a FieldList with the data read from the CSV. However, when I try to populate the data, it enters raw HTML into the TextFields instead of just the value that I want. What am I doing wrong?
**app.py**
```
from flask import Flask, render_template, request, url_for
from flask.ext.wtf import Form
from wtforms import StringField, FieldList, FormField, SelectField
from wtforms.validators import DataRequired
from werkzeug.datastructures import MultiDict
app = Flask(__name__)
app.config['SECRET_KEY']='asdfjlkghdsf'
# normally student data is read in from a file uploaded, but for this demo we use dummy data
student_info=[("123","Bob Jones"),("234","Peter Johnson"),("345","Carly Everett"),
("456","Josephine Edgewood"),("567","Pat White"),("678","Jesse Black")]
class FileUploadForm(Form):
pass
class StudentForm(Form):
student_id = StringField('Student ID', validators = [DataRequired()])
student_name = StringField('Student Name', validators = [DataRequired()])
class AddClassForm(Form):
name = StringField('classname', validators=[DataRequired()])
day = SelectField('classday',
choices=[(1,"Monday"),(2,"Tuesday"),(3,"Wednesday"),(4,"Thursday"),(5,"Friday")],
coerce=int)
students = FieldList(FormField(StudentForm), min_entries = 5) # show at least 5 blank fields by default
@app.route('/', methods=['GET', 'POST'])
def addclass():
fileform = FileUploadForm()
classform = AddClassForm()
# Check which 'submit' button was called to validate the correct form
if 'addclass' in request.form and classform.validate_on_submit():
# Add class to DB - not relevant for this example.
return redirect(url_for('addclass'))
if 'upload' in request.form and fileform.validate_on_submit():
# get the data file from the post - not relevant for this example.
# overwrite the classform by populating it with values read from file
classform = PopulateFormFromFile()
return render_template('addclass.html', classform=classform)
return render_template('addclass.html', fileform=fileform, classform=classform)
def PopulateFormFromFile():
classform = AddClassForm()
# normally we would read the file passed in as an argument and pull data out,
# but let's just use the dummy data from the top of this file, and some hardcoded values
classform.name.data = "Super Awesome Class"
classform.day.data = 4 # Thursday
# pop off any blank fields already in student info
while len(classform.students) > 0:
classform.students.pop_entry()
for student_id, name in student_info:
# either of these ways have the same end result.
#
# studentform = StudentForm()
# studentform.student_id.data = student_id
# studentform.student_name.data = name
#
# OR
student_data = MultiDict([('student_id',student_id), ('student_name',name)])
studentform = StudentForm(student_data)
classform.students.append_entry(studentform)
return classform
if __name__ == '__main__':
app.run(debug=True, port=5001)
```
**templates/addclass.html**
```
<html>
<head>
<title>Flask FieldList Demo</title>
</head>
<body>
<h1>Add Class</h1>
{% if fileform %}
<div>
<p>Add class from file:</p>
<form action="" method="post" enctype="multipart/form-data" name="fileform">
{{ fileform.hidden_tag() }}
<p><input type="submit" name="upload" value="Upload"></p>
</form>
</div>
<hr>
{% endif %}
<div>
<form action="" method="post" name="classform">
{{ classform.hidden_tag() }}
Class Name: {{ classform.name }}<br>
Day: {{ classform.day }}<br>
<br>
<div>
<table>
<tr>
<th> Student Number </th>
<th> Name </th>
</tr>
{% for student in classform.students %}
<tr>
<td>{{ student.student_id }}</td>
<td>{{ student.student_name }}</td>
</tr>
{% endfor %}
</table>
</div>
<p><input type="submit" name="addclass" value="Add Class"></p>
</form>
</div>
</body>
</html>
```
The offending line is `classform.students.append_entry(studentform)`.
I can post the output HTML if required. What I expect is something like this:

What I get instead is:
 | OK, I spent hours on this and in the end it was such a trivial code change.
Most fields let you change their value by modifying the `data` attribute (as I was doing above). In fact, in my code, I had this comment as above:
```
### either of these ways have the same end result.
#
# studentform = StudentForm()
# studentform.student_id.data = student_id
# studentform.student_name.data = name
#
### OR
#
# student_data = MultiDict([('student_id',student_id), ('student_name',name)])
# studentform = StudentForm(student_data)
```
However, in the case of a FieldList of FormFields, we should not edit the `data` attribute, but rather the field itself. The following code works as expected:
```
for student_id, name in student_info:
studentform = StudentForm()
studentform.student_id = student_id # not student_id.data
studentform.student_name = name
classform.students.append_entry(studentform)
```
I hope this helps someone experiencing the same problem. |
Add column to dataframe with default value | 29,517,072 | 12 | 2015-04-08T14:09:22Z | 29,517,089 | 12 | 2015-04-08T14:09:52Z | [
"python",
"pandas"
] | I have an existing dataframe which I need to add an additional column to which will contain the same value for every row.
Existing df:
```
Date, Open, High, Low, Close
01-01-2015, 565, 600, 400, 450
```
New df:
```
Name, Date, Open, High, Low, Close
abc, 01-01-2015, 565, 600, 400, 450
```
I know how to append an existing series / dataframe column. But this is a different situation, because all I need is to add the 'Name' column and set every row to the same value, in this case 'abc'.
Im not entirely sure how to do that. | `df['Name']='abc'` will add the new column and set all rows to that value:
```
In [79]:
df
Out[79]:
Date, Open, High, Low, Close
0 01-01-2015, 565, 600, 400, 450
In [80]:
df['Name'] = 'abc'
df
Out[80]:
Date, Open, High, Low, Close Name
0 01-01-2015, 565, 600, 400, 450 abc
``` |
error while following Tumblelog Application with Flask and MongoEngine | 29,517,930 | 8 | 2015-04-08T14:44:49Z | 29,518,117 | 11 | 2015-04-08T14:51:23Z | [
"python",
"mongodb",
"flask",
"flask-mongoengine"
] | I am following tumbleblog application [here](http://docs.mongodb.org/ecosystem/tutorial/write-a-tumblelog-application-with-flask-mongoengine/)
my `__init__.py`:
```
from flask import Flask
from flask.ext.mongoengine import MongoEngine
app = Flask(__name__)
app.config["MONGODB_SETTINGS"] = {'DB': "sencha_web_service", 'username': "<username>", "password": "<password>"}
app.config["SECRET_KEY"] = "KeepThisS3cr3t"
db = MongoEngine(app)
if __name__ == '__main__':
app.run()
```
I get the error:
```
mongoengine.connection.ConnectionError: Cannot connect to database default :
False is not a read preference.
```
---
I tried passing in `"alias"="default"` in `app.config["MONGODB_SETTINGS"]` but still getting the same error. | In your MONGODB\_SETTINGS dictionary, the key for the database name should be 'db', not 'DB' (i.e. all lowercase).
The error you're getting is because the MongoEngine extension cannot find the 'db' entry in your configuration, and so uses 'default' as the database name.
**Edit**
Upon further inspection, it seems this is a bug somewhere in (Flask-)MongoEngine (or possible pymongo) where the default value of read\_preference in mongoengine.connect is False instead of an actual read preference, and is not transformed to the actual default in pymongo
If you add
```
from pymongo import read_preferences
```
to your imports and
```
'read_preference': read_preferences.ReadPreference.PRIMARY
```
to your config dictionary, it should work (that's the default read\_preference in pymongo) |
numpy.asarray: how to check up that its result dtype is numeric? | 29,518,923 | 3 | 2015-04-08T15:27:28Z | 29,519,728 | 7 | 2015-04-08T16:04:03Z | [
"python",
"numpy"
] | I have to create a `numpy.ndarray` from array-like data with int, float or complex numbers.
I hope to do it with `numpy.asarray` function.
I don't want to give it a strict `dtype` argument, because I want to convert complex values to `complex64` or `complex128`, floats to `float32` or `float64`, etc.
But if I simply run `numpy.asarray(some_unknown_data)` and look at the dtype of its result, how can I tell that the data is numeric, not object or string or something else? | You could check if the dtype of the array is a sub-dtype of `np.number`. For example:
```
>>> np.issubdtype(np.complex128, np.number)
True
>>> np.issubdtype(np.int32, np.number)
True
>>> np.issubdtype(np.str_, np.number)
False
>>> np.issubdtype('O', np.number) # 'O' is object
False
```
Essentially, this just checks whether the dtype is below 'number' in the [NumPy dtype hierarchy](http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html):
 |
SQLAlchemy set default nullable=False | 29,522,557 | 4 | 2015-04-08T18:28:17Z | 29,522,697 | 11 | 2015-04-08T18:34:48Z | [
"python",
"sqlalchemy",
"flask-sqlalchemy"
] | I'm using SQLAlchemy for Flask to create some models. The problem is, nearly all my columns need `nullable=False`, so I'm looking for a way to set this option as default when creating a column. Surely I could add them manually (as a Vim exercise), but I just don't feel like it today. For a reference, this is how my setup (`models.py`) looks like:
```
from flask.ext.sqlalchemy import SQLAlchemy
db = SQLAlchemy()
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), nullable=False)
```
and many more. Is there a simple way of doing this?
Thanks in advance. | just create a wrapper that sets it
```
def NullColumn(*args,**kwargs):
    kwargs["nullable"] = kwargs.get("nullable", False)
return db.Column(*args,**kwargs)
...
username = NullColumn(db.String(80))
```
using `functools.partial` as recommended in the comments
```
from functools import partial
NullColumn = partial(db.Column, nullable=False)
``` |
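To see how the `partial` pattern behaves, here is a dependency-free sketch (the dummy `Column` function and the `NotNullColumn` name are hypothetical stand-ins for `db.Column` and your wrapper):

```python
from functools import partial

def Column(type_, nullable=True, **kwargs):
    # Stand-in for db.Column: just report what it was called with.
    return {'type': type_, 'nullable': nullable, **kwargs}

NotNullColumn = partial(Column, nullable=False)

print(NotNullColumn('String(80)'))
# {'type': 'String(80)', 'nullable': False}
print(NotNullColumn('Integer', nullable=True))
# {'type': 'Integer', 'nullable': True} -- an explicit keyword still overrides
```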
SQLAlchemy ORM conversion to pandas DataFrame | 29,525,808 | 24 | 2015-04-08T21:36:34Z | 29,528,804 | 43 | 2015-04-09T02:40:13Z | [
"python",
"pandas",
"sqlalchemy",
"flask-sqlalchemy"
] | This topic hasn't been addressed in a while, here or elsewhere. Is there a solution converting a SQLAlchemy `<Query object>` to a pandas DataFrame?
Pandas has the capability to use `pandas.read_sql` but this requires use of raw SQL. I have two reasons for wanting to avoid it: 1) I already have everything using the ORM (a good reason in and of itself) and 2) I'm using python lists as part of the query (eg: `.db.session.query(Item).filter(Item.symbol.in_(add_symbols)` where `Item` is my model class and `add_symbols` is a list). This is the equivalent of SQL `SELECT ... from ... WHERE ... IN`.
Is anything possible? | Below should work in most cases:
```
df = pd.read_sql(query.statement, query.session.bind)
``` |
numpy.isnan(value) not the same as value == numpy.nan? | 29,528,092 | 4 | 2015-04-09T01:17:10Z | 29,528,160 | 7 | 2015-04-09T01:24:43Z | [
"python",
"numpy",
"types",
"boolean",
null
] | Why am I getting the following:
```
>>> v
nan
>>> type(v)
<type 'numpy.float64'>
>>> v == np.nan
False
>>> np.isnan(v)
True
```
I would have thought the two should be equivalent? | `nan != nan`. That's just how equality comparisons on `nan` are defined. It was decided that this result is more convenient for numerical algorithms than the alternative. This is specifically why `isnan` exists. |
Python pandas: check if any value is NaN in DataFrame | 29,530,232 | 50 | 2015-04-09T05:09:39Z | 29,530,559 | 26 | 2015-04-09T05:37:26Z | [
"python",
"pandas",
null
] | In python pandas, what's the best way to check whether a DataFrame has one (or more) NaN values?
I know about the function `pd.isnan`, but this returns a DataFrame of booleans for each element. [This post](http://stackoverflow.com/questions/27754891/python-nan-value-in-pandas) right here doesn't exactly answer my question either. | You have a couple options.
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10,6))
# Make a few areas have NaN values
df.iloc[1:3,1] = np.nan
df.iloc[5,3] = np.nan
df.iloc[7:9,5] = np.nan
```
Now the data frame looks something like this:
```
0 1 2 3 4 5
0 0.520113 0.884000 1.260966 -0.236597 0.312972 -0.196281
1 -0.837552 NaN 0.143017 0.862355 0.346550 0.842952
2 -0.452595 NaN -0.420790 0.456215 1.203459 0.527425
3 0.317503 -0.917042 1.780938 -1.584102 0.432745 0.389797
4 -0.722852 1.704820 -0.113821 -1.466458 0.083002 0.011722
5 -0.622851 -0.251935 -1.498837 NaN 1.098323 0.273814
6 0.329585 0.075312 -0.690209 -3.807924 0.489317 -0.841368
7 -1.123433 -1.187496 1.868894 -2.046456 -0.949718 NaN
8 1.133880 -0.110447 0.050385 -1.158387 0.188222 NaN
9 -0.513741 1.196259 0.704537 0.982395 -0.585040 -1.693810
```
* **Option 1**: `df.isnull().any().any()` - This returns a boolean value
You know of the `isnull()` which would return a dataframe like this:
```
0 1 2 3 4 5
0 False False False False False False
1 False True False False False False
2 False True False False False False
3 False False False False False False
4 False False False False False False
5 False False False True False False
6 False False False False False False
7 False False False False False True
8 False False False False False True
9 False False False False False False
```
If you make it `df.isnull().any()`, you can find just the columns that have `NaN` values:
```
0 False
1 True
2 False
3 True
4 False
5 True
dtype: bool
```
One more `.any()` will tell you if any of the above are `True`
```
> df.isnull().any().any()
True
```
* **Option 2**: `df.isnull().sum().sum()` - This returns an integer of the total number of `NaN` values:
This operates the same way as the `.any().any()` does, by first giving a summation of the number of `NaN` values in a column, then the summation of those values:
```
df.isnull().sum()
0 0
1 2
2 0
3 1
4 0
5 2
dtype: int64
```
Then to get the total:
```
df.isnull().sum().sum()
5
``` |
Python pandas: check if any value is NaN in DataFrame | 29,530,232 | 50 | 2015-04-09T05:09:39Z | 29,530,601 | 56 | 2015-04-09T05:39:54Z | [
"python",
"pandas",
null
] | In python pandas, what's the best way to check whether a DataFrame has one (or more) NaN values?
I know about the function `pd.isnan`, but this returns a DataFrame of booleans for each element. [This post](http://stackoverflow.com/questions/27754891/python-nan-value-in-pandas) right here doesn't exactly answer my question either. | [jwilner](http://stackoverflow.com/users/1567452/jwilner)'s response is spot on. I was exploring to see if there's a faster option, since in my experience, summing flat arrays is (strangely) faster than counting. This code seems faster:
```
df.isnull().values.any()
```
For example:
```
In [2]: df = pd.DataFrame(np.random.randn(1000,1000))
In [3]: df[df > 0.9] = pd.np.nan
In [4]: %timeit df.isnull().any().any()
100 loops, best of 3: 14.7 ms per loop
In [5]: %timeit df.isnull().values.sum()
100 loops, best of 3: 2.15 ms per loop
In [6]: %timeit df.isnull().sum().sum()
100 loops, best of 3: 18 ms per loop
In [7]: %timeit df.isnull().values.any()
1000 loops, best of 3: 948 µs per loop
```
`df.isnull().sum().sum()` is a bit slower, but of course, has additional information -- the number of `NaNs`. |
iPython/Jupyter Notebook and Pandas, how to plot multiple graphs in a for loop? | 29,532,894 | 5 | 2015-04-09T07:55:00Z | 29,533,687 | 7 | 2015-04-09T08:38:12Z | [
"python",
"pandas",
"matplotlib",
"plot",
"ipython-notebook"
] | Consider the following code running in iPython Notebook:
```
from pandas import *
%matplotlib inline
ys = [[0,1,2,3,4],[4,3,2,1,0]]
x_ax = [0,1,2,3,4]
for y_ax in ys:
ts = Series(y_ax,index=x_ax)
ts.plot(kind='bar', figsize=(15,5))
```
I would expect to have 2 separate plots as output, instead I got the two series merged in one single plot.
Why is that? How can I get two separate plots keeping the for loop? | In the IPython notebook the best way to do this is often with subplots. You create multiple axes on the same figure and then render the figure in the notebook. For example:
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
ys = [[0,1,2,3,4],[4,3,2,1,0]]
x_ax = [0,1,2,3,4]
fig, axs = plt.subplots(ncols=2, figsize=(10, 4))
for i, y_ax in enumerate(ys):
pd.Series(y_ax, index=x_ax).plot(kind='bar', ax=axs[i])
axs[i].set_title('Plot number {}'.format(i+1))
```
generates the following charts
 |
django admin error - Unknown column 'django_content_type.name' in 'field list' | 29,538,388 | 7 | 2015-04-09T12:23:07Z | 29,680,795 | 8 | 2015-04-16T16:29:45Z | [
"python",
"django"
] | My django project had a working admin page, but all of the sudden I started receiving:
`"Unknown column 'django_content_type.name' in 'field list'"` whenever I try to access the admin page. I can still access some portions of the admin, just not the main page.
I'm pretty new to django and python, so I have no idea where to look.
Here's the full error:
```
InternalError at /admin/
(1054, u"Unknown column 'django_content_type.name' in 'field list'")
Request Method: GET
Request URL: http://127.0.0.1:8000/admin/
Django Version: 1.7.7
Exception Type: InternalError
Exception Value:
(1054, u"Unknown column 'django_content_type.name' in 'field list'")
Exception Location: c:\Python27\lib\site-packages\pymysql\err.py in _check_mysql_exception, line 115
Python Executable: c:\Python27\python.exe
Python Version: 2.7.9
Python Path:
['c:\\users\\dhysong\\Documents\\School\\CS6310\\Project4\\CS6310',
'C:\\Windows\\system32\\python27.zip',
'c:\\Python27\\DLLs',
'c:\\Python27\\lib',
'c:\\Python27\\lib\\plat-win',
'c:\\Python27\\lib\\lib-tk',
'c:\\Python27',
'c:\\Python27\\lib\\site-packages']
Server time: Thu, 9 Apr 2015 08:17:05 -0400
```
html error occurs on line 63:
```
In template c:\Python27\lib\site-packages\django\contrib\admin\templates\admin\index.html, error at line 63
1054
53 <div id="content-related">
54 <div class="module" id="recent-actions-module">
55 <h2>{% trans 'Recent Actions' %}</h2>
56 <h3>{% trans 'My Actions' %}</h3>
57 {% load log %}
58 {% get_admin_log 10 as admin_log for_user user %}
59 {% if not admin_log %}
60 <p>{% trans 'None available' %}</p>
61 {% else %}
62 <ul class="actionlist">
63 {% for entry in admin_log %}
64 <li class="{% if entry.is_addition %}addlink{% endif %}{% if entry.is_change %}changelink{% endif %}{% if entry.is_deletion %}deletelink{% endif %}">
65 {% if entry.is_deletion or not entry.get_admin_url %}
66 {{ entry.object_repr }}
67 {% else %}
68 <a href="{{ entry.get_admin_url }}">{{ entry.object_repr }}</a>
69 {% endif %}
70 <br/>
71 {% if entry.content_type %}
72 <span class="mini quiet">{% filter capfirst %}{% trans entry.content_type.name %}{% endfilter %}</span>
73 {% else %}
I had this same issue just now and it was related to different versions of Django. I updated all of the machines working on my project to Django 1.8 using `pip install -U Django` and everything worked fine after that. |
Python 2 - How would you round up/down to the nearest 6 minutes? | 29,545,758 | 15 | 2015-04-09T18:02:27Z | 29,545,984 | 20 | 2015-04-09T18:16:06Z | [
"python",
"python-2.7",
"datetime"
] | There are numerous examples of people rounding to the nearest ten minutes but I can't figure out the logic behind rounding to the nearest six. I thought it would be a matter of switching a few numbers around but I can't get it to work.
The code I'm working with is located at [my Github](https://github.com/minorsecond/Timeclock/blob/dict/tc.py). The block I've got that isn't even close to working (won't give any output) is:
```
def companyTimer():
if minutes % 6 > .5:
companyMinutes = minutes + 1
elif minutes % 6 < 5:
companyMinutes = minutes - 1
else:
companyMinutes = minutes
print companyMinutes
```
Looking at it now, I see that my logic is incorrect - even if it were working, the add and subtract 1 minute portion of the code doesn't make sense.
Anyway, I have no idea how to remedy this - could someone point me in the right direction, please?
PS - this is something I'm making for personal use at work.. not asking for help with my job but this will help me keep track of my hours at work. Don't want there to be any issues with that.
Thanks! | Here's a general function to round to nearest `x`:
```
def round_to_nearest(num, base):
n = num + (base//2)
return n - (n % base)
[round_to_nearest(i, 6) for i in range(20)]
# [0, 0, 0, 6, 6, 6, 6, 6, 6, 12, 12, 12, 12, 12, 12, 18, 18, 18, 18, 18]
```
Explanation:
* `n % base` is the remainder left over when dividing `n` by `base`. Also known as the [modulo operator.](https://docs.python.org/2/reference/expressions.html#binary-arithmetic-operations)
* Simply subtracting `num%6` from `num` would give you 0 for 0-5, 6 for 6-11, and so on.
* Since we want to "round" instead of "floor", we can bias this result by adding half of the base (`base//2`) beforehand. |
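For the timeclock use case in the question, the same helper can be applied to a `datetime`'s minute field. This is only a sketch; `round_time` is a name introduced here for illustration, and it rolls over into the next hour when the minutes round up to 60:

```python
from datetime import datetime, timedelta

def round_to_nearest(num, base):
    n = num + (base // 2)
    return n - (n % base)

def round_time(dt, base=6):
    # Snap the minute component to the nearest multiple of `base`.
    # Using timedelta (rather than dt.replace(minute=...)) handles the
    # case where the minutes round up to 60.
    rounded = round_to_nearest(dt.minute, base)
    return dt.replace(minute=0, second=0, microsecond=0) + timedelta(minutes=rounded)

print(round_time(datetime(2015, 4, 9, 18, 2)))   # 2015-04-09 18:00:00
print(round_time(datetime(2015, 4, 9, 18, 4)))   # 2015-04-09 18:06:00
print(round_time(datetime(2015, 4, 9, 18, 58)))  # 2015-04-09 19:00:00
```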
Trouble installing pygame using pip install | 29,548,982 | 7 | 2015-04-09T21:04:29Z | 29,579,587 | 14 | 2015-04-11T15:28:32Z | [
"python",
"pygame",
"shared-libraries",
"pip"
] | I tried: `c:/python34/scripts/pip install http://bitbucket.org/pygame/pygame`
and got this error:
```
Cannot unpack file C:\Users\Marius\AppData\Local\Temp\pip-b60d5tho-unpack\pygame
(downloaded from C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build, conte
nt-type: text/html; charset=utf-8); cannot detect archive format
Cannot determine archive format of C:\Users\Marius\AppData\Local\Temp\pip-rqmp
q4tz-build
```
If anyone has any solutions, please feel free to share them!
I also tried
`pip install --allow-unverified`, but that gave me an error as well. | This is the only method that works for me.
```
pip install pygame==1.9.1release --allow-external pygame --allow-unverified pygame
```
--
These are the steps that lead me to this command (I put them so people finds it easily):
```
$ pip install pygame
Collecting pygame
Could not find any downloads that satisfy the requirement pygame
Some externally hosted files were ignored as access to them may be unreliable (use --allow-external pygame to allow).
No distributions at all found for pygame
```
Then, as suggestes I allow external:
```
$ pip install pygame --allow-external pygame
Collecting pygame
Could not find any downloads that satisfy the requirement pygame
Some insecure and unverifiable files were ignored (use --allow-unverified pygame to allow).
No distributions at all found for pygame
```
So I also allow unverifiable:
```
$ pip install pygame --allow-external pygame --allow-unverified pygame
Collecting pygame
pygame is potentially insecure and unverifiable.
HTTP error 400 while getting http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml)
Could not install requirement pygame because of error 400 Client Error: Bad Request
Could not install requirement pygame because of HTTP error 400 Client Error: Bad Request for URL http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml)
```
So, after a visit to <http://www.pygame.org/download.shtml>, I thought about adding the version number (1.9.1release is the currently stable one).
--
Hope it helps. |
how to split column of tuples in pandas dataframe? | 29,550,414 | 7 | 2015-04-09T22:50:38Z | 29,550,458 | 12 | 2015-04-09T22:55:11Z | [
"python",
"numpy",
"pandas",
"dataframe",
"tuples"
] | I have a pandas dataframe (this is only a little piece)
```
>>> d1
y norm test y norm train len(y_train) len(y_test) \
0 64.904368 116.151232 1645 549
1 70.852681 112.639876 1645 549
SVR RBF \
0 (35.652207342877873, 22.95533537448393)
1 (39.563683797747622, 27.382483096332511)
LCV \
0 (19.365430594452338, 13.880062435173587)
1 (19.099614489458364, 14.018867136617146)
RIDGE CV \
0 (4.2907610988480362, 12.416745648065584)
1 (4.18864306788194, 12.980833914392477)
RF \
0 (9.9484841581029428, 16.46902345373697)
1 (10.139848213735391, 16.282141345406522)
GB \
0 (0.012816232716538605, 15.950164822266007)
1 (0.012814519804493328, 15.305745202851712)
ET DATA
0 (0.00034337162272515505, 16.284800366214057) j2m
1 (0.00024811554516431878, 15.556506191784194) j2m
>>>
```
I want to split all the columns that contain tuples. For example, I want to replace the column `LCV` with the columns `LCV-a` and `LCV-b`.
How can I do that?
EDIT:
The proposed solution does not work. Why?
```
>>> d1['LCV'].apply(pd.Series)
0
0 (19.365430594452338, 13.880062435173587)
1 (19.099614489458364, 14.018867136617146)
>>>
```
EDIT:
This seems to be working
```
>>> d1['LCV'].apply(eval).apply(pd.Series)
0 1
0 19.365431 13.880062
1 19.099614 14.018867
>>>
``` | You can do this by `apply(pd.Series)` on that column:
```
In [13]: df = pd.DataFrame({'a':[1,2], 'b':[(1,2), (3,4)]})
In [14]: df
Out[14]:
a b
0 1 (1, 2)
1 2 (3, 4)
In [16]: df['b'].apply(pd.Series)
Out[16]:
0 1
0 1 2
1 3 4
In [17]: df[['b1', 'b2']] = df['b'].apply(pd.Series)
In [18]: df
Out[18]:
a b b1 b2
0 1 (1, 2) 1 2
1 2 (3, 4) 3 4
```
This works because it turns each tuple into a Series, which is then seen as a row of a dataframe.
how to split column of tuples in pandas dataframe? | 29,550,414 | 7 | 2015-04-09T22:50:38Z | 33,763,855 | 7 | 2015-11-17T17:58:08Z | [
"python",
"numpy",
"pandas",
"dataframe",
"tuples"
] | I have a pandas dataframe (this is only a little piece)
```
>>> d1
y norm test y norm train len(y_train) len(y_test) \
0 64.904368 116.151232 1645 549
1 70.852681 112.639876 1645 549
SVR RBF \
0 (35.652207342877873, 22.95533537448393)
1 (39.563683797747622, 27.382483096332511)
LCV \
0 (19.365430594452338, 13.880062435173587)
1 (19.099614489458364, 14.018867136617146)
RIDGE CV \
0 (4.2907610988480362, 12.416745648065584)
1 (4.18864306788194, 12.980833914392477)
RF \
0 (9.9484841581029428, 16.46902345373697)
1 (10.139848213735391, 16.282141345406522)
GB \
0 (0.012816232716538605, 15.950164822266007)
1 (0.012814519804493328, 15.305745202851712)
ET DATA
0 (0.00034337162272515505, 16.284800366214057) j2m
1 (0.00024811554516431878, 15.556506191784194) j2m
>>>
```
I want to split all the columns that contain tuples. For example, I want to replace the column `LCV` with the columns `LCV-a` and `LCV-b`.
How can I do that?
EDIT:
The proposed solution does not work. Why?
```
>>> d1['LCV'].apply(pd.Series)
0
0 (19.365430594452338, 13.880062435173587)
1 (19.099614489458364, 14.018867136617146)
>>>
```
EDIT:
This seems to be working
```
>>> d1['LCV'].apply(eval).apply(pd.Series)
0 1
0 19.365431 13.880062
1 19.099614 14.018867
>>>
``` | On much larger datasets, I found that `.apply()` is a few orders of magnitude slower than `pd.DataFrame(df['b'].values.tolist())`.
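A minimal sketch of that faster approach, reusing the small frame from the accepted answer (the `b1`/`b2` column names are just illustrative):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [(1, 2), (3, 4)]})

# One DataFrame constructor call over a plain list of tuples, instead of
# building a throwaway Series per row the way apply(pd.Series) does.
split = pd.DataFrame(df['b'].values.tolist(), columns=['b1', 'b2'], index=df.index)
df = df.join(split)
```

Passing `index=df.index` keeps the new columns aligned with the original rows when joining.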
Algorithm to group sets of points together that follow a direction | 29,550,785 | 29 | 2015-04-09T23:26:46Z | 29,561,234 | 10 | 2015-04-10T12:17:33Z | [
"python",
"image",
"algorithm",
"matlab",
"image-processing"
] | **Note: I am placing this question in both the MATLAB and Python tags as I am the most proficient in these languages. However, I welcome solutions in any language.**
---
# Question Preamble
I have taken an image with a fisheye lens. This image consists of a pattern with a bunch of square objects. What I want to do with this image is detect the centroid of each of these squares, then use these points to perform an undistortion of the image - specifically, I am seeking the right distortion model parameters. It should be noted that not all of the squares need to be detected. As long as a good majority of them are, then that's totally fine.... but that isn't the point of this post. The parameter estimation algorithm I have already written, but the problem is that it requires points that appear collinear in the image.
The base question I want to ask is given these points, what is the best way to group them together so that each group consists of a horizontal line or vertical line?
# Background to my problem
This isn't really important with regards to the question I'm asking, but if you'd like to know where I got my data from and to further understand the question I'm asking, please read. If you're not interested, then you can skip right to the **Problem setup** section below.
---
An example of an image I am dealing with is shown below:

It is a 960 x 960 image. The image was originally higher resolution, but I subsampled it to speed up processing. As you can see, there are a bunch of square patterns dispersed in the image. Also, the centroids I have calculated are with respect to the above subsampled image.
The pipeline I have set up to retrieve the centroids is the following:
1. Perform a Canny Edge Detection
2. Focus on a region of interest that minimizes false positives. This region of interest is basically the squares without any of the black tape that covers one of their sides.
3. Find all distinct closed contours
4. For each distinct closed contour...
a. Perform a Harris Corner Detection
b. Determine if the result has 4 corner points
c. If this does, then this contour belonged to a square and find the centroid of this shape
d. If it doesn't, then skip this shape
5. Place all detected centroids from Step #4 into a matrix for further examination.
Here's an example result with the above image. Each detected square has the four points colour coded according to the location of where it is with respect to the square itself. For each centroid that I have detected, I write an ID right where that centroid is in the image itself.

With the above image, there are 37 detected squares.
# Problem Setup
Suppose I have some image pixel points stored in a `N x 3` matrix. The first two columns are the `x` (horizontal) and `y` (vertical) coordinates where in image coordinate space, the `y` coordinate is **inverted**, which means that positive `y` moves downwards. The third column is an ID associated with the point.
Here is some code written in MATLAB that takes these points, plots them onto a 2D grid and labels each point with the third column of the matrix. If you read the above background, these are the points that were detected by my algorithm outlined above.
```
data = [ 475. , 605.75, 1.;
571. , 586.5 , 2.;
233. , 558.5 , 3.;
669.5 , 562.75, 4.;
291.25, 546.25, 5.;
759. , 536.25, 6.;
362.5 , 531.5 , 7.;
448. , 513.5 , 8.;
834.5 , 510. , 9.;
897.25, 486. , 10.;
545.5 , 491.25, 11.;
214.5 , 481.25, 12.;
271.25, 463. , 13.;
646.5 , 466.75, 14.;
739. , 442.75, 15.;
340.5 , 441.5 , 16.;
817.75, 421.5 , 17.;
423.75, 417.75, 18.;
202.5 , 406. , 19.;
519.25, 392.25, 20.;
257.5 , 382. , 21.;
619.25, 368.5 , 22.;
148. , 359.75, 23.;
324.5 , 356. , 24.;
713. , 347.75, 25.;
195. , 335. , 26.;
793.5 , 332.5 , 27.;
403.75, 328. , 28.;
249.25, 308. , 29.;
495.5 , 300.75, 30.;
314. , 279. , 31.;
764.25, 249.5 , 32.;
389.5 , 249.5 , 33.;
475. , 221.5 , 34.;
565.75, 199. , 35.;
802.75, 173.75, 36.;
733. , 176.25, 37.];
figure; hold on;
axis ij;
scatter(data(:,1), data(:,2),40, 'r.');
text(data(:,1)+10, data(:,2)+10, num2str(data(:,3)));
```
Similarly in Python, using `numpy` and `matplotlib`, we have:
```
import numpy as np
import matplotlib.pyplot as plt
data = np.array([[ 475. , 605.75, 1. ],
[ 571. , 586.5 , 2. ],
[ 233. , 558.5 , 3. ],
[ 669.5 , 562.75, 4. ],
[ 291.25, 546.25, 5. ],
[ 759. , 536.25, 6. ],
[ 362.5 , 531.5 , 7. ],
[ 448. , 513.5 , 8. ],
[ 834.5 , 510. , 9. ],
[ 897.25, 486. , 10. ],
[ 545.5 , 491.25, 11. ],
[ 214.5 , 481.25, 12. ],
[ 271.25, 463. , 13. ],
[ 646.5 , 466.75, 14. ],
[ 739. , 442.75, 15. ],
[ 340.5 , 441.5 , 16. ],
[ 817.75, 421.5 , 17. ],
[ 423.75, 417.75, 18. ],
[ 202.5 , 406. , 19. ],
[ 519.25, 392.25, 20. ],
[ 257.5 , 382. , 21. ],
[ 619.25, 368.5 , 22. ],
[ 148. , 359.75, 23. ],
[ 324.5 , 356. , 24. ],
[ 713. , 347.75, 25. ],
[ 195. , 335. , 26. ],
[ 793.5 , 332.5 , 27. ],
[ 403.75, 328. , 28. ],
[ 249.25, 308. , 29. ],
[ 495.5 , 300.75, 30. ],
[ 314. , 279. , 31. ],
[ 764.25, 249.5 , 32. ],
[ 389.5 , 249.5 , 33. ],
[ 475. , 221.5 , 34. ],
[ 565.75, 199. , 35. ],
[ 802.75, 173.75, 36. ],
[ 733. , 176.25, 37. ]])
plt.figure()
plt.gca().invert_yaxis()
plt.plot(data[:,0], data[:,1], 'r.', markersize=14)
for idx in np.arange(data.shape[0]):
plt.text(data[idx,0]+10, data[idx,1]+10, str(int(data[idx,2])), size='large')
plt.show()
```
We get:

---
# Back to the question
As you can see, these points are more or less in a grid pattern and you can see that we can form lines between the points. Specifically, you can see that there are lines that can be formed horizontally and vertically.
For example, if you reference the image in the background section of my problem, we can see that there are 5 groups of points that can be grouped in a horizontal manner. For example, points 23, 26, 29, 31, 33, 34, 35, 37 and 36 form one group. Points 19, 21, 24, 28, 30 and 32 form another group and so on and so forth. Similarly in a vertical sense, we can see that points 26, 19, 12 and 3 form one group, points 29, 21, 13 and 5 form another group and so on.
---
# Question to ask
My question is this: *What is a method that can successfully group points in horizontal groupings and vertical groupings separately, given that the points could be in any orientation?*
## Conditions
1. There **must be at least three** points per line. If there is anything less than that, then this does not qualify as a segment. Therefore, the points 36 and 10 don't qualify as a vertical line, and similarly the isolated point 23 shouldn't qualify as a vertical line, but it is part of the first horizontal grouping.
2. The above calibration pattern can be in any orientation. However, for what I'm dealing with, the worst kind of orientation you can get is what you see above in the background section.
---
# Expected Output
The output would be a pair of lists where the first list has elements where each element gives you a sequence of point IDs that form a horizontal line. Similarly, the second list has elements where each element gives you a sequence of point IDs that form a vertical line.
Therefore, the expected output for the horizontal sequences would look something like this:
## MATLAB
```
horiz_list = {[23, 26, 29, 31, 33, 34, 35, 37, 36], [19, 21, 24, 28, 30, 32], ...};
vert_list = {[26, 19, 12, 3], [29, 21, 13, 5], ....};
```
## Python
```
horiz_list = [[23, 26, 29, 31, 33, 34, 35, 37, 36], [19, 21, 24, 28, 30, 32], ....]
vert_list = [[26, 19, 12, 3], [29, 21, 13, 5], ...]
```
# What I have tried
Algorithmically, what I have tried is to undo the rotation that is experienced at these points. I've performed [Principal Components Analysis](http://en.wikipedia.org/wiki/Principal_component_analysis) and I tried projecting the points with respect to the computed orthogonal basis vectors so that the points would more or less be on a straight rectangular grid.
Once I have that, it's just a simple matter of doing some scanline processing where you could group points based on a differential change on either the horizontal or vertical coordinates. You'd sort the coordinates by either the `x` or `y` values, then examine these sorted coordinates and look for a large change. Once you encounter this change, then you can group points in between the changes together to form your lines. Doing this with respect to each dimension would give you either the horizontal or vertical groupings.
With regards to PCA, here's what I did in MATLAB and Python:
## MATLAB
```
%# Step #1 - Get just the data - no IDs
data_raw = data(:,1:2);
%# Decentralize mean
data_nomean = bsxfun(@minus, data_raw, mean(data_raw,1));
%# Step #2 - Determine covariance matrix
%# This already decentralizes the mean
cov_data = cov(data_raw);
%# Step #3 - Determine right singular vectors
[~,~,V] = svd(cov_data);
%# Step #4 - Transform data with respect to basis
F = V.'*data_nomean.';
%# Visualize both the original data points and transformed data
figure;
plot(F(1,:), F(2,:), 'b.', 'MarkerSize', 14);
axis ij;
hold on;
plot(data(:,1), data(:,2), 'r.', 'MarkerSize', 14);
```
## Python
```
import numpy as np
import numpy.linalg as la
# Step #1 and Step #2 - Decentralize mean
centroids_raw = data[:,:2]
mean_data = np.mean(centroids_raw, axis=0)
# Transpose for covariance calculation
data_nomean = (centroids_raw - mean_data).T
# Step #3 - Determine covariance matrix
# Doesn't matter if you do this on the decentralized result
# or the normal result - cov subtracts the mean off anyway
cov_data = np.cov(data_nomean)
# Step #4 - Determine right singular vectors via SVD
# Note - This is already V^T, so there's no need to transpose
_,_,V = la.svd(cov_data)
# Step #5 - Transform data with respect to basis
data_transform = np.dot(V, data_nomean).T
plt.figure()
plt.gca().invert_yaxis()
plt.plot(data[:,0], data[:,1], 'b.', markersize=14)
plt.plot(data_transform[:,0], data_transform[:,1], 'r.', markersize=14)
plt.show()
```
---
The above code not only reprojects the data, but it also plots both the original points and the projected points together in a single figure. However, when I tried reprojecting my data, this is the plot I get:

The points in red are the original image coordinates while the points in blue are reprojected onto the basis vectors to try and remove the rotation. It still doesn't quite do the job. There is still some orientation with respect to the points so if I tried to do my scanline algorithm, points from the lines below for horizontal tracing or to the side for vertical tracing would be inadvertently grouped and this isn't correct.
---
Perhaps I'm overthinking the problem, but any insights you have regarding this would be greatly appreciated. If the answer is indeed superb, I would be inclined to award a high bounty as I've been stuck on this problem for quite some time.
I hope this question wasn't long winded. If you don't have an idea of how to solve this, then I thank you for your time in reading my question regardless.
Looking forward to any insights that you may have. Thanks very much! | Note 1: It has a number of settings -> which for other images may need to altered to get the result you want *see % Settings - play around with these values*
Note 2: It doesn't find all of the lines you want -> but its a starting point....
To call this function, invoke this in the command prompt:
```
>> [h, v] = testLines;
```
We get:
```
>> celldisp(h)
h{1} =
1 2 4 6 9 10
h{2} =
3 5 7 8 11 14 15 17
h{3} =
1 2 4 6 9 10
h{4} =
3 5 7 8 11 14 15 17
h{5} =
1 2 4 6 9 10
h{6} =
3 5 7 8 11 14 15 17
h{7} =
3 5 7 8 11 14 15 17
h{8} =
1 2 4 6 9 10
h{9} =
1 2 4 6 9 10
h{10} =
12 13 16 18 20 22 25 27
h{11} =
13 16 18 20 22 25 27
h{12} =
3 5 7 8 11 14 15 17
h{13} =
3 5 7 8 11 14 15
h{14} =
12 13 16 18 20 22 25 27
h{15} =
3 5 7 8 11 14 15 17
h{16} =
12 13 16 18 20 22 25 27
h{17} =
19 21 24 28 30
h{18} =
21 24 28 30
h{19} =
12 13 16 18 20 22 25 27
h{20} =
19 21 24 28 30
h{21} =
12 13 16 18 20 22 24 25
h{22} =
12 13 16 18 20 22 24 25 27
h{23} =
23 26 29 31 33 34 35
h{24} =
23 26 29 31 33 34 35 37
h{25} =
23 26 29 31 33 34 35 36 37
h{26} =
33 34 35 37 36
h{27} =
31 33 34 35 37
>> celldisp(v)
v{1} =
33 28 18 8 1
v{2} =
34 30 20 11 2
v{3} =
26 19 12 3
v{4} =
35 22 14 4
v{5} =
29 21 13 5
v{6} =
25 15 6
v{7} =
31 24 16 7
v{8} =
37 32 27 17 9
```
A figure is also generated that draws the lines through each proper set of points:

```
function [horiz_list, vert_list] = testLines
global counter;
global colours;
close all;
data = [ 475. , 605.75, 1.;
571. , 586.5 , 2.;
233. , 558.5 , 3.;
669.5 , 562.75, 4.;
291.25, 546.25, 5.;
759. , 536.25, 6.;
362.5 , 531.5 , 7.;
448. , 513.5 , 8.;
834.5 , 510. , 9.;
897.25, 486. , 10.;
545.5 , 491.25, 11.;
214.5 , 481.25, 12.;
271.25, 463. , 13.;
646.5 , 466.75, 14.;
739. , 442.75, 15.;
340.5 , 441.5 , 16.;
817.75, 421.5 , 17.;
423.75, 417.75, 18.;
202.5 , 406. , 19.;
519.25, 392.25, 20.;
257.5 , 382. , 21.;
619.25, 368.5 , 22.;
148. , 359.75, 23.;
324.5 , 356. , 24.;
713. , 347.75, 25.;
195. , 335. , 26.;
793.5 , 332.5 , 27.;
403.75, 328. , 28.;
249.25, 308. , 29.;
495.5 , 300.75, 30.;
314. , 279. , 31.;
764.25, 249.5 , 32.;
389.5 , 249.5 , 33.;
475. , 221.5 , 34.;
565.75, 199. , 35.;
802.75, 173.75, 36.;
733. , 176.25, 37.];
figure; hold on;
axis ij;
% Change due to Benoit_11
scatter(data(:,1), data(:,2),40, 'r.');
text(data(:,1)+10, data(:,2)+10, num2str(data(:,3)));
% Process your data as above then run the function below(note it has sub functions)
counter = 0;
colours = 'bgrcmy';
[horiz_list, vert_list] = findClosestPoints ( data(:,1), data(:,2) );
function [horiz_list, vert_list] = findClosestPoints ( x, y )
% calc length of points
nX = length(x);
% set up place holder flags
modelledH = false(nX,1);
modelledV = false(nX,1);
horiz_list = {};
vert_list = {};
% loop for all points
for p=1:nX
% have we already modelled a horizontal line through these?
% second last param - true - horizontal, false - vertical
if modelledH(p)==false
[modelledH, index] = ModelPoints ( p, x, y, modelledH, true, true );
horiz_list = [horiz_list index];
else
[~, index] = ModelPoints ( p, x, y, modelledH, true, false );
horiz_list = [horiz_list index];
end
% make a temp copy of the x and y and remove any of the points modelled
% from the horizontal -> this is to avoid them being found in the
% second call.
tempX = x;
tempY = y;
tempX(index) = NaN;
tempY(index) = NaN;
tempX(p) = x(p);
tempY(p) = y(p);
% Have we found a vertical line?
if modelledV(p)==false
[modelledV, index] = ModelPoints ( p, tempX, tempY, modelledV, false, true );
vert_list = [vert_list index];
end
end
end
function [modelled, index] = ModelPoints ( p, x, y, modelled, method, fullRun )
% p - row in your original data matrix
% x - data(:,1)
% y - data(:,2)
% modelled - array of flags to whether rows have been modelled
% method - horizontal or vertical (used to calc gradients)
% fullRun - full calc or just to get indexes
% this could be made better by storing the indexes of each horizontal in the method above
% Settings - play around with these values
gradDelta = 0.2; % find points where gradient is less than this value
gradLimit = 0.45; % if mean gradient of line is above this ignore
numberOfPointsToCheck = 7; % number of points to check when look along the line
% to find other points (this reduces chance of it
% finding other points far away
% I optimised this for your example to be 7
% Try varying it and you will see how it affects the result.
% Find the index of points which are inline.
[index, grad] = CalcIndex ( x, y, p, gradDelta, method );
% check gradient of line
if abs(mean(grad))>gradLimit
index = [];
return
end
% add point of interest to index
index = [p index];
% loop through all points found above to find any other points which are in
% line with these points (this allows for slight curvature)
combineIndex = [];
for ii=2:length(index)
% Find index of the points found above (find points on curve)
[index2] = CalcIndex ( x, y, index(ii), gradDelta, method, numberOfPointsToCheck, grad(ii-1) );
% Check that the point on this line are on the original (i.e. inline -> not at large angle
if any(ismember(index,index2))
% store points found
combineIndex = unique([index2 combineIndex]);
end
end
% copy to index
index = combineIndex;
if fullRun
% do some plotting
% TODO: here you would need to calculate your arrays to output.
xx = x(index);
[sX,sOrder] = sort(xx);
% Check its found at least 3 points
if length ( index(sOrder) ) > 2
% flag the modelled on the points found
modelled(index(sOrder)) = true;
% plot the data
plot ( x(index(sOrder)), y(index(sOrder)), colours(mod(counter,numel(colours)) + 1));
counter = counter + 1;
end
index = index(sOrder);
end
end
function [index, gradCheck] = CalcIndex ( x, y, p, gradLimit, method, nPoints2Consider, refGrad )
% x - data(:,1)
% y - data(:,2)
% p - point of interest
% method (x/y) or (y\x)
% nPoints2Consider - only look at N points (options)
% refgrad - rather than looking for gradient of closest point -> use this
% - reference gradient to find similar points (finds points on curve)
nX = length(x);
% calculate gradient
for g=1:nX
if method
grad(g) = (x(g)-x(p))\(y(g)-y(p));
else
grad(g) = (y(g)-y(p))\(x(g)-x(p));
end
end
% find distance to all other points
delta = sqrt ( (x-x(p)).^2 + (y-y(p)).^2 );
% set itself = NaN
delta(delta==min(delta)) = NaN;
% find the closest points
[m,order] = sort(delta);
if nargin == 7
% for finding along curve
% set any far away points to be NaN
grad(order(nPoints2Consider+1:end)) = NaN;
% find the closest points to the reference gradient within the allowable limit
index = find(abs(grad-refGrad)<gradLimit==1);
% store output
gradCheck = grad(index);
else
% find the points which are closes to the gradient of the closest point
index = find(abs(grad-grad(order(1)))<gradLimit==1);
% store gradients to output
gradCheck = grad(index);
end
end
end
``` |
Why is "if not (a and b)" faster than "if not a or not b"? | 29,551,438 | 21 | 2015-04-10T00:39:18Z | 29,551,787 | 24 | 2015-04-10T01:21:36Z | [
"python",
"python-2.7",
"if-statement",
"logical-operators",
"micro-optimization"
] | On a whim, I recently tested these two methods with `timeit`, to see which evaluation method was faster:
```
import timeit
"""Test method returns True if either argument is falsey, else False."""
def and_chk((a, b)):
if not (a and b):
return True
return False
def not_or_chk((a, b)):
if not a or not b:
return True
return False
```
...and got these results:
```
VALUES FOR a,b -> 0,0 0,1 1,0 1,1
method
and_chk(a,b) 0.95559 0.98646 0.95138 0.98788
not_or_chk(a,b) 0.96804 1.07323 0.96015 1.05874
...seconds per 1,111,111 cycles.
```
The difference in efficiency is between one and nine percent, always in favour of `if not (a and b)`, which is the opposite of what I might expect since I understand that `if not a or not b` will evaluate its terms (`if not a` and then `if not b`) in order, running the `if` block once it encounters a true expression (and there are no `and` clauses). In contrast, the `and_chk` method needs to evaluate *both* clauses before it can return any result to the `if not..` that wraps it.
The timing results, however, disprove this understanding. How, then, is the `if` condition being evaluated? I am perfectly aware of the fact that this degree of microoptimization is practically, if not completely, pointless. I just want to understand how Python is going about it.
---
For completeness, this is how I set up `timeit`...
```
cyc = 1111111
bothFalse_and = iter([(0,0)] * cyc)
zeroTrue_and = iter([(1,0)] * cyc)
oneTrue_and = iter([(0,1)] * cyc)
bothTrue_and = iter([(1,1)] * cyc)
bothFalse_notor = iter([(0,0)] * cyc)
zeroTrue_notor = iter([(1,0)] * cyc)
oneTrue_notor = iter([(0,1)] * cyc)
bothTrue_notor = iter([(1,1)] * cyc)
time_bothFalse_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import bothFalse_and as tups, and_chk')
time_zeroTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import zeroTrue_and as tups, and_chk')
time_oneTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import oneTrue_and as tups, and_chk')
time_bothTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import bothTrue_and as tups, and_chk')
time_bothFalse_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import bothFalse_notor as tups, not_or_chk')
time_zeroTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import zeroTrue_notor as tups, not_or_chk')
time_oneTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import oneTrue_notor as tups, not_or_chk')
time_bothTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import bothTrue_notor as tups, not_or_chk')
```
...then ran each `timeit.Timer(..)` function with `.timeit(cyc)` to get the results posted. | ## TL;DR
The `not_or_chk` function requires two unary operations in *addition* to two jumps (in the worst case), while the `and_chk` function only has the two jumps (again, in the worst case).
## Details
The [dis module](https://docs.python.org/2/library/dis.html) to the rescue! The `dis` module lets you take a look at the Python bytecode disassembly of your code. For example:
```
import dis
"""Test method returns True if either argument is falsey, else False."""
def and_chk((a, b)):
if not (a and b):
return True
return False
def not_or_chk((a, b)):
if not a or not b:
return True
return False
print("And Check:\n")
print(dis.dis(and_chk))
print("Or Check:\n")
print(dis.dis(not_or_chk))
```
Produces this output:
```
And Check:
5 0 LOAD_FAST 0 (.0)
3 UNPACK_SEQUENCE 2
6 STORE_FAST 1 (a)
9 STORE_FAST 2 (b)
6 12 LOAD_FAST 1 (a) * This block is the *
15 JUMP_IF_FALSE_OR_POP 21 * disassembly of *
18 LOAD_FAST 2 (b) * the "and_chk" *
>> 21 POP_JUMP_IF_TRUE 28 * function *
7 24 LOAD_GLOBAL 0 (True)
27 RETURN_VALUE
8 >> 28 LOAD_GLOBAL 1 (False)
31 RETURN_VALUE
None
Or Check:
10 0 LOAD_FAST 0 (.0)
3 UNPACK_SEQUENCE 2
6 STORE_FAST 1 (a)
9 STORE_FAST 2 (b)
11 12 LOAD_FAST 1 (a) * This block is the *
15 UNARY_NOT * disassembly of *
16 POP_JUMP_IF_TRUE 26 * the "not_or_chk" *
19 LOAD_FAST 2 (b) * function *
22 UNARY_NOT
23 POP_JUMP_IF_FALSE 30
12 >> 26 LOAD_GLOBAL 0 (True)
29 RETURN_VALUE
13 >> 30 LOAD_GLOBAL 1 (False)
33 RETURN_VALUE
None
```
Take a look at the two blocks of Python bytecode that I've marked with the asterisks. Those blocks are your two disassembled functions. Note that `and_chk` only has two jumps, and the calculations in the function are made *while deciding whether or not to take the jump*.
On the other hand, the `not_or_chk` function requires the `not` operation to be carried out twice in the worst case, *in addition* to the interpreter deciding whether or not to take the jump.
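As a quick sanity check that the two forms really are logically equivalent (De Morgan's law), independent of speed. The functions below are trimmed to return the boolean directly and avoid the Python 2 tuple-parameter syntax:

```python
def and_chk(pair):
    a, b = pair
    return not (a and b)

def not_or_chk(pair):
    a, b = pair
    return not a or not b

# Exhaustive truth table: the two predicates agree on every input.
for a in (0, 1):
    for b in (0, 1):
        assert and_chk((a, b)) == not_or_chk((a, b))
```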
Python program running in PyCharm but not from the command line | 29,553,668 | 5 | 2015-04-10T04:54:12Z | 29,553,778 | 8 | 2015-04-10T05:03:26Z | [
"python",
"startup",
"fedora"
] | When I try to run my program from the PyCharm IDE everything works fine but if I type in Fedora:
```
python myScript.py
```
in a shell prompt, I get an import error from one of the modules.
```
ImportError: No module named myDependency
```
What does PyCharm do that allows the interpreter to find my dependencies when launched from the IDE? How can I get my script to find its dependencies so it can be launched with a single command? | There are a few possible things that can be causing this:
1. Is it the same Python interpreter? Check with `import sys; print(sys.executable)`
2. Is it the same working directory? Check with `import os; print(os.getcwd())`
3. Discrepancies in `sys.path`, the list Python searches sequentially for import locations, possibly caused by environment variables such as `PYTHONPATH`. Check with `import sys; print(sys.path)`.
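One way to compare the two environments is to run the same three checks from a single small script (the filename `diag.py` is just a suggestion) once from PyCharm and once from the shell, then diff the output:

```python
import os
import sys

# Run this once from PyCharm and once via `python diag.py` in the
# shell, then compare the two outputs line by line.
print("interpreter: " + sys.executable)
print("cwd:         " + os.getcwd())
print("sys.path:")
for entry in sys.path:
    print("  " + entry)
```

Whichever line differs points at the culprit; a missing `sys.path` entry is usually fixable by setting `PYTHONPATH` or installing the dependency into the interpreter the shell uses.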
Django 1.8 HStore field throwing Progamming Error | 29,557,782 | 4 | 2015-04-10T09:22:19Z | 29,566,352 | 11 | 2015-04-10T16:28:39Z | [
"python",
"database",
"postgresql",
"psycopg2",
"django-1.8"
] | I'm following the code in the documentation
```
from django.contrib.postgres.fields import HStoreField
from django.db import models
class Dog(models.Model):
name = models.CharField(max_length=200)
data = HStoreField()
def __str__(self): # __unicode__ on Python 2
return self.name
```
Running this code results in:
```
ProgrammingError: can't adapt type 'dict'
```
I'm using Postgres==9.3.6, psycopg2==2.6, and I've checked that the HStore extension is enabled.
Any ideas? | Ensure you add `'django.contrib.postgres'` to `settings.INSTALLED_APPS`. |
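For reference, the relevant `settings.py` fragment would look something like this (a sketch; the other apps listed are placeholders for whatever your project uses):

```python
INSTALLED_APPS = (
    'django.contrib.postgres',  # enables HStoreField support
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    # ... your own apps here ...
)
```

With the app installed, Django registers psycopg2's hstore handler when database connections are created, which is what resolves the `can't adapt type 'dict'` error.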
How can I generate a list of consecutive numbers? | 29,558,007 | 6 | 2015-04-10T09:34:50Z | 29,558,077 | 9 | 2015-04-10T09:38:41Z | [
"python",
"list",
"python-3.x"
] | Say if you had a number input `8` in python and you wanted to generate a list of consecutive numbers up to `8` like
```
[0, 1, 2, 3, 4, 5, 6, 7, 8]
```
How could you do this? | In Python 3, you can use the builtin [`range`](https://docs.python.org/3/library/functions.html#func-range) function like this
```
>>> list(range(9))
[0, 1, 2, 3, 4, 5, 6, 7, 8]
```
**Note 1:** Python 3.x's `range` function, returns a `range` object. If you want a list you need to explicitly convert that to a list, with the [`list`](https://docs.python.org/3/library/functions.html#func-list) function like I have shown in the answer.
**Note 2:** We pass number 9 to `range` function because, `range` function will generate numbers till the given number but not including the number. So, we give the actual number + 1.
**Note 3:** There is a small difference in functionality of `range` in Python 2 and 3. You can read more about that in [this answer](http://stackoverflow.com/a/23221045/1903116). |
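Putting the notes together in a short, runnable Python 3 snippet (the start/stop/step forms are shown for completeness):

```python
numbers = list(range(9))          # 0 up to, but not including, 9
print(numbers)                    # [0, 1, 2, 3, 4, 5, 6, 7, 8]

evens = list(range(0, 9, 2))      # start, stop, step
print(evens)                      # [0, 2, 4, 6, 8]

print(list(range(5, 0, -1)))      # counting down: [5, 4, 3, 2, 1]
```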
How to suppress the deprecation warnings in Django? | 29,562,070 | 23 | 2015-04-10T12:56:43Z | 29,983,195 | 8 | 2015-05-01T07:32:52Z | [
"python",
"django",
"django-admin"
] | Every time I'm using the `django-admin` command (even on TAB-completion) it throws a `RemovedInDjango19Warning` (and a lot more if I use the *test* command). How can I suppress those warnings?
I'm using Django 1.8 with Python 3.4 (in a virtual environment).
As far as I can tell, all those warnings come from libraries not from my code, here are some examples:
* `…/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.contrib.contenttypes.generic is deprecated and will be removed in Django 1.9. Its contents have been moved to the fields, forms, and admin submodules of django.contrib.contenttypes.
return f(*args, **kwds)`
* `…/lib/python3.4/site-packages/django/contrib/admin/util.py:7: RemovedInDjango19Warning: The django.contrib.admin.util module has been renamed. Use django.contrib.admin.utils instead.
"Use django.contrib.admin.utils instead.", RemovedInDjango19Warning)`
* ``` …/lib/python3.4/site-packages/django/templatetags/future.py:25: RemovedInDjango19Warning: Loading the ``url`` tag from the ``future`` library is deprecated and will be removed in Django 1.9. Use the default ``url`` tag instead.
RemovedInDjango19Warning) ``` | In manage.py, add this to the top line --
```
#!/usr/bin/env PYTHONWARNINGS=ignore python
```
This will suppress all warnings, which I agree can in some situations be undesirable if you're using a lot of third party libraries.
Disclaimer: Recommended only after you've already seen the warnings at least 1,000 too many times already, and should be removed when you upgrade Django.
Note: this may have some undesirable effects on some platforms, e.g. swallowing more output than just warnings. |
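An alternative sketch that avoids editing the shebang is to install a warnings filter early in `manage.py` itself; the blanket `"ignore"` is shown here, but a category-specific `filterwarnings` call is usually safer:

```python
import warnings

# Blanket suppression; a narrower filter is usually better, e.g.
# warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.simplefilter("ignore")

# Demonstration: with the filter active, nothing is emitted or recorded.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore")
    warnings.warn("deprecated API", DeprecationWarning)
print(len(caught))  # 0
```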
How to suppress the deprecation warnings in Django? | 29,562,070 | 23 | 2015-04-10T12:56:43Z | 31,103,483 | 27 | 2015-06-28T18:48:37Z | [
"python",
"django",
"django-admin"
] | Every time I'm using the `django-admin` command (even on TAB-completion) it throws a `RemovedInDjango19Warning` (and a lot more if I use the *test* command). How can I suppress those warnings?
I'm using Django 1.8 with Python 3.4 (in a virtual environment).
As far as I can tell, all those warnings come from libraries not from my code, here are some examples:
* `…/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.contrib.contenttypes.generic is deprecated and will be removed in Django 1.9. Its contents have been moved to the fields, forms, and admin submodules of django.contrib.contenttypes.
return f(*args, **kwds)`
* `…/lib/python3.4/site-packages/django/contrib/admin/util.py:7: RemovedInDjango19Warning: The django.contrib.admin.util module has been renamed. Use django.contrib.admin.utils instead.
"Use django.contrib.admin.utils instead.", RemovedInDjango19Warning)`
* ``` …/lib/python3.4/site-packages/django/templatetags/future.py:25: RemovedInDjango19Warning: Loading the ``url`` tag from the ``future`` library is deprecated and will be removed in Django 1.9. Use the default ``url`` tag instead.
RemovedInDjango19Warning) ``` | Adding a logging filter to settings.py can suppress these console warnings (at least for manage.py commands in Django 1.7, Python 3.4).
A filter can selectively suppress warnings. The following code creates a new "suppress\_deprecated" filter for the console and appends it to the default logging filters. Add this block to settings.py to configure the LOGGING variable:
```
import logging, copy
from django.utils.log import DEFAULT_LOGGING
LOGGING = copy.deepcopy(DEFAULT_LOGGING)
LOGGING['filters']['suppress_deprecated'] = {
'()': 'mysite.settings.SuppressDeprecated'
}
LOGGING['handlers']['console']['filters'].append('suppress_deprecated')
class SuppressDeprecated(logging.Filter):
def filter(self, record):
WARNINGS_TO_SUPPRESS = [
'RemovedInDjango18Warning',
'RemovedInDjango19Warning'
]
# Return false to suppress message.
return not any([warn in record.getMessage() for warn in WARNINGS_TO_SUPPRESS])
```
The 'mysite.settings.SuppressDeprecated' string needs to change if the root website module (or filter location and/or name) is different. |
How to suppress the deprecation warnings in Django? | 29,562,070 | 23 | 2015-04-10T12:56:43Z | 34,895,696 | 8 | 2016-01-20T09:07:53Z | [
"python",
"django",
"django-admin"
] | Every time I'm using the `django-admin` command (even on TAB-completion) it throws a `RemovedInDjango19Warning` (and a lot more if I use the *test* command). How can I suppress those warnings?
I'm using Django 1.8 with Python 3.4 (in a virtual environment).
As far as I can tell, all those warnings come from libraries not from my code, here are some examples:
* `…/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.contrib.contenttypes.generic is deprecated and will be removed in Django 1.9. Its contents have been moved to the fields, forms, and admin submodules of django.contrib.contenttypes.
return f(*args, **kwds)`
* `…/lib/python3.4/site-packages/django/contrib/admin/util.py:7: RemovedInDjango19Warning: The django.contrib.admin.util module has been renamed. Use django.contrib.admin.utils instead.
"Use django.contrib.admin.utils instead.", RemovedInDjango19Warning)`
* ``` …/lib/python3.4/site-packages/django/templatetags/future.py:25: RemovedInDjango19Warning: Loading the ``url`` tag from the ``future`` library is deprecated and will be removed in Django 1.9. Use the default ``url`` tag instead.
RemovedInDjango19Warning) ``` | None of the above worked for me on Django 1.9. I fixed it by adding the following lines to settings.py:
```
import logging
def filter_deprecation_warnings(record):
warnings_to_suppress = [
'RemovedInDjango110Warning'
]
# Return false to suppress message.
return not any([warn in record.getMessage()
for warn in warnings_to_suppress])
warn_logger = logging.getLogger('py.warnings')
warn_logger.addFilter(filter_deprecation_warnings)
``` |
Conditional mocking: Call original function if condition does match | 29,562,460 | 7 | 2015-04-10T13:17:12Z | 29,563,665 | 7 | 2015-04-10T14:11:08Z | [
"python",
"mocking"
] | How can I conditionally call the original method in a mock?
In this example I only want to fake a return value if `bar=='x'`. Otherwise I want to call the original method.
```
def mocked_some_method(bar):
if bar=='x':
return 'fake'
return some_how_call_original_method(bar)
with mock.patch('mylib.foo.some_method', mocked_some_method):
do_some_stuff()
```
I know that it is a bit strange. If I want to fake `mylib.foo.some_method` inside `do_some_stuff()` it should be condition-less. All (not some) calls to `some_method` should be mocked.
In my case it is an integration test, not a tiny unittest, and `mylib.foo.some_method` is a kind of dispatcher which gets used very often. And in one case I need to fake the result. | If you just need to replace the behavior without caring about the mock's call-assertion functions, you can use the `new` argument; otherwise you can use [`side_effect`](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.side_effect), which takes a callable.
I guess that `some_method` is an object method (rather than a `staticmethod`), so you need a reference to its object in order to call it. Your wrapper should declare the object as its first argument, and your patch should use `autospec=True` so that the correct signature is used in the `side_effect` case.
The final trick is to save a reference to the original method and use it to make the call.
```
orig = mylib.foo.some_method
def mocked_some_method(self, bar):
if bar=='x':
return 'fake'
return orig(self, bar)
#Just replace:
with mock.patch('mylib.foo.some_method', new=mocked_some_method):
do_some_stuff()
#Replace by mock
with mock.patch('mylib.foo.some_method', side_effect=mocked_some_method, autospec=True) as mock_some_method:
do_some_stuff()
assert mock_some_method.called
``` |
How do I load session and cookies from Selenium browser to requests library in Python? | 29,563,335 | 6 | 2015-04-10T13:56:12Z | 29,563,698 | 9 | 2015-04-10T14:12:24Z | [
"python",
"selenium-webdriver",
"python-requests"
] | How can I load session and cookies from Selenium browser? The following code:
```
import requests
cookies = [{u'domain': u'academics.vit.ac.in',
u'name': u'ASPSESSIONIDAEQDTQRB',
u'value': u'ADGIJGJDDGLFIIOCEZJHJCGC',
u'expiry': None, u'path': u'/',
u'secure': True}]
response = requests.get(url2, cookies=cookies)
```
gives me the following exception:
```
Traceback (most recent call last):
File "F:\PYTHON\python_scripts\cookies\cookies3.py", line 23, in <module>
response = requests.get(url2, cookies=cookies)
File "C:\Python27\lib\site-packages\requests\api.py", line 68, in get
return request('get', url, **kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 450, in request
prep = self.prepare_request(req)
cookies = cookiejar_from_dict(cookies)
File "C:\Python27\lib\site-packages\requests\cookies.py", line 439, in cookiejar_from_dict
cookiejar.set_cookie(create_cookie(name, cookie_dict[name]))
TypeError: list indices must be integers, not dict
``` | First you have to get the cookies from your driver instance:
```
cookies = driver.get_cookies()
```
This returns a [set of cookie dictionaries](http://selenium-python.readthedocs.io/api.html?highlight=get_cookies#selenium.webdriver.remote.webdriver.WebDriver.get_cookies) for your session.
Next, set those cookies in `requests`:
```
s = requests.Session()
for cookie in cookies:
s.cookies.set(cookie['name'], cookie['value'])
``` |
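If the target site checks cookie scope, it may be safer to carry over the domain and path as well; `requests`' cookie jar accepts them as keyword arguments to `set`. A sketch with made-up cookie values shaped like `driver.get_cookies()` output:

```python
import requests

# Example list shaped like driver.get_cookies() output (values made up):
cookies = [{'name': 'sessionid', 'value': 'abc123',
            'domain': 'example.com', 'path': '/'}]

s = requests.Session()
for cookie in cookies:
    s.cookies.set(
        cookie['name'],
        cookie['value'],
        domain=cookie.get('domain'),
        path=cookie.get('path', '/'),
    )

print(s.cookies.get('sessionid'))  # abc123
```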
Why don't I get any syntax errors when I execute my Python script with Perl? | 29,563,832 | 80 | 2015-04-10T14:19:28Z | 29,563,961 | 110 | 2015-04-10T14:25:08Z | [
"python",
"perl"
] | I just wrote some testing python code into `test.py`, and I'm launching it as follows:
```
perl test.py
```
After a while I realized my mistake. I say "after a while", because the
Python code gets actually correctly executed, as if in Python interpreter!
Why is my Perl interpreting my Python? `test.py` looks like this:
```
#!/usr/bin/python
...Python code here...
```
Interestingly, if I do the opposite (i.e. call `python something.pl`) I get a good deal of syntax errors. | From [perlrun](http://perldoc.perl.org/perlrun.html),
> If the `#!` line does not contain the word "perl" nor the word "indir" the program named after the `#!` is executed instead of the Perl interpreter. This is slightly bizarre, but it helps people on machines that don't do `#!` , because they can tell a program that their SHELL is */usr/bin/perl*, and Perl will then dispatch the program to the correct interpreter for them.
For example,
```
$ cat >a
#!/bin/cat
meow
$ perl a
#!/bin/cat
meow
``` |
Most pythonic way to remove tuples from a list if first element is a duplicate | 29,563,953 | 2 | 2015-04-10T14:24:48Z | 29,564,118 | 8 | 2015-04-10T14:32:40Z | [
"python",
"list",
"tuples"
] | The code I have so far is pretty ugly:
```
orig = [(1,2),(1,3),(2,3),(3,3)]
previous_elem = []
unique_tuples = []
for tuple in orig:
if tuple[0] not in previous_elem:
unique_tuples += [tuple]
previous_elem += [tuple[0]]
assert unique_tuples == [(1,2),(2,3),(3,3)]
```
There must be a more pythonic solution. | If you don't care which of the duplicate tuples you keep, you could always convert your list to a dictionary and back:
```
>>> orig = [(1,2),(1,3),(2,3),(3,3)]
>>> list(dict(orig).items())
[(1, 3), (2, 3), (3, 3)]
```
If you want to keep the ***first*** tuple for each duplicate, you could reverse your list twice and use an [`OrderedDict`](https://docs.python.org/3.4/library/collections.html#collections.OrderedDict), like this:
```
>>> from collections import OrderedDict
>>> orig = [(1,2),(1,3),(2,3),(3,3)]
>>> new = list(OrderedDict(orig[::-1]).items())[::-1]
[(1, 2), (2, 3), (3, 3)]
```
These are not the most efficient solutions (*if that's of great importance*) , but they do make for nice idiomatic one-liners.
---
# Some benchmarking
Note the difference in speed, and that if you do not care which of the duplicate tuples you keep, the first option is much more efficient:
```
>>> import timeit
>>> setup = '''
orig = [(1,2),(1,3),(2,3),(3,3)]
'''
>>> print (min(timeit.Timer('(list(dict(orig).items()))', setup=setup).repeat(7, 1000)))
0.0015771419037069459
```
compared to
```
>>>setup = '''
orig = [(1,2),(1,3),(2,3),(3,3)]
from collections import OrderedDict
'''
>>> print (min(timeit.Timer('(list(OrderedDict(orig[::-1]).items())[::-1])',
setup=setup).repeat(7, 1000)))
0.024554947372323
```
The first option is nearly 15 times faster according to these speed tests.
That being said however, [Saksham's answer](http://stackoverflow.com/a/29612692/4686625) is also `O(n)` and smashes these dictionary methods efficiency wise:
```
>>> setup = '''
orig = [(1,2),(1,3),(2,3),(3,3)]
newlist = []
seen = set()
def fun():
for (a, b) in orig:
if not a in seen:
newlist.append((a, b))
seen.add(a)
return newlist
'''
>>> print (min(timeit.Timer('fun()', setup=setup).repeat(7, 1000)))
0.0004833390384996095
``` |
Avoiding code repetition in default arguments in Python | 29,566,400 | 14 | 2015-04-10T16:31:40Z | 29,566,493 | 7 | 2015-04-10T16:36:36Z | [
"python",
"default",
"argument-passing"
] | Consider a typical function with default arguments:
```
def f(accuracy=1e-3, nstep=10):
...
```
This is compact and easy to understand. But what if we have another function `g` that will call `f`, and we want to pass on some arguments of `g` to `f`? A natural way of doing this is:
```
def g(accuracy=1e-3, nstep=10):
f(accuracy, nstep)
...
```
The problem with this way of doing things is that the default values of the optional arguments get repeated. Usually when propagating default arguments like this, one wants the same default in the upper function (`g`) as in the lower function (`f`), and hence any time the default changes in `f` one needs to go through all the functions that call it and update the defaults of any of their arguments they would propagate to `f`.
Another way of doing this is to use a placeholder argument, and fill in its value inside the function:
```
def f(accuracy=None, nstep=None):
if accuracy is None: accuracy = 1e-3
if nstep is None: nstep=10
...
def g(accuracy=None, nstep=None):
f(accuracy, nstep)
...
```
Now the calling function doesn't need to know what `f`'s defaults are. But the `f` interface is now a bit more cumbersome, and less clear. This is the typical approach in languages without explicit default argument support, like fortran or javascript. But if one does everything this way in python, one is throwing away most of the language's default argument support.
Is there a better approach than these two? What is the standard, pythonic way of doing this? | Define global constants:
```
ACCURACY = 1e-3
NSTEP = 10
def f(accuracy=ACCURACY, nstep=NSTEP):
...
def g(accuracy=ACCURACY, nstep=NSTEP):
f(accuracy, nstep)
```
---
If `f` and `g` are defined in different modules, then you could make a `constants.py` module too:
```
ACCURACY = 1e-3
NSTEP = 10
```
and then define `f` with:
```
from constants import ACCURACY, NSTEP
def f(accuracy=ACCURACY, nstep=NSTEP):
...
```
and similarly for `g`. |
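Another pattern worth mentioning: if `g` never needs to inspect the values itself, it can simply forward keyword arguments and leave `f` as the single owner of the defaults (a sketch; the trade-off is that `g`'s signature becomes opaque to `help()` and introspection):

```python
def f(accuracy=1e-3, nstep=10):
    return accuracy, nstep

def g(**kwargs):
    # g accepts the same optional arguments as f
    # without restating the default values
    return f(**kwargs)

print(g())          # (0.001, 10) -- f's defaults apply
print(g(nstep=20))  # (0.001, 20)
```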
Generators and for loops in Python | 29,570,348 | 21 | 2015-04-10T20:42:45Z | 29,570,364 | 45 | 2015-04-10T20:44:01Z | [
"python",
"generator"
] | So I have a generator function, that looks like this.
```
def generator():
while True:
for x in range(3):
for j in range(5):
yield x
```
After I load up this function and call "next" a bunch of times, I'd expect it to yield values
`0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 0 0 0 0 0 ...`
But instead it just yields 0 all the time. Why is that?
```
>>> execfile("test.py")
>>> generator
<function generator at 0x10b6121b8>
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
``` | `generator()` initializes new generator object:
```
In [4]: generator() is generator() # Creating 2 separate objects
Out[4]: False
```
Then `generator().next()` gets the first value from the newly created generator object (*0* in your case).
You should call `generator` once:
```
In [5]: gen = generator() # Storing new generator object, will reuse it
In [6]: [gen.next() for _ in range(6)] # Get first 6 values for demonstration purposes
Out[6]: [0, 0, 0, 0, 0, 1]
```
Note: [`generator.next`](https://docs.python.org/2/reference/expressions.html?highlight=next#generator.next) was removed from Python 3 ([PEP 3114](https://www.python.org/dev/peps/pep-3114/)) - use [`next` function](https://docs.python.org/3/library/functions.html?highlight=next#next) instead. |
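For Python 3, the same demonstration using the builtin `next` function:

```python
def generator():
    while True:
        for x in range(3):
            for j in range(5):
                yield x

gen = generator()                     # create the generator object once
print([next(gen) for _ in range(6)])  # [0, 0, 0, 0, 0, 1]
```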
Generators and for loops in Python | 29,570,348 | 21 | 2015-04-10T20:42:45Z | 29,570,452 | 17 | 2015-04-10T20:49:30Z | [
"python",
"generator"
] | So I have a generator function, that looks like this.
```
def generator():
while True:
for x in range(3):
for j in range(5):
yield x
```
After I load up this function and call "next" a bunch of times, I'd expect it to yield values
`0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 0 0 0 0 0 ...`
But instead it just yields 0 all the time. Why is that?
```
>>> execfile("test.py")
>>> generator
<function generator at 0x10b6121b8>
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
``` | With each call of `generator` you are creating a new generator object:
```
generator().next() # 1st item in 1st generator
generator().next() # 1st item in 2nd generator
```
Create one generator, and then call the `next` for subsequent items:
```
g = generator()
g.next() # 1st item in 1st generator
g.next() # 2nd item in 1st generator
``` |
Django not able to render context when in shell | 29,571,606 | 6 | 2015-04-10T22:26:01Z | 29,571,809 | 8 | 2015-04-10T22:46:44Z | [
"python",
"django",
"shell",
"django-templates",
"django-context"
] | This is what I am trying to run. When I run the server and execute these lines within a view, returning an HttpResponse, everything works fine. However, when I run `python manage.py shell` and try to run through these lines, I get an error:
```
product = Product.objects.get(pk=4)
template = loader.get_template('weekly-email.html')
user = User.objects.get(pk=1)
body = template.render(Context({
'user': user,
'product': product,
}))
```
Output:
```
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/template/backends/django.py", line 74, in render
return self.template.render(context)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/template/base.py", line 209, in render
return self._render(context)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/template/base.py", line 917, in render_node
return node.render(context)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/template/base.py", line 963, in render
return render_value_in_context(output, context)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/template/base.py", line 939, in render_value_in_context
value = localize(value, use_l10n=context.use_l10n)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/utils/formats.py", line 181, in localize
return number_format(value, use_l10n=use_l10n)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/utils/formats.py", line 162, in number_format
get_format('DECIMAL_SEPARATOR', lang, use_l10n=use_l10n),
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/utils/formats.py", line 110, in get_format
for module in get_format_modules(lang):
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/utils/formats.py", line 82, in get_format_modules
modules = _format_modules_cache.setdefault(lang, list(iter_format_modules(lang, settings.FORMAT_MODULE_PATH)))
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/utils/formats.py", line 51, in iter_format_modules
if not check_for_language(lang):
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/utils/translation/__init__.py", line 181, in check_for_language
return _trans.check_for_language(lang_code)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/functools.py", line 472, in wrapper
result = user_function(*args, **kwds)
File "/Users/croberts/.virtualenvs/testproj/lib/python3.4/site-packages/django/utils/translation/trans_real.py", line 409, in check_for_language
if not language_code_re.search(lang_code):
TypeError: expected string or buffer
```
**edit**: and here is my settings.py:
```
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
DEBUG = True
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SECRET_KEY = 'SECRET'
ALLOWED_HOSTS = []
AUTH_USER_MODEL = 'crunch.User'
STATICFILES_DIRS = (
'/Users/croberts/testproj/static/',
)
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'crunch',
'emailmanager',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'testproj.urls'
WSGI_APPLICATION = 'testproj.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'database'),
}
}
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'MST'
USE_I18N = True
USE_L10N = True
USE_TZ = False
STATIC_URL = '/static/'
MEDIA_ROOT = BASE_DIR+'/media/'
MEDIA_URL = '/media/'
```
Also, I am using django 1.8. | This is a known issue and will be fixed in 1.8.1.
Meanwhile, you can manually activate a language in your shell to fix it:
```
from django.utils.translation import activate
activate('en') # or any language code
```
**UPDATE**: 1.8.1 has been released, so the best solution is to upgrade to the latest 1.8.x version. |
Remove multiple elements from a list of index with Python | 29,571,621 | 4 | 2015-04-10T22:27:54Z | 29,571,633 | 7 | 2015-04-10T22:29:14Z | [
"python",
"list",
"collections"
] | I have a list of values, and a list of indices, and I need to remove the elements that the indices point to.
This is my solution, but I don't like the implementation as it requires to import packages, doesn't work when the values contain maxint, and iterate over the values multiple times.
```
def remove_abnormalities(values, indices):
v = list(values)
for i in indices:
v[i] = sys.maxint
return filter(lambda i: i != sys.maxint, v)
```
Any better solutions? | This should work:
```
def remove_abnormalities(values, indices):
return [val for i, val in enumerate(values) if i not in indices]
```
Additionally you can turn `indices` into a set before filtering for more performance, if the number of indices is large. |
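Spelling out the set conversion mentioned above (a sketch):

```python
def remove_abnormalities(values, indices):
    index_set = set(indices)  # O(1) membership tests instead of list scans
    return [val for i, val in enumerate(values) if i not in index_set]

print(remove_abnormalities([10, 20, 30, 40, 50], [1, 3]))  # [10, 30, 50]
```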
Test if python Counter is contained in another Counter | 29,575,660 | 3 | 2015-04-11T08:12:23Z | 29,576,074 | 8 | 2015-04-11T09:07:48Z | [
"python",
"algorithm",
"counter",
"inclusion"
] | How to test if a python [`Counter`](https://docs.python.org/2/library/collections.html#collections.Counter) is *contained* in another one using the following definition:
> *A Counter `a` is contained in a Counter `b` if, and only if, for every key `k` in `a`, the value `a[k]` is less or equal to the value `b[k]`. The `Counter({'a': 1, 'b': 1})` is contained in `Counter({'a': 2, 'b': 2})` but it is not contained in `Counter({'a': 2, 'c': 2})`.*
I think it is a poor design choice but in python *2.x* the comparison operators (`<`, `<=`, `>=`, `>`) do not use the previous definition, so the third Counter is considered *greater-than* the first. In python *3.x*, instead, `Counter` is an *unorderable type*. | While `Counter` instances are not comparable with the `<` and `>` operators, you can find their difference with the `-` operator. The difference never returns negative counts, so if `A - B` is empty, you know that `B` contains all the items in `A`.
```
def contains(larger, smaller):
return not smaller - larger
``` |
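A quick self-contained check against the examples from the question (restating the function so the snippet runs on its own):

```python
from collections import Counter

def contains(larger, smaller):
    return not smaller - larger

a = Counter({'a': 1, 'b': 1})
print(contains(Counter({'a': 2, 'b': 2}), a))  # True
print(contains(Counter({'a': 2, 'c': 2}), a))  # False: no 'b' available
```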
Django migration file in another app? | 29,575,802 | 4 | 2015-04-11T08:31:49Z | 29,587,968 | 7 | 2015-04-12T09:32:21Z | [
"python",
"django",
"django-models",
"django-admin"
] | Let's imagine a following simplified Django project:
```
<root>/lib/python2.7/site-packages/externalapp/shop
<root>/myapp
```
`myapp` also extends the `externalapp.shop.models` models by adding a few fields. `manage.py makemigrations` generated the following schema migration file, called *0004\_auto\_20150410\_2001.py*:
```
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
# __init__ is added by me as an attempt how to tell django's
# migration loader operations are for the different application
def __init__(self, name, app_label):
super(Migration, self).__init__(name, 'shop')
dependencies = [
('myapp', '__first__'),
('shop', '0003_auto_20150408_0958'),
]
operations = [
migrations.AddField(
model_name='product',
name='vat',
field=models.ForeignKey(to='myapp.VAT', null=True),
),
]
```
If the above migration schema is placed in `<root>/lib/python2.7/site-packages/externalapp/shop/migrations/` path by default, `manage.py migrate` succeeds and table fields are correctly added.
However if I do move the above migration file into `myapp/migrations/`, following `manage.py migrate` fails with
*django.core.management.base.CommandError: Conflicting migrations detected (0001\_initial, 0004\_auto\_20150410\_2001 in myapp).
To fix them run 'python manage.py makemigrations --merge'*
This is an error message I can't quite understand, and the suggested `makemigrations --merge` fails with:
*ValueError: Could not find common ancestor of set([u'0001\_initial', u'0004\_auto\_20150410\_2001'])*
I've tried to override `migrations.Migration.__init__` to alter the derived `app_label`, but it seems the migration loader ignores it.
How can I adjust the migration file so it can work from another application?
The reason is that in production the `externalapp` sources can't be touched directly; they are read-only. | To move a migration file around a Django project, such as when extending models of other applications, you need to ensure two things in your `django.db.migrations.Migration` descendant:
* explicitly set the application name, as the migration loader derives it automatically from the app where the migration file resides and would otherwise attempt to perform the operations on different models
* notify the migration recorder that it provides a migration for the other application, or it would still consider that migration unapplied (records about applied migrations are stored in a table, currently named `django_migrations`)
I've solved the issue in the migration initializer, which may look like:
```
from django.db import migrations
TARGET_APP = 'shop' # application label migration is for
class Migration(migrations.Migration):
def __init__(self, name, app_label):
# overriding application operated upon
super(Migration, self).__init__(name, TARGET_APP)
# specify what original migration file it replaces
# or leave migration loader confused about unapplied migration
replaces = ((TARGET_APP, __module__.rsplit('.', 1)[-1]),)
```
It works for me, and I find it a generic enough approach.
I'm eager to hear about a better/simpler solution if possible. |
Shuffle DataFrame rows | 29,576,430 | 12 | 2015-04-11T09:47:57Z | 29,576,803 | 20 | 2015-04-11T10:26:59Z | [
"python",
"pandas",
"dataframe",
"permutation",
"shuffle"
] | I have the following DataFrame:
```
Col1 Col2 Col3 Type
0 1 2 3 1
1 4 5 6 1
...
20 7 8 9 2
21 10 11 12 2
...
45 13 14 15 3
46 16 17 18 3
...
```
The DataFrame is read from a csv file. All rows which have `Type` 1 are on top, followed by the rows with `Type` 2, followed by the rows with `Type` 3, etc.
I would like to shuffle the DataFrame's rows, so that all `Type`'s are mixed. A possible result could be:
```
Col1 Col2 Col3 Type
0 7 8 9 2
1 13 14 15 3
...
20 1 2 3 1
21 10 11 12 2
...
45 4 5 6 1
46 16 17 18 3
...
```
As can be seen from the result, the order of the rows is shuffled, but the columns remain the same. I don't know if I am explaining this clearly. Let me know if I don't.
How can I achieve this? | You can shuffle the rows of a dataframe by indexing with a shuffled index. For this, you can eg use `np.random.permutation` (but `np.random.choice` is also a possibility):
```
In [12]: df = pd.read_csv(StringIO(s), sep="\s+")
In [13]: df
Out[13]:
Col1 Col2 Col3 Type
0 1 2 3 1
1 4 5 6 1
20 7 8 9 2
21 10 11 12 2
45 13 14 15 3
46 16 17 18 3
In [14]: df.iloc[np.random.permutation(len(df))]
Out[14]:
Col1 Col2 Col3 Type
46 16 17 18 3
45 13 14 15 3
20 7 8 9 2
0 1 2 3 1
1 4 5 6 1
21 10 11 12 2
```
If you want to keep the index numbered from 1, 2, .., n as in your example, you can simply reset the index: `df_shuffled.reset_index(drop=True)` |
Shuffle DataFrame rows | 29,576,430 | 12 | 2015-04-11T09:47:57Z | 34,879,805 | 31 | 2016-01-19T14:49:17Z | [
"python",
"pandas",
"dataframe",
"permutation",
"shuffle"
] | I have the following DataFrame:
```
Col1 Col2 Col3 Type
0 1 2 3 1
1 4 5 6 1
...
20 7 8 9 2
21 10 11 12 2
...
45 13 14 15 3
46 16 17 18 3
...
```
The DataFrame is read from a csv file. All rows which have `Type` 1 are on top, followed by the rows with `Type` 2, followed by the rows with `Type` 3, etc.
I would like to shuffle the DataFrame's rows, so that all `Type`'s are mixed. A possible result could be:
```
Col1 Col2 Col3 Type
0 7 8 9 2
1 13 14 15 3
...
20 1 2 3 1
21 10 11 12 2
...
45 4 5 6 1
46 16 17 18 3
...
```
As can be seen from the result, the order of the rows is shuffled, but the columns remain the same. I don't know if I am explaining this clearly. Let me know if I don't.
How can I achieve this? | The more idiomatic way to do this with pandas is to use the `.sample` method of your dataframe, i.e.
```
df.sample(frac=1)
```
The `frac` keyword argument specifies the fraction of rows to return in the random sample, so `frac=1` means return all rows (in random order).
**Note:**
*If you wish to shuffle your dataframe in-place and reset the index, you could do e.g.*
```
df = df.sample(frac=1).reset_index(drop=True)
```
*Here, specifying `drop=True` prevents `.reset_index` from creating a column containing the old index entries.* |
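For instance (a minimal sketch with made-up values; `random_state` pins the shuffle so it is repeatable):

```
import pandas as pd

df = pd.DataFrame({"Col1": [1, 4, 7, 10], "Type": [1, 1, 2, 2]})

# frac=1 returns every row, in random order; reset_index restores 0..n-1
shuffled = df.sample(frac=1, random_state=0).reset_index(drop=True)

assert len(shuffled) == len(df)
assert sorted(shuffled["Col1"].tolist()) == [1, 4, 7, 10]
```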
How to make an object both a Python2 and Python3 iterator? | 29,578,469 | 3 | 2015-04-11T13:34:53Z | 29,578,499 | 13 | 2015-04-11T13:38:09Z | [
"python",
"python-2.7",
"python-3.x",
"iterator"
] | [This Stack Overflow post](http://stackoverflow.com/questions/19151/build-a-basic-python-iterator) is about making an object an iterator in Python.
In Python 2, that means you need to implement an `__iter__()` method, and a `next()` method. But in Python 3, you need to implement a *different* method, instead of `next()` you need to implement `__next__()`.
How does one make an object which is an iterator in both Python 2 and 3? | Just give it both a `__next__` and a `next` method; one can be an alias of the other:
```
class Iterator(object):
def __iter__(self):
return self
def __next__(self):
# Python 3
return 'a value'
next = __next__ # Python 2
``` |
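As a quick illustration, a small countdown iterator written this way satisfies both the Python 2 protocol (`next`) and the Python 3 protocol (`__next__`); the class name and values are made up:

```
class CountDown(object):
    """Yield n, n-1, ..., 1 on both Python 2 and Python 3."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):          # Python 3 protocol
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1
    next = __next__              # Python 2 alias

assert list(CountDown(3)) == [3, 2, 1]
assert CountDown(1).next() == 1   # the alias also works directly
```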
Still can't install scipy due to missing fortran compiler after brew install gcc on Mac OS X | 29,586,487 | 15 | 2015-04-12T06:00:41Z | 29,596,528 | 15 | 2015-04-13T01:10:47Z | [
"python",
"osx",
"numpy",
"fortran",
"homebrew"
] | I have read and followed [this answer](http://stackoverflow.com/questions/14821297/scipy-build-install-mac-osx) to install scipy/numpy/theano. However, it still failed on the same error of a missing Fortran compiler after brew install gcc. While Homebrew installed gcc-4.8, it didn't install any gfortran or g95 commands. I figured gfortran may just be a [synonym](https://gcc.gnu.org/onlinedocs/gfortran/Invoking-GNU-Fortran.html#Invoking-GNU-Fortran) for gcc, so I created a symlink
```
$ cd /usr/local/bin
$ ln -s gcc-4.8 gfortran
$ pip install scipy
```
Then it detects the gfortran command but still complains that there is no Fortran compiler
```
customize Gnu95FCompiler
Found executable /usr/local/bin/gfortran
customize NAGFCompiler
Could not locate executable f95
customize AbsoftFCompiler
Could not locate executable f90
Could not locate executable f77
customize IBMFCompiler
Could not locate executable xlf90
Could not locate executable xlf
customize IntelFCompiler
Could not locate executable ifort
Could not locate executable ifc
customize GnuFCompiler
Could not locate executable g77
customize G95FCompiler
Could not locate executable g95
customize PGroupFCompiler
Could not locate executable pgfortran
don't know how to compile Fortran code on platform 'posix'
building 'dfftpack' library
error: library dfftpack has Fortran sources but no Fortran compiler found
```
What else should I do? | Fixed by upgrading pip, even though I just installed my pip/virtualenv the first time anew on the same day.
```
(mypy)MAC0227: $ pip install --upgrade pip
...
(mypy)MAC0227: $ pip install theano
/Users/me/.virtualenvs/mypy/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Requirement already satisfied (use --upgrade to upgrade): theano in /Users/me/.virtualenvs/mypy/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): numpy>=1.6.2 in /Users/me/.virtualenvs/mypy/lib/python2.7/site-packages (from theano)
Collecting scipy>=0.11 (from theano)
/Users/me/.virtualenvs/mypy/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading scipy-0.15.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (19.8MB)
100% |████████████████████████████████| 19.8MB 23kB/s
Installing collected packages: scipy
Successfully installed scipy-0.15.1
``` |
Still can't install scipy due to missing fortran compiler after brew install gcc on Mac OS X | 29,586,487 | 15 | 2015-04-12T06:00:41Z | 34,038,311 | 11 | 2015-12-02T08:46:53Z | [
"python",
"osx",
"numpy",
"fortran",
"homebrew"
] | I have read and followed [this answer](http://stackoverflow.com/questions/14821297/scipy-build-install-mac-osx) to install scipy/numpy/theano. However, it still failed on the same error of a missing Fortran compiler after brew install gcc. While Homebrew installed gcc-4.8, it didn't install any gfortran or g95 commands. I figured gfortran may just be a [synonym](https://gcc.gnu.org/onlinedocs/gfortran/Invoking-GNU-Fortran.html#Invoking-GNU-Fortran) for gcc, so I created a symlink
```
$ cd /usr/local/bin
$ ln -s gcc-4.8 gfortran
$ pip install scipy
```
Then it detects the gfortran command but still complains that there is no Fortran compiler
```
customize Gnu95FCompiler
Found executable /usr/local/bin/gfortran
customize NAGFCompiler
Could not locate executable f95
customize AbsoftFCompiler
Could not locate executable f90
Could not locate executable f77
customize IBMFCompiler
Could not locate executable xlf90
Could not locate executable xlf
customize IntelFCompiler
Could not locate executable ifort
Could not locate executable ifc
customize GnuFCompiler
Could not locate executable g77
customize G95FCompiler
Could not locate executable g95
customize PGroupFCompiler
Could not locate executable pgfortran
don't know how to compile Fortran code on platform 'posix'
building 'dfftpack' library
error: library dfftpack has Fortran sources but no Fortran compiler found
```
What else should I do? | The following worked for me:
`sudo apt-get install gfortran`
on my system:
Ubuntu 15.10 (Linux 4.2.0-19-generic #23-Ubuntu x86\_64 x86\_64 x86\_64 GNU/Linux) |
How to find the length of a leading sequence in a string? | 29,586,698 | 3 | 2015-04-12T06:32:40Z | 29,586,719 | 7 | 2015-04-12T06:36:45Z | [
"python",
"string"
] | I'd like to count the number of leading spaces in a string. What's the most Pythonic way of doing this?
```
>>>F(' ' * 5 + 'a')
5
```
(update) Here are timings of several of the answers:
```
import timeit
>>> timeit.timeit("s.index(re.search(r'\S',s).group())", number=10000, setup="import re;s=' a'")
0.027384042739868164
>>> timeit.timeit("len([i for i in itertools.takewhile(str.isspace,s)])", number=10000, setup="import itertools;s=' a'")
0.025166034698486328
>>> timeit.timeit("next(idx for idx,val in enumerate(s) if val != ' ')", number=10000, setup="s=' a'")
0.028306961059570312
>>> timeit.timeit("F(' a')", number=10000, setup="def F(s): return len(s)-len(s.lstrip(' '))")
0.0051808357238769531
``` | Using [`re` module](https://docs.python.org/2/howto/regex.html)
```
>>> s
' a'
>>> import re
>>> s.index(re.search(r'\S',s).group())
5
```
Using [`itertools`](https://docs.python.org/2/library/itertools.html#itertools.takewhile)
```
>>> import itertools
>>> len([i for i in itertools.takewhile(str.isspace,s)])
5
```
The brute force way
```
>>> def F(s):
... for i in s:
... if i!=' ':
... return s.index(i)
...
>>> F(s)
5
``` |
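Going by the timings in the question, the `lstrip`-based version is the fastest of the four; restated here as a standalone, checkable snippet:

```
def leading_spaces(s):
    # the length difference before/after stripping leading spaces
    return len(s) - len(s.lstrip(' '))

assert leading_spaces(' ' * 5 + 'a') == 5
assert leading_spaces('a') == 0
assert leading_spaces('') == 0
```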
How do I install Hadoop and Pydoop on a fresh Ubuntu instance | 29,588,595 | 4 | 2015-04-12T10:51:47Z | 30,322,610 | 9 | 2015-05-19T10:08:07Z | [
"python",
"ubuntu",
"hadoop",
"amazon-web-services"
] | Most of the setup instructions I see are verbose. Is there a near script-like set of commands that we can just execute to set up Hadoop and Pydoop on an Ubuntu instance on Amazon EC2? | Another solution would be to use Juju (Ubuntu's service orchestration framework).
First install the Juju client on your standard computer:
```
sudo add-apt-repository ppa:juju/stable
sudo apt-get update && sudo apt-get install juju-core
```
(instructions for MacOS and Windows are also available [here](https://jujucharms.com/docs/stable/getting-started))
Then generate a configuration file
```
juju generate-config
```
And modify it with your preferred cloud credentials (AWS, Azure, GCE...). Based on the naming for m3.medium, I assume you use AWS hence follow [these instructions](https://jujucharms.com/docs/stable/config-aws)
Note: The above has to be done only once.
Now bootstrap
```
juju bootstrap amazon
```
Deploy a GUI (optional) like the demo available on the website
```
juju deploy --to 0 juju-gui && juju expose juju-gui
```
You'll find the URL of the GUI and password with:
```
juju api-endpoints | cut -f1 -d":"
cat ~/.juju/environments/amazon.jenv | grep pass
```
Note that the above steps are preliminary to any Juju deployment, and can be re-used every time you want to spin up the environment.
Now comes your use case with Hadoop. You have several options.
1. Just deploy 1 node of Hadoop
```
juju deploy --constraints "cpu-cores=2 mem=4G root-disk=20G" hadoop
```
You can track the deployment with
```
juju debug-log
```
and get info about the new instances with
```
juju status
```
This is the only command you'll need to deploy Hadoop (you could consider Juju as an evolution of apt for complex systems)
2. Deploy a cluster of 3 nodes with HDFS and MapReduce
```
juju deploy hadoop hadoop-master
juju deploy hadoop hadoop-slavecluster
juju add-unit -n 2 hadoop-slavecluster
juju add-relation hadoop-master:namenode hadoop-slavecluster:datanode
juju add-relation hadoop-master:resourcemanager hadoop-slavecluster:nodemanager
```
3. Scale out usage (separate HDFS & MapReduce, experimental)
```
juju deploy hadoop hdfs-namenode
juju deploy hadoop hdfs-datacluster
juju add-unit -n 2 hdfs-datacluster
juju add-relation hdfs-namenode:namenode hdfs-datacluster:datanode
juju deploy hadoop mapred-resourcemanager
juju deploy hadoop mapred-taskcluster
juju add-unit -n 2 mapred-taskcluster
juju add-relation mapred-resourcemanager:mapred-namenode hdfs-namenode:namenode
juju add-relation mapred-taskcluster:mapred-namenode hdfs-namenode:namenode
juju add-relation mapred-resourcemanager:resourcemanager mapred-taskcluster:nodemanager
```
For Pydoop, you'll have to deploy it manually as in the first answer (you have access to the Juju instances via "juju ssh "), or you can write a "charm" (a method for Juju to learn how to deploy pydoop). |
Django: how to check if username already exists | 29,588,808 | 4 | 2015-04-12T11:17:53Z | 29,588,913 | 8 | 2015-04-12T11:28:54Z | [
"python",
"django",
"validation",
"registration",
"username"
] | I am not a very advanced user of Django. I have seen many different methods online, but they are all for modified models or too complicated for me to understand.
I am reusing the `UserCreationForm` in my `MyRegistrationForm`
```
class MyRegistrationForm(UserCreationForm):
email = forms.EmailField(required=True)
class Meta:
model = User
fields = ('username', 'email', 'password1', 'password2')
def save(self, commit=True):
user = super(MyRegistrationForm, self).save(commit=False)
user.email = self.cleaned_data['email']
user.set_password(self.cleaned_data["password1"])
if commit:
user.save()
return user
```
I struggle to understand or find a way to check whether the username that the user enters is already taken.
So I just use this to redirect to an HTML page which says the username is bad or the passwords do not match:
```
def register_user(request):
if request.method == 'POST':
form = MyRegistrationForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/accounts/register_success')
else:
return render_to_response('invalid_reg.html')
args = {}
args.update(csrf(request))
args['form'] = MyRegistrationForm()
print args
return render_to_response('register.html', args)
```
Here is my registration template (if needed):
```
{% extends "base.html" %}
{% block content %}
<section>
<h2 style="text-align: center">Register</h2>
<form action="/accounts/register/" method="post">{% csrf_token %}
<ul>
{{form.as_ul}}
</ul>
<input type="submit" value="Register" onclick="validateForm()"/>
</form>
</section>
{% endblock %}
```
But I need to raise some kind of exception or something like that before the user gets redirected. Maybe when the user presses register he/she would get an error/warning saying that the username is already taken? Is that possible? | You can use [exists](https://docs.djangoproject.com/en/1.8/ref/models/querysets/#django.db.models.query.QuerySet.exists):
```
if User.objects.filter(username=self.cleaned_data['username']).exists():
# Username exists
...
``` |
Plot width settings in ipython notebook | 29,589,119 | 16 | 2015-04-12T11:48:45Z | 29,612,806 | 19 | 2015-04-13T18:48:02Z | [
"python",
"matplotlib",
"ipython",
"ipython-notebook"
] | I've got the following plots:

It would look nicer if they have the same width. Do you have any idea how to do it in ipython notebook when I am using `%matplotlib inline`?
**UPDATE:**
To generate both figures I am using the following functions:
```
import numpy as np
import matplotlib.pyplot as plt
def show_plots2d(title, plots, points, xlabel = '', ylabel = ''):
"""
Shows 2D plot.
Arguments:
title : string
Title of the plot.
plots : array_like of pairs like array_like and array_like
List of pairs,
where first element is x axis and the second is the y axis.
points : array_like of pairs like integer and integer
List of pairs,
where first element is x coordinate
and the second is the y coordinate.
xlabel : string
Label of x axis
ylabel : string
Label of y axis
"""
xv, yv = zip(*plots)
y_exclNone = [y[y != np.array(None)] for y in yv]
y_mins, y_maxs = zip(*
[(float(min(y)), float(max(y))) for y in y_exclNone]
)
y_min = min(y_mins)
y_max = max(y_maxs)
y_amp = y_max - y_min
plt.figure().suptitle(title)
plt.axis(
[xv[0][0], xv[0][-1], y_min - 0.3 * y_amp, y_max + 0.3 * y_amp]
)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
for x, y in plots:
plt.plot(x, y)
for x, y in points:
plt.plot(x, y, 'bo')
plt.show()
def show_plot3d(title, x, y, z, xlabel = '', ylabel = '', zlabel = ''):
"""
Shows 3D plot.
Arguments:
title : string
Title of the plot.
x : array_like
List of x coordinates
y : array_like
List of y coordinates
z : array_like
List of z coordinates
xlabel : string
Label of x axis
ylabel : string
Label of y axis
zlabel : string
Label of z axis
"""
plt.figure().suptitle(title)
plt.pcolormesh(x, y, z)
plt.axis([x[0], x[-1], y[0], y[-1]])
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.colorbar().set_label(zlabel)
plt.show()
``` | If you use `%pylab inline` you can (on a new line) insert the following command:
```
%pylab inline
pylab.rcParams['figure.figsize'] = (10, 6)
```
This will set all figures in your document (unless otherwise specified) to be of the size `(10, 6)`, where the first entry is the width and the second is the height.
See this SO post for more details. <http://stackoverflow.com/a/17231361/1419668> |
Plot width settings in ipython notebook | 29,589,119 | 16 | 2015-04-12T11:48:45Z | 34,787,587 | 16 | 2016-01-14T10:46:17Z | [
"python",
"matplotlib",
"ipython",
"ipython-notebook"
] | I've got the following plots:

It would look nicer if they have the same width. Do you have any idea how to do it in ipython notebook when I am using `%matplotlib inline`?
**UPDATE:**
To generate both figures I am using the following functions:
```
import numpy as np
import matplotlib.pyplot as plt
def show_plots2d(title, plots, points, xlabel = '', ylabel = ''):
"""
Shows 2D plot.
Arguments:
title : string
Title of the plot.
plots : array_like of pairs like array_like and array_like
List of pairs,
where first element is x axis and the second is the y axis.
points : array_like of pairs like integer and integer
List of pairs,
where first element is x coordinate
and the second is the y coordinate.
xlabel : string
Label of x axis
ylabel : string
Label of y axis
"""
xv, yv = zip(*plots)
y_exclNone = [y[y != np.array(None)] for y in yv]
y_mins, y_maxs = zip(*
[(float(min(y)), float(max(y))) for y in y_exclNone]
)
y_min = min(y_mins)
y_max = max(y_maxs)
y_amp = y_max - y_min
plt.figure().suptitle(title)
plt.axis(
[xv[0][0], xv[0][-1], y_min - 0.3 * y_amp, y_max + 0.3 * y_amp]
)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
for x, y in plots:
plt.plot(x, y)
for x, y in points:
plt.plot(x, y, 'bo')
plt.show()
def show_plot3d(title, x, y, z, xlabel = '', ylabel = '', zlabel = ''):
"""
Shows 3D plot.
Arguments:
title : string
Title of the plot.
x : array_like
List of x coordinates
y : array_like
List of y coordinates
z : array_like
List of z coordinates
xlabel : string
Label of x axis
ylabel : string
Label of y axis
zlabel : string
Label of z axis
"""
plt.figure().suptitle(title)
plt.pcolormesh(x, y, z)
plt.axis([x[0], x[-1], y[0], y[-1]])
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.colorbar().set_label(zlabel)
plt.show()
``` | If you're not in an ipython notebook (like the OP), you can also just declare the size when you declare the figure:
```
width = 12
height = 12
plt.figure(figsize=(width, height))
``` |
How should I handle inclusive ranges in Python? | 29,596,045 | 29 | 2015-04-12T23:55:32Z | 29,596,068 | 12 | 2015-04-12T23:59:12Z | [
"python",
"range",
"slice"
] | I am working in a domain in which ranges are conventionally described inclusively. I have human-readable descriptions such as `from A to B`, which represent ranges that include both end points - e.g. `from 2 to 4` means `2, 3, 4`.
What is the best way to work with these ranges in Python code? The following code works to generate inclusive ranges of integers, but I also need to perform inclusive slice operations:
```
def inclusive_range(start, stop, step):
return range(start, (stop + 1) if step >= 0 else (stop - 1), step)
```
The only complete solution I see is to explicitly use `+ 1` (or `- 1`) every time I use `range` or slice notation (e.g. `range(A, B + 1)`, `l[A:B+1]`, `range(B, A - 1, -1)`). Is this repetition really the best way to work with inclusive ranges?
**Edit:** Thanks to L3viathan for answering. Writing an `inclusive_slice` function to complement `inclusive_range` is certainly an option, although I would probably write it as follows:
```
def inclusive_slice(start, stop, step):
...
return slice(start, (stop + 1) if step >= 0 else (stop - 1), step)
```
`...` here represents code to handle negative indices, which are not straightforward when used with slices - note, for example, that L3viathan's function gives incorrect results if `slice_to == -1`.
However, it seems that an `inclusive_slice` function would be awkward to use - is `l[inclusive_slice(A, B)]` really any better than `l[A:B+1]`?
Is there any better way to handle inclusive ranges?
**Edit 2:** Thank you for the new answers. I agree with Francis and Corley that changing the meaning of slice operations, either globally or for certain classes, would lead to significant confusion. I am therefore now leaning towards writing an `inclusive_slice` function.
To answer my own question from the previous edit, I have come to the conclusion that using such a function (e.g. `l[inclusive_slice(A, B)]`) would be better than manually adding/subtracting 1 (e.g. `l[A:B+1]`), since it would allow edge cases (such as `B == -1` and `B == None`) to be handled in a single place. Can we reduce the awkwardness in using the function?
**Edit 3:** I have been thinking about how to improve the usage syntax, which currently looks like `l[inclusive_slice(1, 5, 2)]`. In particular, it would be good if the creation of an inclusive slice resembled standard slice syntax. In order to allow this, instead of `inclusive_slice(start, stop, step)`, there could be a function `inclusive` that takes a slice as a parameter. The ideal usage syntax for `inclusive` would be line `1`:
```
l[inclusive(1:5:2)] # 1
l[inclusive(slice(1, 5, 2))] # 2
l[inclusive(s_[1:5:2])] # 3
l[inclusive[1:5:2]] # 4
l[1:inclusive(5):2] # 5
```
Unfortunately this is not permitted by Python, which only allows the use of `:` syntax within `[]`. `inclusive` would therefore have to be called using either syntax `2` or `3` (where `s_` acts like [the version provided by numpy](http://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html)).
Other possibilities are to make `inclusive` into an object with `__getitem__`, permitting syntax `4`, or to apply `inclusive` only to the `stop` parameter of the slice, as in syntax `5`. Unfortunately I do not believe the latter can be made to work since `inclusive` requires knowledge of the `step` value.
Of the workable syntaxes (the original `l[inclusive_slice(1, 5, 2)]`, plus `2`, `3` and `4`), which would be the best to use? Or is there another, better option?
**Final Edit:** Thank you all for the replies and comments, this has been very interesting. I have always been a fan of Python's "one way to do it" philosophy, but this issue has been caused by a conflict between Python's "one way" and the "one way" prescribed by the problem domain. I have definitely gained some appreciation for [TIMTOWTDI](http://en.wikipedia.org/wiki/There%27s_more_than_one_way_to_do_it) in language design.
For giving the first and highest-voted answer, I award the bounty to L3viathan. | Write an additional function for inclusive slice, and use that instead of slicing. While it would be possible to e.g. subclass list and implement a `__getitem__` reacting to a slice object, I would advise against it, since your code will behave contrary to expectation for anyone but you â and probably to you, too, in a year.
`inclusive_slice` could look like this:
```
def inclusive_slice(myList, slice_from=None, slice_to=None, step=1):
if slice_to is not None:
slice_to += 1 if step > 0 else -1
if slice_to == 0:
slice_to = None
return myList[slice_from:slice_to:step]
```
What I would do personally is just use the "complete" solution you mentioned (`range(A, B + 1)`, `l[A:B+1]`) and comment well.
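Restating the helper above so this snippet runs on its own, a few spot checks against plain slicing (including a negative step):

```
def inclusive_slice(myList, slice_from=None, slice_to=None, step=1):
    if slice_to is not None:
        slice_to += 1 if step > 0 else -1
    if slice_to == 0:
        slice_to = None
    return myList[slice_from:slice_to:step]

l = list(range(10))
assert inclusive_slice(l, 2, 4) == [2, 3, 4]        # both endpoints included
assert inclusive_slice(l, 1, 5, 2) == [1, 3, 5]     # with a step
assert inclusive_slice(l, 4, 2, -1) == [4, 3, 2]    # reversed
```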
How should I handle inclusive ranges in Python? | 29,596,045 | 29 | 2015-04-12T23:55:32Z | 29,700,201 | 7 | 2015-04-17T13:11:56Z | [
"python",
"range",
"slice"
] | I am working in a domain in which ranges are conventionally described inclusively. I have human-readable descriptions such as `from A to B`, which represent ranges that include both end points - e.g. `from 2 to 4` means `2, 3, 4`.
What is the best way to work with these ranges in Python code? The following code works to generate inclusive ranges of integers, but I also need to perform inclusive slice operations:
```
def inclusive_range(start, stop, step):
return range(start, (stop + 1) if step >= 0 else (stop - 1), step)
```
The only complete solution I see is to explicitly use `+ 1` (or `- 1`) every time I use `range` or slice notation (e.g. `range(A, B + 1)`, `l[A:B+1]`, `range(B, A - 1, -1)`). Is this repetition really the best way to work with inclusive ranges?
**Edit:** Thanks to L3viathan for answering. Writing an `inclusive_slice` function to complement `inclusive_range` is certainly an option, although I would probably write it as follows:
```
def inclusive_slice(start, stop, step):
...
return slice(start, (stop + 1) if step >= 0 else (stop - 1), step)
```
`...` here represents code to handle negative indices, which are not straightforward when used with slices - note, for example, that L3viathan's function gives incorrect results if `slice_to == -1`.
However, it seems that an `inclusive_slice` function would be awkward to use - is `l[inclusive_slice(A, B)]` really any better than `l[A:B+1]`?
Is there any better way to handle inclusive ranges?
**Edit 2:** Thank you for the new answers. I agree with Francis and Corley that changing the meaning of slice operations, either globally or for certain classes, would lead to significant confusion. I am therefore now leaning towards writing an `inclusive_slice` function.
To answer my own question from the previous edit, I have come to the conclusion that using such a function (e.g. `l[inclusive_slice(A, B)]`) would be better than manually adding/subtracting 1 (e.g. `l[A:B+1]`), since it would allow edge cases (such as `B == -1` and `B == None`) to be handled in a single place. Can we reduce the awkwardness in using the function?
**Edit 3:** I have been thinking about how to improve the usage syntax, which currently looks like `l[inclusive_slice(1, 5, 2)]`. In particular, it would be good if the creation of an inclusive slice resembled standard slice syntax. In order to allow this, instead of `inclusive_slice(start, stop, step)`, there could be a function `inclusive` that takes a slice as a parameter. The ideal usage syntax for `inclusive` would be line `1`:
```
l[inclusive(1:5:2)] # 1
l[inclusive(slice(1, 5, 2))] # 2
l[inclusive(s_[1:5:2])] # 3
l[inclusive[1:5:2]] # 4
l[1:inclusive(5):2] # 5
```
Unfortunately this is not permitted by Python, which only allows the use of `:` syntax within `[]`. `inclusive` would therefore have to be called using either syntax `2` or `3` (where `s_` acts like [the version provided by numpy](http://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html)).
Other possibilities are to make `inclusive` into an object with `__getitem__`, permitting syntax `4`, or to apply `inclusive` only to the `stop` parameter of the slice, as in syntax `5`. Unfortunately I do not believe the latter can be made to work since `inclusive` requires knowledge of the `step` value.
Of the workable syntaxes (the original `l[inclusive_slice(1, 5, 2)]`, plus `2`, `3` and `4`), which would be the best to use? Or is there another, better option?
**Final Edit:** Thank you all for the replies and comments, this has been very interesting. I have always been a fan of Python's "one way to do it" philosophy, but this issue has been caused by a conflict between Python's "one way" and the "one way" prescribed by the problem domain. I have definitely gained some appreciation for [TIMTOWTDI](http://en.wikipedia.org/wiki/There%27s_more_than_one_way_to_do_it) in language design.
For giving the first and highest-voted answer, I award the bounty to L3viathan. | Since in Python, the ending index is always exclusive, it's worth considering always using the "Python-convention" values internally. This way, you will save yourself from mixing up the two in your code.
Only ever deal with the "external representation" through dedicated conversion subroutines:
```
def text2range(text):
    m = re.match(r"from (\d+) to (\d+)", text)
    start, end = int(m.group(1)), int(m.group(2)) + 1
    return start, end
def range2text(start, end):
    print "from %d to %d" % (start, end - 1)
```
Alternatively, you can mark the variables holding the "unusual" representation with the [true Hungarian notation](http://www.joelonsoftware.com/articles/Wrong.html). |
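A Python 3 version of the two conversion subroutines, with a round-trip check (the text format is the one from the question):

```
import re

def text2range(text):
    # parse the inclusive human-readable form into Python's half-open convention
    m = re.match(r"from (\d+) to (\d+)", text)
    return int(m.group(1)), int(m.group(2)) + 1

def range2text(start, end):
    # convert back: the exclusive end becomes inclusive again
    return "from %d to %d" % (start, end - 1)

start, end = text2range("from 2 to 4")
assert list(range(start, end)) == [2, 3, 4]
assert range2text(start, end) == "from 2 to 4"
```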
import check_arrays from sklearn | 29,596,237 | 14 | 2015-04-13T00:26:17Z | 29,616,386 | 16 | 2015-04-13T22:51:45Z | [
"python",
"scikit-learn",
"svm"
] | I'm trying to use an SVM function from the scikit-learn package for Python but I get the error message:
```
from sklearn.utils.validation import check_arrays
```
> ImportError: cannot import name 'check\_arrays'
I'm using Python 3.4. Can anyone give me some advice? Thanks in advance. | This method was removed in 0.16, replaced by a (very different) `check_array` function.
You are likely getting this error because you didn't upgrade from 0.15 to 0.16 properly. [Or because you relied on a not-really-public function in sklearn]. See <http://scikit-learn.org/dev/install.html#canopy-and-anaconda-for-all-supported-platforms> .
If you installed using anaconda / conda, you should use the conda mechanism to upgrade, not pip. Otherwise old .pyc files might remain in your folder. |
How to uninstall mini conda? python | 29,596,350 | 11 | 2015-04-13T00:43:47Z | 29,616,442 | 14 | 2015-04-13T22:58:15Z | [
"python",
"pip",
"uninstall",
"conda",
"miniconda"
] | I've installed the conda package as such:
```
$ wget http://bit.ly/miniconda
$ bash miniconda
$ conda install numpy pandas scipy matplotlib scikit-learn nltk ipython-notebook seaborn
```
I want to uninstall it because it's messing up my pips and environment.
* **How do I uninstall conda totally?**
* **Will it uninstall also my pip managed packages? If so, is there a way to uninstall conda safely without uninstalling packages managed by pip?** | In order to [uninstall miniconda](http://docs.continuum.io/anaconda/install.html#id6), simply remove the `miniconda` folder,
```
rm -r ~/miniconda/
```
this should not remove any of your pip installed packages (but you should check the contents of the `~/miniconda` folder to confirm).
To avoid conflicts between different Python environments, you can use `virtualenv`. In particular, with miniconda, the following workflow could be used,
```
$ wget http://bit.ly/miniconda
$ bash miniconda
$ conda env remove --yes -n new_env # remove the environment new_env if it exists (optional)
$ conda create --yes -n new_env pip numpy pandas scipy matplotlib scikit-learn nltk ipython-notebook seaborn python=2
$ activate new_env
$ # pip install modules if needed, run python scripts, etc
# everything will be installed in the new_env
# located in ~/miniconda/envs/new_env
$ deactivate
``` |
How to pip install cairocffi? | 29,596,426 | 6 | 2015-04-13T00:54:39Z | 29,596,525 | 8 | 2015-04-13T01:10:25Z | [
"python",
"install",
"pip",
"cairo",
"python-cffi"
] | **How do I install `cairocffi` through `pip`?**
`cairocffi` is a CFFI-based drop-in replacement for `Pycairo` <https://github.com/SimonSapin/cairocffi>.
I'm trying to install it on Ubuntu 14.04:
```
alvas@ubi:~$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"
NAME="Ubuntu"
VERSION="14.04.2 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.2 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
```
I've tried installing with the standard pip command but I get this:
```
$ sudo pip install cairocffi
The directory '/home/alvas/.cache/pip/log' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/alvas/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/alvas/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting cairocffi
Downloading cairocffi-0.6.tar.gz (75kB)
100% |████████████████████████████████| 77kB 34kB/s
Collecting cffi>=0.6 (from cairocffi)
Downloading cffi-0.9.2.tar.gz (209kB)
100% |████████████████████████████████| 212kB 97kB/s
Requirement already satisfied (use --upgrade to upgrade): pycparser in /usr/local/lib/python3.4/dist-packages (from cffi>=0.6->cairocffi)
Installing collected packages: cffi, cairocffi
Running setup.py install for cffi
Complete output from command /usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-d3kjzf__/cffi/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-ll323a3c-record/install-record.txt --single-version-externally-managed --compile:
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.4
creating build/lib.linux-x86_64-3.4/cffi
copying cffi/commontypes.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/lock.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/api.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/verifier.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/__init__.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/cparser.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/gc_weakref.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/model.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.4/cffi
running build_ext
building '_cffi_backend' extension
creating build/temp.linux-x86_64-3.4
creating build/temp.linux-x86_64-3.4/c
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -DUSE__THREAD -I/usr/include/ffi -I/usr/include/libffi -I/usr/include/python3.4m -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.4/c/_cffi_backend.o
c/_cffi_backend.c:13:17: fatal error: ffi.h: No such file or directory
#include <ffi.h>
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-d3kjzf__/cffi/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-ll323a3c-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-d3kjzf__/cffi
```
I've manually checked the permissions and realized that there's no write access. **Why is that so? And why isn't sudo working to override the permissions?**
```
$ ls -la .cache/pip/log/
total 60
drwxrwxr-x 2 alvas alvas 4096 Feb 3 10:51 .
drwx------ 4 alvas alvas 4096 Apr 12 23:16 ..
-rw-rw-r-- 1 alvas alvas 49961 Apr 12 23:18 debug.log
```
When I tried `sudo -H pip install cairocffi`, I got:
```
sudo -H pip install cairocffi
Collecting cairocffi
Using cached cairocffi-0.6.tar.gz
Collecting cffi>=0.6 (from cairocffi)
Downloading cffi-0.9.2.tar.gz (209kB)
    100% |████████████████████████████████| 212kB 29kB/s
Requirement already satisfied (use --upgrade to upgrade): pycparser in /usr/local/lib/python3.4/dist-packages (from cffi>=0.6->cairocffi)
Installing collected packages: cffi, cairocffi
Running setup.py install for cffi
Complete output from command /usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-2sv6pbsp/cffi/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-xk4kkjrj-record/install-record.txt --single-version-externally-managed --compile:
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.4
creating build/lib.linux-x86_64-3.4/cffi
copying cffi/commontypes.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/lock.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/api.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/verifier.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/__init__.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/cparser.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/gc_weakref.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/model.py -> build/lib.linux-x86_64-3.4/cffi
copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.4/cffi
running build_ext
building '_cffi_backend' extension
creating build/temp.linux-x86_64-3.4
creating build/temp.linux-x86_64-3.4/c
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -DUSE__THREAD -I/usr/include/ffi -I/usr/include/libffi -I/usr/include/python3.4m -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.4/c/_cffi_backend.o
c/_cffi_backend.c:13:17: fatal error: ffi.h: No such file or directory
#include <ffi.h>
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-2sv6pbsp/cffi/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-xk4kkjrj-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-2sv6pbsp/cffi
```
As @MattDMo suggested, I've tried `apt-get install libffi`, but it still didn't work out:
```
alvas@ubi:~$ sudo apt-get install libffi libffi-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package libffi
```
But there isn't any `libffi` package in the package manager, so I've tried `libffi-dev`:
```
alvas@ubi:~$ sudo apt-get install libffi-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
libffi-dev
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 99.8 kB of archives.
After this operation, 323 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/main libffi-dev amd64 3.1~rc1+r3.0.13-12 [99.8 kB]
Fetched 99.8 kB in 1s (76.3 kB/s)
Selecting previously unselected package libffi-dev:amd64.
(Reading database ... 492855 files and directories currently installed.)
Preparing to unpack .../libffi-dev_3.1~rc1+r3.0.13-12_amd64.deb ...
Unpacking libffi-dev:amd64 (3.1~rc1+r3.0.13-12) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Processing triggers for doc-base (0.10.5) ...
Processing 1 added doc-base file...
Processing triggers for install-info (5.2.0.dfsg.1-2) ...
Setting up libffi-dev:amd64 (3.1~rc1+r3.0.13-12) ...
```
It installs `libffi-dev` successfully but `cairoffi` is still not installing:
```
alvas@ubi:~$ sudo -H pip install cairoffi
Collecting cairoffi
Could not find a version that satisfies the requirement cairoffi (from versions: )
No matching distribution found for cairoffi
alvas@ubi:~$ sudo -H pip3 install cairoffi
Collecting cairoffi
Could not find a version that satisfies the requirement cairoffi (from versions: )
No matching distribution found for cairoffi
``` | It's right in the error message:
```
No package 'libffi' found
```
You'll need to install **`libffi`** and **`libffi-dev`** through your distro's package manager (`yum`, `apt-get`, whatever) before the `pip` installation will work. Their names may vary slightly from platform to platform.
Python regex explanation needed - $ character usage | 29,599,122 | 5 | 2015-04-13T06:33:44Z | 29,599,140 | 8 | 2015-04-13T06:35:09Z | [
"python",
"regex"
] | My apologies for a completely newbie question. I did try searching stackoverflow first before posting this question.
I am trying to learn regex using Python from diveintopython3.net. While fiddling with the examples, I failed to understand one particular output for a regex search (shown below):
```
>>> pattern = 'M?M?M?$'
>>> re.search(pattern,'MMMMmmmmm')
<_sre.SRE_Match object at 0x7f0aa8095168>
```
Why does the above regex pattern match the input text? My understanding is that the $ character should match only at the end of the string. But the input text ends with `'mmmm'`. So I thought the pattern should not match.
My Python version is:
```
Python 3.3.2 (default, Dec 4 2014, 12:49:00)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-7)] on linux
```
EDIT: Attached a screenshot from Debuggex. | > Why does the above regex pattern match the input text?
Because you made the preceding `M`'s optional: `M?` means an optional `M`, which may or may not be present. So the above regex `'M?M?M?$'` can match just the zero-width end-of-string boundary. Hence you got a match.
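To see this concretely, here is a small interactive check (my own sketch, not part of the original answer) showing that the match is empty and sits at the very end of the string:

```python
import re

# All three M's are optional, so the pattern can match the empty string
# right before the end-of-string anchor:
m = re.search('M?M?M?$', 'MMMMmmmmm')
print(m.span())         # (9, 9): the zero-width position at the end
print(repr(m.group()))  # '': the match consumes no characters
```

So `re.search` returns a match object representing an empty match at position 9, not a match of any `M` characters.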
regex.sub() gives different results to re.sub() | 29,602,177 | 14 | 2015-04-13T09:38:37Z | 29,602,793 | 13 | 2015-04-13T10:09:36Z | [
"python",
"regex",
"python-3.x"
] | I work with [Czech](http://en.wikipedia.org/wiki/Czechs) accented text in Python 3.4.
Calling [`re.sub()`](https://docs.python.org/3.4/library/re.html#re.sub) to perform substitution by regex on an accented sentence works well, but using a regex compiled with [`re.compile()`](https://docs.python.org/3.4/library/re.html#re.compile) and then calling [`regex.sub()`](https://docs.python.org/3.4/library/re.html#re.regex.sub) fails.
Here is the case, where I use the same arguments for [`re.sub()`](https://docs.python.org/3.4/library/re.html#re.sub) and [`regex.sub()`](https://docs.python.org/3.4/library/re.html#re.regex.sub)
```
import re
pattern = r'(?<!\*)(Poplatn[ií]\w+ da[nň]\w+)'
flags = re.I|re.L
compiled = re.compile(pattern, flags)
text = 'Poplatníkem daně z pozemků je vlastník pozemku'
mark = r'**\1**' # wrap 1st matching group in double stars
print(re.sub(pattern, mark, text, flags))
# outputs: **Poplatníkem daně** z pozemků je vlastník pozemku
# substitution works
print(compiled.sub(mark, text))
# outputs: Poplatníkem daně z pozemků je vlastník pozemku
# substitution fails
```
I believe that the reason is accents, because for a non-accented sentence [`re.sub()`](https://docs.python.org/3.4/library/re.html#re.sub) and [`regex.sub()`](https://docs.python.org/3.4/library/re.html#re.regex.sub) work identically.
But it seems to me like a bug, because passing the same arguments returns different results, which should not happen. This topic is complicated by different platforms and locales, so it may not be reproducible on your system. Here is a screenshot of my console.

Do you see any fault in my code, or should I report it as a bug? | As [Padraic Cunningham figured out](http://stackoverflow.com/a/29602850/908494), this is not actually a bug.
However, it is *related* to a bug which you didn't run into, and to you using a flag you probably shouldn't be using, so I'll leave my earlier answer below, even though his is the right answer to your problem.
---
There's a recent-ish change (somewhere between 3.4.1 and 3.4.3, and between 2.7.3 and 2.7.8) that affects this. Before that change, you can't even *compile* that pattern without raising an `OverflowError`.
More importantly, why are you using `re.L`? The `re.L` mechanism does not mean "use the Unicode rules for my locale", it means "use some unspecified non-Unicode rules that only really make sense for Latin-1-derived locales and may not work right on Windows". Or, as [the docs](https://docs.python.org/3/library/re.html#re.L) put it:
> Make `\w`, `\W`, `\b`, `\B`, `\s` and `\S` dependent on the current locale. **The use of this flag is discouraged as the locale mechanism is very unreliable**, and it only handles one “culture” at a time anyway; you should use Unicode matching instead, which is the default in Python 3 for Unicode (str) patterns.
See [bug #22407](http://bugs.python.org/issue22407) and the linked python-dev thread for some recent discussion of this.
And if I remove the `re.L` flag, the code now compiles just fine on 3.4.1. (I also get the "right" results on both 3.4.1 and 3.4.3, but that's just a coincidence; I'm now intentionally not passing the screwy flag and screwing it up in the first version, and still accidentally not passing the screwy flag and screwing it up in the second, so they match…)
So, even if this were a bug, there's a good chance it would be closed WONTFIX. The resolution for #22407 was to deprecate `re.L` for non-`bytes` patterns in 3.5 and remove it in 3.6, so I doubt anyone's going to care about fixing bugs with it now. (Not to mention that `re` itself is theoretically going away in favor of [`regex`](https://pypi.python.org/pypi/regex) one of these decades… and IIRC, `regex` also deprecated the `L` flag unless you're using a `bytes` pattern and `re`-compatible mode.)
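As a side note (my addition, not part of the original answer): on current Python 3 releases (3.6 and later) this combination is rejected outright rather than merely discouraged, which makes the point above concrete:

```python
import re

# The LOCALE flag is only valid for bytes patterns on Python 3.6+;
# combining it with a str pattern raises ValueError at compile time:
try:
    re.compile(r'\w+', re.I | re.L)
except ValueError as exc:
    print(exc)  # e.g. "cannot use LOCALE flag with a str pattern"

# With a bytes pattern it is still accepted:
re.compile(rb'\w+', re.L)
```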
regex.sub() gives different results to re.sub() | 29,602,177 | 14 | 2015-04-13T09:38:37Z | 29,602,850 | 9 | 2015-04-13T10:12:59Z | [
"python",
"regex",
"python-3.x"
] | I work with [Czech](http://en.wikipedia.org/wiki/Czechs) accented text in Python 3.4.
Calling [`re.sub()`](https://docs.python.org/3.4/library/re.html#re.sub) to perform substitution by regex on an accented sentence works well, but using a regex compiled with [`re.compile()`](https://docs.python.org/3.4/library/re.html#re.compile) and then calling [`regex.sub()`](https://docs.python.org/3.4/library/re.html#re.regex.sub) fails.
Here is the case, where I use the same arguments for [`re.sub()`](https://docs.python.org/3.4/library/re.html#re.sub) and [`regex.sub()`](https://docs.python.org/3.4/library/re.html#re.regex.sub)
```
import re
pattern = r'(?<!\*)(Poplatn[ií]\w+ da[nň]\w+)'
flags = re.I|re.L
compiled = re.compile(pattern, flags)
text = 'Poplatníkem daně z pozemků je vlastník pozemku'
mark = r'**\1**' # wrap 1st matching group in double stars
print(re.sub(pattern, mark, text, flags))
# outputs: **Poplatníkem daně** z pozemků je vlastník pozemku
# substitution works
print(compiled.sub(mark, text))
# outputs: Poplatníkem daně z pozemků je vlastník pozemku
# substitution fails
```
I believe that the reason is accents, because for a non-accented sentence [`re.sub()`](https://docs.python.org/3.4/library/re.html#re.sub) and [`regex.sub()`](https://docs.python.org/3.4/library/re.html#re.regex.sub) work identically.
But it seems to me like a bug, because passing the same arguments returns different results, which should not happen. This topic is complicated by different platforms and locales, so it may not be reproducible on your system. Here is a screenshot of my console.

Do you see any fault in my code, or should I report it as a bug? | The last argument in the compile is `flags`, if you actually use `flags=flags` in the `re.sub` you will see the same behaviour:
```
compiled = re.compile(pattern, flags)
print(compiled)
text = 'Poplatníkem daně z pozemků je vlastník pozemku'
mark = r'**\1**' # wrap 1st matching group in double stars
r = re.sub(pattern, mark, text, flags=flags)
```
The fourth positional argument to `re.sub` is `count`, so that is why you see the difference.
**re.sub(pattern, repl, string, count=0, flags=0)**
**re.compile(pattern, flags=0)** |
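To make the mix-up visible, here is a tiny demonstration (my own sketch, not part of the original answer) of a flags value being silently swallowed as `count`:

```python
import re

# re.I happens to have the integer value 2, so passing it positionally
# is interpreted as count=2 rather than as a flag:
print(re.sub('a', 'b', 'aaaa', 2))           # bbaa (only 2 replacements)
print(re.sub('a', 'b', 'aaaa', re.I))        # bbaa (same thing!)
print(re.sub('a', 'b', 'aaaa', flags=re.I))  # bbbb (all 4 replaced)
```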
TemplateDoesNotExist in project folder django 1.8 | 29,610,085 | 2 | 2015-04-13T16:12:25Z | 29,610,501 | 16 | 2015-04-13T16:35:34Z | [
"python",
"django",
"django-templates",
"django-views",
"django-1.8"
] | I have structured my Django application (Django 1.8) as shown below.
When a template in app1 or app2 tries to extend the base.html at the project level, I get this error.
```
TemplateDoesNotExist at /
base.html
Error during template rendering
In template /myProject/project_folder/app1/templates/app1/base.html, error at line 1
{% extends "base.html" %}
```
Here is the structure of my project:
```
/projekt_folder
template
base.html
/app1
/template
base.html <-- {% extends "base.html" %}
/app2
/template
base.html <-- {% extends "base.html" %}
``` | You need to tell Django the additional location of your template folder (`projekt_folder/template`), which is not under an installed app. Add the following lines at the top of your settings file:
```
import os
PACKAGE_ROOT = os.path.abspath(os.path.dirname(__file__))
```
Then set `DIRS` in the `TEMPLATES` setting:
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(PACKAGE_ROOT, 'template')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
``` |
Why numpy/scipy is faster without OpenBLAS? | 29,616,487 | 4 | 2015-04-13T23:02:55Z | 29,638,143 | 8 | 2015-04-14T22:01:31Z | [
"python",
"performance",
"numpy",
"scipy",
"openblas"
] | I made two installations:
1. `brew install numpy` (and scipy) `--with-openblas`
2. Cloned GIT repositories (for numpy and scipy) and built them myself
Afterwards I cloned two handy scripts for verifying these libraries in a multi-threaded environment:
```
git clone https://gist.github.com/3842524.git
```
Then for each installation I'm executing `show_config`:
```
python -c "import scipy as np; np.show_config()"
```
It's all nice for installation 1:
```
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/opt/openblas/lib']
language = f77
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/opt/openblas/lib']
language = f77
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/opt/openblas/lib']
language = f77
blas_mkl_info:
NOT AVAILABLE
```
But for installation 2 things are not so bright:
```
lapack_opt_info:
extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
extra_compile_args = ['-msse3']
define_macros = [('NO_ATLAS_INFO', 3)]
blas_opt_info:
extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers']
define_macros = [('NO_ATLAS_INFO', 3)]
```
So it seems I failed to link OpenBLAS correctly. But it's fine for now; here are the performance results. All tests were performed on an iMac (Yosemite, i7-4790K, 4 cores, hyper-threaded).
First installation with OpenBLAS:
**numpy:**
```
OMP_NUM_THREADS=1 python test_numpy.py
FAST BLAS
version: 1.9.2
maxint: 9223372036854775807
dot: 0.126578998566 sec
OMP_NUM_THREADS=2 python test_numpy.py
FAST BLAS
version: 1.9.2
maxint: 9223372036854775807
dot: 0.0640147686005 sec
OMP_NUM_THREADS=4 python test_numpy.py
FAST BLAS
version: 1.9.2
maxint: 9223372036854775807
dot: 0.0360922336578 sec
OMP_NUM_THREADS=8 python test_numpy.py
FAST BLAS
version: 1.9.2
maxint: 9223372036854775807
dot: 0.0364527702332 sec
```
**scipy:**
```
OMP_NUM_THREADS=1 python test_scipy.py
cholesky: 0.0276656150818 sec
svd: 0.732437372208 sec
OMP_NUM_THREADS=2 python test_scipy.py
cholesky: 0.0182101726532 sec
svd: 0.441690778732 sec
OMP_NUM_THREADS=4 python test_scipy.py
cholesky: 0.0130400180817 sec
svd: 0.316107988358 sec
OMP_NUM_THREADS=8 python test_scipy.py
cholesky: 0.012854385376 sec
svd: 0.315939807892 sec
```
Second installation without OpenBLAS:
**numpy:**
```
OMP_NUM_THREADS=1 python test_numpy.py
slow blas
version: 1.10.0.dev0+3c5409e
maxint: 9223372036854775807
dot: 0.0371072292328 sec
OMP_NUM_THREADS=2 python test_numpy.py
slow blas
version: 1.10.0.dev0+3c5409e
maxint: 9223372036854775807
dot: 0.0215149879456 sec
OMP_NUM_THREADS=4 python test_numpy.py
slow blas
version: 1.10.0.dev0+3c5409e
maxint: 9223372036854775807
dot: 0.0146862030029 sec
OMP_NUM_THREADS=8 python test_numpy.py
slow blas
version: 1.10.0.dev0+3c5409e
maxint: 9223372036854775807
dot: 0.0141334056854 sec
```
**scipy:**
```
OMP_NUM_THREADS=1 python test_scipy.py
cholesky: 0.0109382152557 sec
svd: 0.32529540062 sec
OMP_NUM_THREADS=2 python test_scipy.py
cholesky: 0.00988121032715 sec
svd: 0.331357002258 sec
OMP_NUM_THREADS=4 python test_scipy.py
cholesky: 0.00916676521301 sec
svd: 0.318637990952 sec
OMP_NUM_THREADS=8 python test_scipy.py
cholesky: 0.00931282043457 sec
svd: 0.324427986145 sec
```
To my surprise, the second case is faster than the first. In case of scipy there is no increase in performance after adding more cores, but even one core is faster than 4 cores in OpenBLAS.
Does anyone have an idea why that is? | There are two obvious differences that might account for the discrepancy:
1. You are comparing two different versions of numpy. The OpenBLAS-linked version you installed using Homebrew is 1.9.2, whereas the one you built from source is 1.10.0.dev0+3c5409e.
2. Whilst the newer version is not linked against OpenBLAS, it is linked against Apple's [Accelerate Framework](https://developer.apple.com/library/mac/documentation/Accelerate/Reference/AccelerateFWRef/index.html), a different optimized BLAS implementation.
---
The reason why your test script still reports `slow blas` for the second case is due to an incompatibility with the newest versions of numpy. The script you are using tests whether numpy is linked against an optimised BLAS library by [checking for the presence of `numpy.core._dotblas`](https://gist.github.com/osdf/3842524#file-test_numpy-py-L6-L10):
```
try:
import numpy.core._dotblas
print 'FAST BLAS'
except ImportError:
print 'slow blas'
```
In older versions of numpy, this C module would only be compiled during the installation process if an optimized BLAS library was found. However, [`_dotblas` has been removed altogether in development versions > 1.10.0](https://github.com/numpy/numpy/blob/master/doc/release/1.10.0-notes.rst#dropped-support) (as mentioned in [this previous SO question](http://stackoverflow.com/q/29026976/1461210)), so the script will always report `slow blas` for these versions.
I've written an updated version of the numpy test script that reports the BLAS linkage correctly for the latest versions; [you can find it here](https://gist.github.com/alimuldal/eb0f4eea8af331b2a890). |
Is there a way to start android emulator in Travis CI build? | 29,622,597 | 3 | 2015-04-14T08:31:03Z | 29,625,997 | 7 | 2015-04-14T11:19:25Z | [
"android",
"python",
"unit-testing",
"adb",
"travis-ci"
] | I have a [Python wrapper library for adb](https://github.com/vmalyi/adb-lib) with unit tests which depend on an emulator or a real device (since they execute adb commands).
I also want to use Travis CI as the build environment and execute those unit tests for each build.
Is there a way to have an Android emulator available somehow in Travis CI, so that the unit tests can execute adb commands?
Thanks in advance! | According to the [Travis CI documentation](http://docs.travis-ci.com/user/languages/android/#How-to-Create-and-Start-an-Emulator), you can start an emulator with the following script in your `.travis.yml`:
```
# Emulator Management: Create, Start and Wait
before_script:
- echo no | android create avd --force -n test -t android-19 --abi armeabi-v7a
- emulator -avd test -no-skin -no-audio -no-window &
- android-wait-for-emulator
- adb shell input keyevent 82 &
```
Just specify the system image you need in `components`. |
Pytest where to store expected data | 29,627,341 | 3 | 2015-04-14T12:28:02Z | 29,631,801 | 9 | 2015-04-14T15:46:16Z | [
"python",
"py.test"
] | When testing a function, I need to pass parameters and check that the output matches the expected output.
It is easy when the function's response is just a small array or a one-line string which can be defined inside the test function, but suppose the function I test modifies a config file which can be huge, or the resulting array is several lines long if I define it explicitly. Where do I store that so my tests remain clean and easy to maintain?
Right now, if it is a string, I just put a file near the `.py` test and `open()` it inside the test:
```
def test_if_it_works():
with open('expected_asnwer_from_some_function.txt') as res_file:
expected_data = res_file.read()
input_data = ... # Maybe loaded from a file as well
assert expected_data == if_it_works(input_data)
```
I see many problems with such an approach, like the problem of keeping this file up to date. It looks bad as well.
I can probably make things better by moving this to a fixture:
```
@pytest.fixture
def expected_data():
with open('expected_asnwer_from_some_function.txt') as res_file:
expected_data = res_file.read()
return expected_data
@pytest.fixture
def input_data():
return '1,2,3,4'
def test_if_it_works(input_data, expected_data):
assert expected_data == if_it_works(input_data)
```
That just moves the problem to another place, and usually I need to test whether the function works for empty input, input with a single item, or multiple items, so I should create one big fixture including all three cases, or multiple fixtures. In the end the code gets quite messy.
If a function expects a complicated dictionary as input or gives back a dictionary of the same huge size, the test code becomes ugly:
```
@pytest.fixture
def input_data():
# It's just an example
return {['one_value': 3, 'one_value': 3, 'one_value': 3,
'anotherky': 3, 'somedata': 'somestring'],
['login': 3, 'ip_address': 32, 'value': 53,
'one_value': 3], ['one_vae': 3, 'password': 13, 'lue': 3]}
```
It's quite hard to read tests with such fixtures and keep them up to date.
### Update
After searching for a while I found a library which solved part of the problem for the case where, instead of big config files, I had large HTML responses. It's [betamax](https://github.com/sigmavirus24/betamax).
For easier usage I created a fixture:
```
import os

import pytest
import requests
from betamax import Betamax

@pytest.fixture
def session(request):
    session = requests.Session()
    recorder = Betamax(session)
    recorder.use_cassette(os.path.join(os.path.dirname(__file__), 'fixtures',
                                       request.function.__name__))
recorder.start()
request.addfinalizer(recorder.stop)
return session
```
So now in my tests I just use the `session` fixture, and every request I make is serialized automatically to the `fixtures/test_name.json` file, so the next time I execute the test, instead of doing a real HTTP request, the library loads it from the filesystem:
```
def test_if_response_is_ok(session):
r = session.get("http://google.com")
```
It's quite handy because in order to keep these fixtures up to date I just need to clean the `fixtures` folder and rerun my tests. | I had a similar problem once, where I had to test a configuration file against an expected file. Here's how I fixed it:
1. Create a folder with the same name as your test module and at the same location. Put all your expected files inside that folder.
```
test_foo/
expected_config_1.ini
expected_config_2.ini
test_foo.py
```
2. Create a fixture responsible for moving the contents of this folder to a temporary directory. I made use of the `tmpdir` fixture for this.
```
from __future__ import unicode_literals
from distutils import dir_util
from pytest import fixture
import os
@fixture
def datadir(tmpdir, request):
'''
Fixture responsible for searching a folder with the same name of test
module and, if available, moving all contents to a temporary directory so
tests can use them freely.
'''
filename = request.module.__file__
test_dir, _ = os.path.splitext(filename)
if os.path.isdir(test_dir):
dir_util.copy_tree(test_dir, bytes(tmpdir))
return tmpdir
```
3. Use your new fixture.
```
def test_foo(datadir):
expected_config_1 = datadir.join('expected_config_1.ini')
expected_config_2 = datadir.join('expected_config_2.ini')
```
Remember: `datadir` is just the same as the `tmpdir` fixture, plus the ability to work with your expected files placed into a folder with the very name of the test module.
Pretty-printing physical quantities with automatic scaling of SI prefixes | 29,627,796 | 10 | 2015-04-14T12:50:06Z | 29,749,228 | 8 | 2015-04-20T13:26:09Z | [
"python",
"units-of-measurement"
] | I am looking for an elegant way to pretty-print physical quantities with the most appropriate prefix (as in `12300 grams` are `12.3 kilograms`). A simple approach looks like this:
```
import numpy as np

def pprint_units(v, unit_str, num_fmt="{:.3f}"):
""" Pretty printer for physical quantities """
# prefixes and power:
    u_pres = [(-9, u'n'), (-6, u'µ'), (-3, u'm'), (0, ''),
(+3, u'k'), (+6, u'M'), (+9, u'G')]
if v == 0:
return num_fmt.format(v) + " " + unit_str
p = np.log10(1.0*abs(v))
p_diffs = np.array([(p - u_p[0]) for u_p in u_pres])
idx = np.argmin(p_diffs * (1+np.sign(p_diffs))) - 1
u_p = u_pres[idx if idx >= 0 else 0]
return num_fmt.format(v / 10.**u_p[0]) + " " + u_p[1] + unit_str
for v in [12e-6, 3.4, .123, 3452]:
print(pprint_units(v, 'g', "{: 7.2f}"))
# Prints:
#   12.00 µg
# 3.40 g
# 123.00 mg
# 3.45 kg
```
Looking over [units](https://pypi.python.org/pypi/units/) and [Pint](http://pint.readthedocs.org/en/latest/index.html), I could not find that functionality. Are there any other libraries which typeset SI units more comprehensively (to handle special cases like angles, temperatures, etc)? | I solved the same problem once, and IMHO with more elegance. No degrees or temperatures though.
```
import math

def sign(x, value=1):
"""Mathematical signum function.
:param x: Object of investigation
:param value: The size of the signum (defaults to 1)
:returns: Plus or minus value
"""
return -value if x < 0 else value
def prefix(x, dimension=1):
"""Give the number an appropriate SI prefix.
:param x: Too big or too small number.
:returns: String containing a number between 1 and 1000 and SI prefix.
"""
if x == 0:
return "0 "
l = math.floor(math.log10(abs(x)))
if abs(l) > 24:
l = sign(l, value=24)
div, mod = divmod(l, 3*dimension)
    return "%.3g %s" % (x * 10**(-l + mod), " kMGTPEZYyzafpnµm"[div])
```
[CommaCalc](https://github.com/pacholik/CommaCalc)
Degrees like that:
```
def intfloatsplit(x):
i = int(x)
f = x - i
return i, f
def prettydegrees(d):
degrees, rest = intfloatsplit(d)
minutes, rest = intfloatsplit(60*rest)
seconds = round(60*rest)
    return "{degrees}° {minutes}' {seconds}''".format(**locals())
```
**edit:**
Added dimension of the unit
```
>>> print(prefix(0.000009, 2))
9 m
>>> print(prefix(0.9, 2))
9e+05 m
```
The second output is not very pretty, I know. You may want to edit the formating string.
**edit:**
Parse inputs like `0.000009 m²`. Works on dimensions less than 10.
```
import unicodedata
def unitprefix(val):
"""Give the unit an appropriate SI prefix.
    :param val: Number and a unit, e.g. "0.000009 m²"
"""
xstr, unit = val.split(None, 2)
x = float(xstr)
try:
dimension = unicodedata.digit(unit[-1])
except ValueError:
dimension = 1
return prefix(x, dimension) + unit
``` |
Python get the smallest words between two characters | 29,632,847 | 3 | 2015-04-14T16:40:22Z | 29,632,905 | 7 | 2015-04-14T16:42:40Z | [
"python",
"regex",
"string"
] | How do I find the smallest string between two characters in Python?
I tried this to extract words between `[[` and `]]` and put them into a list:
```
clause_text ="Bien_id [[bien_id.name]] dossier [[dossier_id.name]] "
list_variables = re.findall(re.escape("[[")+"(.*)"+re.escape("]]"),clause_text)
print list_variables
```
In this case I need to get two variables in my list: `bien_id.name` and `dossier_id.name`.
But the result I get is one variable:
```
[u'bien_id.name]] dfdfdf [[dossier_id.name']
```
It means that it takes the longest match between the brackets. Can you please help me solve my problem? | You are looking for a non-greedy regex. Use `(.*?)` instead of `(.*)` |
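A minimal runnable sketch of that fix, using the variables from the question:

```
import re

clause_text = "Bien_id [[bien_id.name]] dossier [[dossier_id.name]] "
# (.*?) is non-greedy: it stops at the first "]]" instead of the last one.
list_variables = re.findall(re.escape("[[") + "(.*?)" + re.escape("]]"), clause_text)
print(list_variables)  # ['bien_id.name', 'dossier_id.name']
```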
Python reverse / inverse a mapping (but with multiple values for each key) | 29,633,848 | 4 | 2015-04-14T17:33:34Z | 29,633,896 | 7 | 2015-04-14T17:36:00Z | [
"python",
"dictionary",
"mapping",
"inverse"
] | This is really a variation on this question, but not a duplicate:
[Python reverse / inverse a mapping](http://stackoverflow.com/q/483666/992834)
Given a dictionary like so:
`mydict= { 'a': ['b', 'c'], 'd': ['e', 'f'] }`
How can one invert this dict to get:
`inv_mydict = { 'b':'a', 'c':'a', 'e':'d', 'f':'d' }`
Note that each value appears under only one key.
**Note**: I previously used the names `map = ...` and `dict = ...`. Reminder: don't use `map` and `dict` as variable names, as they shadow built-in functions — see the excellent comments and answers below :) | ### TL;DR
Use dictionary comprehension, like this
```
>>> my_map = { 'a': ['b', 'c'], 'd': ['e', 'f'] }
>>> {value: key for key in my_map for value in my_map[key]}
{'c': 'a', 'f': 'd', 'b': 'a', 'e': 'd'}
```
---
The above seen dictionary comprehension is functionally equivalent to the following looping structure which populates an empty dictionary
```
>>> inv_map = {}
>>> for key in my_map:
... for value in my_map[key]:
... inv_map[value] = key
...
>>> inv_map
{'c': 'a', 'f': 'd', 'b': 'a', 'e': 'd'}
```
**Note:** Using `map` shadows the built-in [`map`](https://docs.python.org/3/library/functions.html#map) function. So, don't use that as a variable name unless you know what you are doing.
---
**Other similar ways to do the same**
**Python 3.x**
You can use [`dict.items`](https://docs.python.org/3/library/stdtypes.html#dict.items), like this
```
>>> {value: key for key, values in my_map.items() for value in values}
{'c': 'a', 'f': 'd', 'b': 'a', 'e': 'd'}
```
We use the `items()` method here, which creates a view object over the dictionary that yields key-value pairs on iteration. So we just iterate over it and construct a new dictionary with the inverse mapping.
**Python 2.x**
You can use [`dict.iteritems`](https://docs.python.org/2/library/stdtypes.html#dict.iteritems) like this
```
>>> {value: key for key, values in my_map.iteritems() for value in values}
{'c': 'a', 'b': 'a', 'e': 'd', 'f': 'd'}
```
We don't prefer the `items()` method in 2.x, because it returns a list of key-value pairs. We don't want to construct a list just to iterate over it and build a new dictionary. That is why we prefer `iteritems()`, which returns an iterator object that yields a key-value pair on each iteration.
**Note:** The actual equivalent of Python 3.x's `items` would be Python 2.x's [`viewitems`](https://docs.python.org/2/library/stdtypes.html#dict.viewitems) method, which returns a view object. Read more about the view object in the official documentation, [here](https://docs.python.org/2/library/stdtypes.html#dictionary-view-objects).
---
## `iter*` vs `view*` methods in Python 2.x
The main difference between the `iter*` methods and the `view*` methods in Python 2.x is that the view objects reflect the current state of the dictionary. For example,
```
>>> d = {1: 2}
>>> iter_items = d.iteritems()
>>> view_items = d.viewitems()
```
now we add a new element to the dictionary
```
>>> d[2] = 3
```
If you try to check if `(2, 3)` (key-value pair) is in the `iter_items`, it will throw an error
```
>>> (2, 3) in iter_items
Traceback (most recent call last):
File "<input>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
```
but view object will reflect the current state of the dictionary. So, it will work fine
```
>>> (2, 3) in view_items
True
``` |
Django 1.9 deprecation warnings app_label | 29,635,765 | 39 | 2015-04-14T19:24:01Z | 29,703,136 | 29 | 2015-04-17T15:14:57Z | [
"python",
"django",
"deprecation-warning"
] | I've just updated to Django v1.8, and testing my local setup before updating my project and I've had a deprecation warning that I've never seen before, nor does it make any sense to me. I may be just overlooking something or misunderstanding the documentation.
```
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:6: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Difficulty doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Difficulty(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:21: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Zone doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Zone(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:49: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Boss doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Boss(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:79: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Item doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Item(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:14: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Category doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Category(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:36: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Comment doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Comment(ScoreMixin, ProfileMixin, models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:64: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Forum doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Forum(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:88: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Post doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Post(ScoreMixin, ProfileMixin, models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:119: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.CommentPoint doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class CommentPoint(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:127: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.TopicPoint doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class TopicPoint(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/auctionhouse/models.py:10: RemovedInDjango19Warning: Model class ankylosguild.apps.auctionhouse.models.Auction doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Auction(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/auctionhouse/models.py:83: RemovedInDjango19Warning: Model class ankylosguild.apps.auctionhouse.models.Bid doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Bid(models.Model):
```
Now this poses 3 questions for me.
1. According to the [documentation](https://docs.djangoproject.com/en/1.8/ref/models/options/#django.db.models.Options.app_label), `Options.app_label` isn't a requirement unless the model is outside of the application module, which in my case, it isn't. Secondly, this behaviour was deprecated in 1.7 anyway, so why is it even an issue?
2. The applications are all in the INSTALLED\_APPS tuple, so it surely can't be that?
3. Why would the applications not be loaded before they are called if everything is in the INSTALLED\_APPS tuple?
If I am indeed doing something wrong, what is the correct way of doing it as the docs don't really clear up what is causing this problem or how to rectify it. | As stated in the warning, this happens either :
* When you're using a model from an app that is not in `INSTALLED_APPS`;
* Or when you're using a model before its application is loaded.
Since you did list the app in the `INSTALLED_APPS` setting, this most likely means you're using a model before its application has been initialised.
Typically, this occurs when **apps.py** imports models at module level, e.g. `from .models import SomeModel` for use in an early signal handler (for example `post_migrate`).
Instead of referring to your models the classic way here, it is recommended to use [AppConfig.get\_model()](http://django.readthedocs.org/en/latest/ref/applications.html#methods).
Check your **apps.py** file for any model imports, and replace them using this API.
For example instead of :
```
# apps.py
from django.apps import AppConfig
from .models import MyModel
def do_stuff(sender, **kwargs):
MyModel.objects.get() # etc...
class MyAppConfig(AppConfig):
name = 'src.my_app_label'
def ready(self):
post_migrate.connect(do_stuff, sender=self)
```
Do this :
```
# apps.py
from django.apps import AppConfig
def do_stuff(sender, **kwargs):
mymodel = sender.get_model('MyModel')
mymodel.objects.get() # etc...
class MyAppConfig(AppConfig):
name = 'src.my_app_label'
def ready(self):
post_migrate.connect(do_stuff, sender=self)
```
Note this enforcement was introduced in bug [#21719](https://code.djangoproject.com/ticket/21719#no1). |
Django 1.9 deprecation warnings app_label | 29,635,765 | 39 | 2015-04-14T19:24:01Z | 31,370,311 | 40 | 2015-07-12T17:01:28Z | [
"python",
"django",
"deprecation-warning"
] | I've just updated to Django v1.8, and testing my local setup before updating my project and I've had a deprecation warning that I've never seen before, nor does it make any sense to me. I may be just overlooking something or misunderstanding the documentation.
```
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:6: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Difficulty doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Difficulty(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:21: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Zone doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Zone(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:49: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Boss doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Boss(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:79: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Item doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Item(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:14: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Category doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Category(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:36: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Comment doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Comment(ScoreMixin, ProfileMixin, models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:64: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Forum doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Forum(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:88: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Post doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Post(ScoreMixin, ProfileMixin, models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:119: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.CommentPoint doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class CommentPoint(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:127: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.TopicPoint doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class TopicPoint(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/auctionhouse/models.py:10: RemovedInDjango19Warning: Model class ankylosguild.apps.auctionhouse.models.Auction doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Auction(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/auctionhouse/models.py:83: RemovedInDjango19Warning: Model class ankylosguild.apps.auctionhouse.models.Bid doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Bid(models.Model):
```
Now this poses 3 questions for me.
1. According to the [documentation](https://docs.djangoproject.com/en/1.8/ref/models/options/#django.db.models.Options.app_label), `Options.app_label` isn't a requirement unless the model is outside of the application module, which in my case, it isn't. Secondly, this behaviour was deprecated in 1.7 anyway, so why is it even an issue?
2. The applications are all in the INSTALLED\_APPS tuple, so it surely can't be that?
3. Why would the applications not be loaded before they are called if everything is in the INSTALLED\_APPS tuple?
If I am indeed doing something wrong, what is the correct way of doing it as the docs don't really clear up what is causing this problem or how to rectify it. | Similar error. In my case the error was:
`RemovedInDjango19Warning: Model class django.contrib.sites.models.Site doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Site(models.Model):`
My solution was:
Added `'django.contrib.sites'` to `INSTALLED_APPS` |
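A minimal settings fragment for reference (the `SITE_ID` line is an assumption — the sites framework conventionally needs it, but your value may differ):

```
# settings.py (fragment)
INSTALLED_APPS = [
    # ...
    'django.contrib.sites',
]

SITE_ID = 1  # the sites framework looks up the current Site by this id
```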
Scatterplot without linear fit in seaborn | 29,637,150 | 8 | 2015-04-14T20:50:05Z | 29,644,094 | 15 | 2015-04-15T07:29:19Z | [
"python",
"matplotlib",
"seaborn"
] | I am wondering if there is a way to turn off the linear fit in seaborn's `lmplot`, or if there is an equivalent function that just produces the scatterplot.
Sure, I could also use matplotlib; however, I find the syntax and aesthetics in seaborn quite appealing. E.g., I want to produce the following plot
```
import seaborn as sns
sns.set(style="ticks")
df = sns.load_dataset("anscombe")
sns.lmplot("x", "y", data=df, hue='dataset')
```

Without the linear fit like so:
```
from itertools import cycle
import numpy as np
import matplotlib.pyplot as plt
color_gen = cycle(('blue', 'lightgreen', 'red', 'purple', 'gray', 'cyan'))
for lab in np.unique(df['dataset']):
plt.scatter(df.loc[df['dataset'] == lab, 'x'],
df.loc[df['dataset'] == lab, 'y'],
c=next(color_gen),
label=lab)
plt.legend(loc='best')
```
 | set `fit_reg` argument to `False`:
```
sns.lmplot("x", "y", data=df, hue='dataset', fit_reg=False)
``` |
Django - present current date and time in template | 29,637,768 | 2 | 2015-04-14T21:31:38Z | 29,637,791 | 8 | 2015-04-14T21:33:38Z | [
"python",
"django"
] | The title is pretty self-explanatory. How can one present the current date and time in a Django template? | Try using Django's built-in template tags and filters:
<https://docs.djangoproject.com/en/dev/ref/templates/builtins/?from=olddocs#now>
Example: `It is {% now "jS F Y H:i" %}` |
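When the timestamp is also needed outside templates, the equivalent in plain Python is `datetime.strftime` (a sketch of my own; note that `strftime` codes differ from Django's template date codes such as `"jS F Y H:i"`):

```
from datetime import datetime

# A fixed datetime is used here so the output is predictable;
# in practice you would call datetime.now().
stamp = datetime(2015, 4, 14, 21, 33).strftime("%Y-%m-%d %H:%M")
print(stamp)  # 2015-04-14 21:33
```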
How to set a ProtoBuf field which is an empty message in Python? | 29,643,295 | 5 | 2015-04-15T06:44:11Z | 29,651,069 | 7 | 2015-04-15T13:05:27Z | [
"python",
"protocol-buffers"
] | The following are the contents of the Google Protocol Buffer (.proto) file
```
message First
{
required uint32 field1 = 1;
optional MessageType1 request = 2;
}
message MessageType1
{
}
```
I want to set the MessageType1 field request. But I get this as an error:
```
AttributeError: Assignment not allowed to composite field "request" in protocol message object.
```
How to set the value of this empty message in Python? | Found this in the source code of the `Message` class in Protocol Buffers:
```
def SetInParent(self):
"""Mark this as present in the parent.
This normally happens automatically when you assign a field of a
sub-message, but sometimes you want to make the sub-message
present while keeping it empty. If you find yourself using this,
you may want to reconsider your design."""
```
So the way to set such an empty message is to call this function:
```
first.request.SetInParent()
``` |
Tornado framework. TypeError: 'Future' object is not callable | 29,645,115 | 8 | 2015-04-15T08:20:54Z | 29,645,882 | 14 | 2015-04-15T08:58:02Z | [
"python",
"asynchronous",
"frameworks",
"tornado",
"python-asyncio"
] | I've started to learn Tornado framework sometime ago. I've faced the lack of documentation for unexperienced users and checked also asyncio module docs.
So the problem is, that I have some simple code in asyncio:
```
import asyncio
@asyncio.coroutine
def compute(x, y):
print("Compute %s + %s ..." % (x, y))
yield from asyncio.sleep(1.0)
return x + y
@asyncio.coroutine
def print_sum(x, y):
result = yield from compute(x, y)
print("%s + %s = %s" % (x, y, result))
loop = asyncio.get_event_loop()
loop.run_until_complete(print_sum(1, 2))
loop.close()
```
And then I tried to do the same using the Tornado framework:
```
from tornado.ioloop import IOLoop
from tornado import gen
@gen.coroutine
def compute(x, y):
print("Compute %s + %s ..." % (x, y))
yield gen.sleep(1.0)
return (x+y)
@gen.coroutine
def print_sum(x, y):
result = yield compute(x, y)
print("%s + %s = %s" % (x, y, result))
IOLoop.instance().run_sync(print_sum(1,2))
```
But unfortunately the Tornado code raises this exception:
```
Compute 1 + 2 ...
Traceback (most recent call last):
File "tornado_coroutine.py", line 19, in <module>
IOLoop.instance().run_sync(print_sum(1, 2))
File "C:\Python34\lib\site-packages\tornado\ioloop.py", line 421, in run_sync
return future_cell[0].result()
File "C:\Python34\lib\site-packages\tornado\concurrent.py", line 209, in resul
t
raise_exc_info(self._exc_info)
File "<string>", line 3, in raise_exc_info
File "C:\Python34\lib\site-packages\tornado\ioloop.py", line 402, in run
result = func()
TypeError: 'Future' object is not callable
```
Maybe IOLoop tries to make a new "lap" after all coroutines have returned their values? | `run_sync` takes a function as an argument. You are calling the function in place and then passing its result as the argument. You can create an anonymous function simply by using `lambda`:
```
IOLoop.instance().run_sync(lambda: print_sum(1,2))
``` |
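The root cause can be illustrated without Tornado at all (a plain-Python sketch of my own): calling the coroutine immediately yields its result object, while `run_sync` wants something it can call itself later.

```
def print_sum(x, y):          # stand-in for the coroutine
    return x + y

value = print_sum(1, 2)       # evaluated right now -> 3 (a Future, in Tornado)
deferred = lambda: print_sum(1, 2)  # zero-argument callable, evaluated later

print(callable(value))        # False -- this is what run_sync rejects
print(callable(deferred))     # True
print(deferred())             # 3
```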
Python zip by key | 29,645,415 | 3 | 2015-04-15T08:35:33Z | 29,645,569 | 9 | 2015-04-15T08:43:13Z | [
"python",
"list"
] | I'd like to combine (zip?) two Python lists of tuples, but matching on a key.
e.g. I'd like to create a function that takes two input lists and produces an output like this:
```
lst1 = [(0, 1.1), (1, 1.2), (2, 1.3), (5, 2.5)]
lst2 = [ (1, 4.5), (2, 3.4), (4, 2.3), (5, 3.2)]
desiredOutput = [(1, 1.2, 4.5), (2, 1.3, 3.4), (5, 2.5, 3.2)]
```
I could do it very messily and manually with loops, but I figure there must be some `itertools` / zipping functions that will greatly simplify this.
I'm sure the answer is out there and obvious, I just don't have the right termonology to search for it.
==
(( For what it's worth, here's my naive solution. I'm hoping to find something neater / more pythonic:
```
def key_zipper(lst1, lst2):
dict1 = dict(lst1)
dict2 = dict(lst2)
intersectKeys = [k for k in dict1.keys() if k in dict2.keys()]
output = []
for key in intersectKeys:
output.append((key, dict1[key], dict2[key]))
return output
```
Thanks )) | ```
>>> [(i, a, b) for i, a in lst1 for j, b in lst2 if i==j]
[(1, 1.2, 4.5), (2, 1.3, 3.4), (5, 2.5, 3.2)]
``` |
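For longer lists, a hedged alternative of my own (not from the answer): the nested comprehension is O(len(lst1)·len(lst2)), while a dict built from one list gives O(1) lookups:

```
lst1 = [(0, 1.1), (1, 1.2), (2, 1.3), (5, 2.5)]
lst2 = [(1, 4.5), (2, 3.4), (4, 2.3), (5, 3.2)]

d2 = dict(lst2)  # key -> value lookup for the second list
result = [(i, a, d2[i]) for i, a in lst1 if i in d2]
print(result)  # [(1, 1.2, 4.5), (2, 1.3, 3.4), (5, 2.5, 3.2)]
```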
igraph Graph from numpy or pandas adjacency matrix | 29,655,111 | 11 | 2015-04-15T15:57:19Z | 29,673,192 | 15 | 2015-04-16T11:21:46Z | [
"python",
"numpy",
"pandas",
"igraph"
] | I have an adjacency matrix stored as a `pandas.DataFrame`:
```
node_names = ['A', 'B', 'C']
a = pd.DataFrame([[1,2,3],[3,1,1],[4,0,2]],
index=node_names, columns=node_names)
a_numpy = a.as_matrix()
```
I'd like to create an `igraph.Graph` from either the `pandas` or the `numpy` adjacency matrices. In an ideal world the nodes would be named as expected.
Is this possible? [The tutorial](http://igraph.org/python/doc/tutorial/tutorial.html) seems to be silent on the issue. | In igraph you can use [`igraph.Graph.Adjacency`](http://igraph.org/python/doc/igraph.GraphBase-class.html#Adjacency) to create a graph from an adjacency matrix without having to use `zip`. There are some things to be aware of when a weighted adjacency matrix is used and stored in a `np.array` or `pd.DataFrame`.
* `igraph.Graph.Adjacency` can't take an `np.array` as argument, but that is easily solved using [`tolist`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html).
* Integers in adjacency-matrix are interpreted as number of edges between nodes rather than weights, solved by using adjacency as boolean.
An example of how to do it:
```
import igraph
import pandas as pd
node_names = ['A', 'B', 'C']
a = pd.DataFrame([[1,2,3],[3,1,1],[4,0,2]], index=node_names, columns=node_names)
# Get the values as an np.array; it's more convenient.
A = a.values
# Create graph, A.astype(bool).tolist() or (A / A).tolist() can also be used.
g = igraph.Graph.Adjacency((A > 0).tolist())
# Add edge weights and node labels.
g.es['weight'] = A[A.nonzero()]
g.vs['label'] = node_names # or a.index/a.columns
```
You can reconstruct your adjacency dataframe using [`get_adjacency`](http://igraph.org/python/doc/igraph.Graph-class.html#get_adjacency) by:
```
df_from_g = pd.DataFrame(g.get_adjacency(attribute='weight').data,
columns=g.vs['label'], index=g.vs['label'])
(df_from_g == a).all().all() # --> True
``` |
What does extra parameter mean in this for-loop in Python? | 29,655,449 | 6 | 2015-04-15T16:13:47Z | 29,655,554 | 7 | 2015-04-15T16:18:07Z | [
"python",
"for-loop"
] | I have come across a for-loop that is unusual to me. What does `method` mean in this for-loop?
`for method, config in self.myList.items():` | `items()` is a method used on Python `dictionaries` to return an `iterable` holding `tuples` for each of the dictionary's `keys` and their corresponding `value`.
In Python you can unpack `lists` and `tuples` into variables using the method you've shown.
e.g.:
```
item1, item2 = [1,2]
# now we have item1 = 1, item2 = 2
```
Therefore, assuming `self.myList` is a `dict`, `method` and `config` would relate to the `key` and `value` in each `tuple` for that iteration respectively.
If `self.myList` is not a `dict`, I would assume it either inherits from `dict` or its `items()` method behaves similarly (you would know better). |
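A runnable sketch of the unpacking described above (the dictionary contents are made up for illustration):

```
my_list = {'GET': {'timeout': 5}, 'POST': {'retries': 3}}

# Each element of items() is a (key, value) tuple, and the loop target
# "method, config" unpacks it -- the same mechanism as item1, item2 = (1, 2).
pairs = []
for method, config in my_list.items():
    pairs.append((method, config))

print(sorted(pairs))  # [('GET', {'timeout': 5}), ('POST', {'retries': 3})]
```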