| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Python Sniffing from Black Hat Python book | 29,306,747 | 7 | 2015-03-27T17:33:41Z | 29,307,402 | 13 | 2015-03-27T18:08:02Z | [
"python",
"linux",
"sockets",
"networking"
] | ```
import socket
import os
import struct
import sys
from ctypes import *

# host to listen on
host = sys.argv[1]

class IP(Structure):
    _fields_ = [
        ("ihl", c_ubyte, 4),
        ("version", c_ubyte, 4),
        ("tos", c_ubyte),
        ("len", c_ushort),
        ("id", c_ushort),
        ("offset", c_ushort),
        ("ttl", c_ubyte),
        ("protocol_num", c_ubyte),
        ("sum", c_ushort),
        ("src", c_ulong),
        ("dst", c_ulong)
    ]

    def __new__(self, socket_buffer=None):
        return self.from_buffer_copy(socket_buffer)

    def __init__(self, socket_buffer=None):
        # map protocol constants to their names
        self.protocol_map = {1: "ICMP", 6: "TCP", 17: "UDP"}

        # human readable IP addresses
        self.src_address = socket.inet_ntoa(struct.pack("<L", self.src))
        self.dst_address = socket.inet_ntoa(struct.pack("<L", self.dst))

        # human readable protocol
        try:
            self.protocol = self.protocol_map[self.protocol_num]
        except:
            self.protocol = str(self.protocol_num)

# create a raw socket and bind it to the public interface
if os.name == "nt":
    socket_protocol = socket.IPPROTO_IP
else:
    socket_protocol = socket.IPPROTO_ICMP

sniffer = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket_protocol)
sniffer.bind((host, 0))

# we want the IP headers included in the capture
sniffer.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

# if we're on Windows we need to send some ioctls
# to setup promiscuous mode
if os.name == "nt":
    sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)

try:
    while True:
        # read in a single packet
        raw_buffer = sniffer.recvfrom(65565)[0]

        # create an IP header from the first 20 bytes of the buffer
        ip_header = IP(raw_buffer[0:20])

        print "Protocol: %s %s -> %s" % (ip_header.protocol, ip_header.src_address, ip_header.dst_address)
except KeyboardInterrupt:
    # if we're on Windows turn off promiscuous mode
    if os.name == "nt":
        sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)
```
This is code from the book Black Hat Python. This code is supposed to sniff with raw sockets and display information from the IP header. It works fine for me on Windows (using Windows 8.1 64-bit). When I attempt to run this on Linux (Kali Linux 1.1.0-amd64) I get the following error:
```
ValueError: Buffer size too small (20 instead of at least 32 bytes)
```
To get around that I added 12 spaces to the buffer like this
```
ip_header = IP(raw_buffer[0:20]+' '*12)
```
When I do that I get the following error
```
struct.error: 'L' format requires 0 <= number <= 4294967295
```
This occurs on the line
```
self.src_address = socket.inet_ntoa(struct.pack("<L",self.src))
```
I have tried changing the symbol before the L to > and !, and I tried it with just L; all of them give me the same issue. I also tried wrapping self.src in ntohs like so:
```
self.src_address = socket.inet_ntoa(struct.pack("<L",socket.ntohs(self.src)))
```
I'm thinking this has something to do with endianness but I'm not sure. Any help would be greatly appreciated.
NOTE: On windows you have to run as an administrator and on linux you have to run as a superuser because of the raw sockets. If you run this on linux open up another terminal and ping www.google.com so you can generate some ICMP packets for it to capture.
EDIT: I have also tried reversing the buffer with
```
ip_header = IP(raw_buffer[0:20][::-1]+' '*12)
```
EDIT 2: I did try both 65535 and 65534 on the below line before doing any of the other items I listed here.
```
raw_buffer = sniffer.recvfrom(65565)[0]
```
EDIT 3: This worked on an Ubuntu machine running Python 2.7.6, while my Kali distro had 2.7.3, so I decided to get the latest version of Python on my Kali box, which happens to be 2.7.9. Still no luck.
I placed the following code in the `__new__` function of my structure to view the buffer size:
```
print sizeof(self)
```
On my Ubuntu and Windows machines it was 20; however, on my Kali machine it was 32. | ```
#raw_buffer = sniffer.recvfrom(65565)[0]
raw_buffer = sniffer.recvfrom(65535)[0]
```
The maximum IP packet size is (2^16) - 1 = 65535 bytes.
The problem is with 32 vs 64 bit systems.
`ip_header = IP(raw_buffer[:20])` works on x86 Ubuntu.
`ip_header = IP(raw_buffer[:32])` works on amd64 CentOS 6.6 Python 2.6.6
`ip_header = IP(raw_buffer)` works in both.
You have to change these,
```
("src", c_ulong),
("dst", c_ulong)
self.src_address = socket.inet_ntoa(struct.pack("<L",self.src))
self.dst_address = socket.inet_ntoa(struct.pack("<L",self.dst))
```
into
```
("src", c_uint32),
("dst", c_uint32)
self.src_address = socket.inet_ntoa(struct.pack("@I",self.src))
self.dst_address = socket.inet_ntoa(struct.pack("@I",self.dst))
```
`'@I'` is an unsigned int in native byte order.
The change is needed because `c_ulong` is 4 bytes on i386 but 8 on amd64. Check the following:
```
struct.calcsize('@BBHHHBBHLL')
```
is 20 on i386 and 32 on amd64, which is the size of `_fields_`. Actually it's 28 bytes of fields on amd64, plus 4 bytes of padding for word alignment.
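A quick check you can run on any machine (a small illustrative snippet) confirms the fix: with fixed-width `I` fields the header layout is pinned at 20 bytes everywhere.

```python
import struct
from ctypes import c_uint32, sizeof

# c_uint32 is exactly 4 bytes on every platform, while c_ulong follows the
# platform's native unsigned long (4 on i386/Windows, 8 on most 64-bit Unixes)
print(sizeof(c_uint32))                # 4
print(struct.calcsize('@BBHHHBBHII'))  # 20, the real IPv4 header size
```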
`ip_header = IP(raw_buffer[:20])` now works correctly independent of platforms. |
fabric Import Error: cannot import name 'isMappingType' | 29,306,752 | 5 | 2015-03-27T17:34:04Z | 29,306,872 | 9 | 2015-03-27T17:40:22Z | [
"python",
"django",
"fabric"
] | I came across "ImportError: cannot import name 'isMappingType'" in the middle of the process of deploying a fabfile for a Django project.
1. Here is the structure of my fabfile.py:
```
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm
from fabric.contrib.files import append, exists, sed

env.hosts = ["127.0.0.1"]
env.user = raw_input('Please enter user:')

def deploy():
    sudo("apt-get update -y")
    sudo("apt-get install git -y")
    sudo("apt-get install postgresql libpq-dev python-dev python-pip -y")

    code_dir = 'backend-directory'
    if exists(code_dir):
        run('cd %s && git pull' % (code_dir,))
    else:
        run("git clone git://serveraddress/projects/backend-directory")

    with cd(code_dir):
        sudo("pip install virtualenv")
        run("virtualenv -p /usr/bin/python3.4 venv")
        run("source venv/bin/activate")
        #sudo("pip install -r requirements/dev.txt")
        sudo("pip install -r requirements/production.txt")

    with settings(warn_only=True):
        with settings(sudo_user = 'postgres'):
            sudo("psql -c " + '"CREATE USER new_user WITH PASSWORD ' + "'new_password';" + '"')
            sudo("psql -c 'ALTER USER new_user CREATEDB;'")
            sudo("psql -c 'CREATE DATABASE newdb;'")
            sudo("psql -c 'GRANT ALL PRIVILEGES ON DATABASE 'newdb' to new_user;'")
        if run("nginx -v").failed:
            sudo("apt-get install nginx -y")

    code_dir = 'frontend-directory'
    if exists(code_dir):
        run('cd %s && git pull' % (code_dir,))
    else:
        run("git clone git://serveraddress/frontend-directory")

    code_dir = 'backend-directory/project_site'
    with cd(code_dir):
        run("python manage.py makemigrations --settings=project.settings.development")
        run("python manage.py migrate --settings=project.settings.development")

    sudo("/etc/init.d/nginx start")

    with settings(warn_only=True):
        if run("find /etc/uwsgi").failed:
            sudo("mkdir /etc/uwsgi")
        if run("find /etc/uwsgi/vassals").failed:
            sudo("mkdir /etc/uwsgi/vassals")
        if run("find /etc/uwsgi/vassals/pam_uwsgi.ini").failed:
            sudo("ln -s ~/backend-direcoty/project_site/pam_uwsgi.ini /etc/uwsgi/vassals/")

    run("uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data")
```
Next, I have executed the below command in virtual env
```
(venv)praneeth@praneeth-Latitude-E6400 ~/wru-pam $ fab deploy
```
I got the following traceback:
```
Traceback (most recent call last):
File "/home/praneeth/wru-pam/venv/bin/fab", line 9, in <module>
load_entry_point('Fabric==1.10.1', 'console_scripts', 'fab')()
File "/home/praneeth/wru-pam/venv/lib/python3.4/site-packages/pkg_resources/__init__.py", line 474, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/home/praneeth/wru-pam/venv/lib/python3.4/site-packages/pkg_resources/__init__.py", line 2582, in load_entry_point
return ep.load()
File "/home/praneeth/wru-pam/venv/lib/python3.4/site-packages/pkg_resources/__init__.py", line 2265, in load
return self._load()
File "/home/praneeth/wru-pam/venv/lib/python3.4/site-packages/pkg_resources/__init__.py", line 2268, in _load
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/home/praneeth/wru-pam/venv/lib/python3.4/site-packages/fabric/main.py", line 12, in <module>
from operator import isMappingType
ImportError: cannot import name 'isMappingType'
```
What might be the reason for this import error? | [`fabric` *doesn't support Python 3*](https://github.com/fabric/fabric):
> Fabric is a Python (2.5-2.7) library and command-line tool for
> streamlining the use of SSH for application deployment or systems
> administration tasks.
See also other points and workarounds at:
* [Python 3 support for fabric](http://stackoverflow.com/questions/18736274/python-3-support-for-fabric)
* [python3 compatibility?](https://github.com/fabric/fabric/issues/1017)
From what I understand, migrating to [`invoke`](https://github.com/pyinvoke/invoke) is the first thing to consider.
---
Quick test demonstrating the problem:
```
$ python2.7
>>> from operator import isMappingType
>>>
$ python3.4
>>> from operator import isMappingType
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'isMappingType'
``` |
Why am I getting a "Task was destroyed but it is pending" error in Python asyncio? | 29,307,698 | 5 | 2015-03-27T18:27:04Z | 29,307,819 | 9 | 2015-03-27T18:33:31Z | [
"python",
"python-asyncio",
"aiohttp"
] | I use `asyncio` and beautiful `aiohttp`. The main idea is that I make request to server (it returns links) and then I want to download files from all links in **parallel** (something like in an [example](https://docs.python.org/3/library/asyncio-task.html#example-parallel-execution-of-tasks)).
Code:
```
import aiohttp
import asyncio

@asyncio.coroutine
def downloader(file):
    print('Download', file['title'])
    yield from asyncio.sleep(1.0)  # some actions to download
    print('OK', file['title'])

def run():
    r = yield from aiohttp.request('get', 'my_url.com', True)
    raw = yield from r.json()
    tasks = []
    for file in raw['files']:
        tasks.append(asyncio.async(downloader(file)))
        asyncio.wait(tasks)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run())
```
But, when I try to run it, I have many "Download ..." outputs and
```
Task was destroyed but it is pending!
```
And nothing about 'OK + filename'.
How can I fix that? | You forgot to `yield from` the call to `asyncio.wait`. You also probably have the indentation on it wrong; you only want to run it after you've iterated over the entire `raw['files']` list. Here's the complete example with both mistakes fixed:
```
import aiohttp
import asyncio

@asyncio.coroutine
def downloader(file):
    print('Download', file['title'])
    yield from asyncio.sleep(1.0)  # some actions to download
    print('OK', file['title'])

@asyncio.coroutine
def run():
    r = yield from aiohttp.request('get', 'my_url.com', True)
    raw = yield from r.json()
    tasks = []
    for file in raw['files']:
        tasks.append(asyncio.async(downloader(file)))
    yield from asyncio.wait(tasks)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run())
```
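(Note for readers on newer Python: `@asyncio.coroutine` and `asyncio.async` were later deprecated and removed. The same fix of awaiting the `wait` call looks like this in modern `async`/`await` syntax; this is a self-contained sketch with `asyncio.sleep` standing in for the aiohttp work.)

```python
import asyncio

async def downloader(title):
    await asyncio.sleep(0.01)  # stand-in for the real download
    return 'OK ' + title

async def run(titles):
    # schedule every download first, then wait for ALL of them to finish
    tasks = [asyncio.ensure_future(downloader(t)) for t in titles]
    done, _pending = await asyncio.wait(tasks)
    return sorted(t.result() for t in done)

results = asyncio.run(run(['a.bin', 'b.bin']))
print(results)  # ['OK a.bin', 'OK b.bin']
```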
Without the call to `yield from`, `run` exits immediately after you've iterated over the entire list of files, which will mean your script exits, causing a whole bunch of unfinished `downloader` tasks to be destroyed, and the warning you saw to be displayed. |
How to replace characters in string by the next one? | 29,307,814 | 2 | 2015-03-27T18:33:14Z | 29,307,923 | 8 | 2015-03-27T18:39:12Z | [
"python"
] | I would like to replace every character of a string with the next one and the last should become first. Here is an example:
```
abcdefghijklmnopqrstuvwxyz
```
should become:
```
bcdefghijklmnopqrstuvwxyza
```
Is it possible to do it without using the replace function 26 times? | You can use the [`str.translate()` method](https://docs.python.org/2/library/stdtypes.html#str.translate) to have Python replace characters by other characters in one step.
Use the [`string.maketrans()` function](https://docs.python.org/2/library/string.html#string.maketrans) to map ASCII characters to their targets; using [`string.ascii_lowercase`](https://docs.python.org/2/library/string.html#string.ascii_lowercase) can help here as it saves you typing all the letters yourself:
```
from string import ascii_lowercase
try:
# Python 2
from string import maketrans
except ImportError:
# Python 3 made maketrans a static method
maketrans = str.maketrans
cipher_map = maketrans(ascii_lowercase, ascii_lowercase[1:] + ascii_lowercase[:1])
encrypted = text.translate(cipher_map)
```
Demo:
```
>>> from string import maketrans
>>> from string import ascii_lowercase
>>> cipher_map = maketrans(ascii_lowercase, ascii_lowercase[1:] + ascii_lowercase[:1])
>>> text = 'the quick brown fox jumped over the lazy dog'
>>> text.translate(cipher_map)
'uif rvjdl cspxo gpy kvnqfe pwfs uif mbaz eph'
``` |
Converting List to Dict | 29,309,643 | 4 | 2015-03-27T20:27:38Z | 29,309,683 | 15 | 2015-03-27T20:30:14Z | [
"python",
"python-2.7"
] | When I had a list that was in format of
```
list1 = [[James,24],[Ryan,21],[Tim,32]...etc]
```
I could use
```
dic1 =dict(list1)
```
However now lets say I have multiple values such as
```
list1 = [[James,24,Canada,Blue,Tall],[Ryan,21,U.S.,Green,Short],
[Tim,32,Mexico,Yellow,Average]...etc]
```
I have no idea how to go about creating a dict so that it would show the first name as the key and the following values as the value.
Thanks in advance | You can use a [dict comprehension](https://docs.python.org/2/tutorial/datastructures.html#dictionaries) and slicing :
```
>>> list1 = [['James','24','Canada','Blue','Tall'],['Ryan','21','U.S.','Green','Short']]
>>> {i[0]:i[1:] for i in list1}
{'James': ['24', 'Canada', 'Blue', 'Tall'], 'Ryan': ['21', 'U.S.', 'Green', 'Short']}
```
In Python 3 you can use a more elegant way with unpacking:
```
>>> {i:j for i,*j in list1}
{'James': ['24', 'Canada', 'Blue', 'Tall'], 'Ryan': ['21', 'U.S.', 'Green', 'Short']}
``` |
Django Rest Framework 3 Serializers on non-Model objects? | 29,310,000 | 7 | 2015-03-27T20:51:38Z | 29,314,232 | 8 | 2015-03-28T06:05:36Z | [
"python",
"django",
"serialization",
"django-rest-framework"
] | I'm doing an upgrade to DRF 3.1.1 from 2.4. I was using a custom serializer to create an instance of an object that's not a Model.
In 2.4, it was easy enough to do this because in the serializer, I would create the object in `restore_object()`. In the view, i'd call `serializer.is_valid()` and then pop the instance of the object out of the serializer with `serializer.object`. Then I could do whatever I want.
With the 3.x changes, it's harder to get the instance out of the object because the create and update methods are supposed to do the saving, and "serializer.object" isn't available anymore.
As an example, I used to have this for my "UserRegistration" Object. This is not a model because it's a convenience object that the server parses up and stores data in a number of other objects/db tables.
```
class UserRegistration(object):
    def __init__(self, full_name, stage_name, password="", email="", locale="en_US"):
        self.full_name = full_name
        self.password = password
        self.locale = locale
        self.email = email
        self.stage_name = stage_name
```
Here's the associated DRF-2.4 serializer:
```
class UserRegistrationSerializer(serializers.Serializer):
    full_name = serializers.CharField(max_length=128, required=False)
    stage_name = serializers.CharField(max_length=128)
    password = serializers.CharField(max_length=128, required=False)
    locale = serializers.CharField(max_length=10, required=False)
    # use CharField instead of EmailField for email. We do our own validation later to make for a better error msg.
    email = serializers.CharField(max_length=254, required=False)

    def restore_object(self, attrs, instance=None):
        if instance is not None:
            instance.full_name = attrs.get('full_name', instance.full_name)
            instance.password = attrs.get('password', instance.password)
            instance.locale = attrs.get('locale', instance.locale)
            instance.email = attrs.get('email', instance.email)
            instance.stage_name = attrs.get('stage_name', instance.stage_name)
            return instance
        return UserRegistration(**attrs)
```
Then in my view, I do something like this:
```
class UserRegistration(APIView):
    throttle_classes = ()
    serializer_class = UserRegistrationSerializer

    def post(self, request, format=None):
        event_type = "user_registration"
        serializer = UserRegistrationSerializer(data=request.DATA, context={'request': request})
        try:
            if serializer.is_valid():
                user_registration = serializer.object
                # save user_registration pieces in various places...
```
However, in DRF 3, `serializer.object` is gone. The docs say to do "validation" using `serializer.validated_data`, but that's just a hash and not the real object. Is there a way to get the object?
The whole thing seems more married to DB objects, which in this particular case is exactly what i'm trying to avoid.
Am I just missing some new DRF3 concept? | Thanks @levi for the beginnings of an answer, but unfortunately, that's not all of it, so I think this is a more complete answer.
I originally asked:
> Am I just missing some new DRF3 concept?
Turns out...Yep. I was. The docs talk about the new `Single-step object creation`, which made me think the serialization and model had become more tightly coupled. This thought was incorrect, because if you write your own custom serializer, it's up to you to do the actual object save (**or not**) in the new `serializer.update()` and `serializer.create()` methods.
I also asked:
> In 2.4, it was easy enough to do this because in the serializer, I would create the object in restore\_object(). In the view, i'd call serializer.is\_valid() and then pop the instance of the object out of the serializer with serializer.object. Then I could do whatever I want.
>
> With the 3.x changes, it's harder to get the instance out of the object because the create and update methods are supposed to do the saving, and "serializer.object" isn't available anymore.
Although there's no `serializer.object` that you can use to pull the created object out of after calling `serializer.is_valid()`, the `serializer.save()` method returns the object itself, which in my case was just fine.
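In plain Python terms, the contract looks roughly like this (a simplified sketch of the flow, not DRF's actual implementation; field validation is elided):

```python
class SerializerSketch(object):
    """Mimics the DRF 3 flow: validate, then save() dispatches to
    create()/update() and returns the resulting object."""

    def __init__(self, data, instance=None):
        self.initial_data = data
        self.instance = instance

    def is_valid(self):
        # real DRF runs per-field validation here
        self.validated_data = dict(self.initial_data)
        return True

    def save(self):
        if self.instance is not None:
            self.instance = self.update(self.instance, self.validated_data)
        else:
            self.instance = self.create(self.validated_data)
        # this return value is what replaces the old serializer.object
        return self.instance

    def create(self, validated_data):
        return dict(validated_data)  # any object will do; no DB required

    def update(self, instance, validated_data):
        instance.update(validated_data)
        return instance

s = SerializerSketch({'stage_name': 'neo'})
assert s.is_valid()
obj = s.save()
print(obj)  # {'stage_name': 'neo'}
```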
So, turns out, the code change wasn't very big at all. Here's my new code that is pretty happy with DRF-3:
```
class UserRegistration(object):
    def __init__(self, full_name, stage_name, password="", email="", locale="en_US", notification_pref="ask"):
        self.full_name = full_name
        self.password = password
        self.locale = locale
        self.email = email
        self.stage_name = stage_name

class UserRegistrationSerializer(serializers.Serializer):
    full_name = serializers.CharField(max_length=128, required=False)
    stage_name = serializers.CharField(max_length=128)
    password = serializers.CharField(max_length=128, required=False)
    locale = serializers.CharField(max_length=10, required=False)
    # use CharField instead of EmailField for email. We do our own validation later to make for a better error msg.
    email = serializers.CharField(max_length=254, required=False)

    def update(self, instance, validated_data):
        instance.full_name = validated_data.get('full_name', instance.full_name)
        instance.password = validated_data.get('password', instance.password)
        instance.locale = validated_data.get('locale', instance.locale)
        instance.email = validated_data.get('email', instance.email)
        instance.stage_name = validated_data.get('stage_name', instance.stage_name)
        return instance

    def create(self, validated_data):
        return UserRegistration(**validated_data)
```
notice that there's no saving of the object to any DB in the Serializer. I'm just creating or updating the object and then returning it.
Now the view looks like this:
```
class UserRegistration(APIView):
    throttle_classes = ()
    serializer_class = UserRegistrationSerializer

    def post(self, request, format=None):
        event_type = "user_registration"
        serializer = UserRegistrationSerializer(data=request.DATA, context={'request': request})
        try:
            if serializer.is_valid():
                user_registration = serializer.save()
                # save user_registration pieces in various places...
```
I also said in my original post:
> The whole thing seems more married to DB objects, which in this particular case is exactly what i'm trying to avoid.
This statement was also incorrect as seen by the fact that the create and update methods don't have to save anything to any DB.
One caveat here is that the code is functional, but obviously I'm just wrapping my head around some of the DRF2.x->3.x changes, so I could be doing this in a non-DRF way. If so, someone who knows please feel free to tell me how to do it better. :) |
Django programming error 1146 table doesn't exist | 29,310,117 | 2 | 2015-03-27T20:59:43Z | 29,310,275 | 9 | 2015-03-27T21:11:52Z | [
"python",
"mysql",
"django",
"database-migration"
] | I'm setting up my Django project on a new remote server. When trying to set up the database by running `python manage.py migrate` to run all migrations, I get the following error:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 354, in execute
django.setup()
File "/usr/local/lib/python2.7/dist-packages/django/__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 115, in populate
app_config.ready()
File "/usr/local/lib/python2.7/dist-packages/django/contrib/admin/apps.py", line 22, in ready
self.module.autodiscover()
File "/usr/local/lib/python2.7/dist-packages/django/contrib/admin/__init__.py", line 23, in autodiscover
autodiscover_modules('admin', register_to=site)
File "/usr/local/lib/python2.7/dist-packages/django/utils/module_loading.py", line 74, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/django/kwp/app/admin.py", line 3, in <module>
from app.views import genCustCode
File "/home/django/kwp/app/views.py", line 6, in <module>
from app.forms import *
File "/home/django/kwp/app/forms.py", line 466, in <module>
tag_choices = ((obj.id, obj.tag) for obj in BlogTag.objects.all())
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 141, in __iter__
self._fetch_all()
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 966, in _fetch_all
self._result_cache = list(self.iterator())
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 265, in iterator
for row in compiler.results_iter():
File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/compiler.py", line 700, in results_iter
for rows in self.execute_sql(MULTI):
File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/compiler.py", line 786, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 81, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/mysql/base.py", line 128, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
django.db.utils.ProgrammingError: (1146, "Table 'kwp_db_prod.app_blogtag' doesn't exist")
```
I'm not sure what the issue is but it looks like my `forms.py` is getting called and looking for the `BlogTag` table before that table exists. Why would this happen when running migrations? Any way to get around this?
Thanks for your help. | There's a problem in the way your code is written, specifically this line:
```
tag_choices = ((obj.id, obj.tag) for obj in BlogTag.objects.all())
```
In forms.py, you shouldn't do any QuerySet filtering at module level, because it is executed when the module loads; call it inside a function instead.
That's why your migration cannot be applied: you're trying to get data from the database, via a queryset, while the module (forms.py) loads, but the table doesn't exist before the migration runs. :)
Try commenting out this line and applying your migrations again.
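The usual fix is to compute the choices lazily, for example inside the form's `__init__`. A plain-Python sketch of the idea (`BlogForm` and `fetch_tags` are hypothetical stand-ins, not Django API):

```python
class BlogForm(object):
    def __init__(self, fetch_tags):
        # the "query" runs only when a form instance is built, not when the
        # module is imported, so migrations can import forms.py safely
        self.tag_choices = [(tag_id, tag) for tag_id, tag in fetch_tags()]

fake_queryset = lambda: [(1, 'django'), (2, 'python')]  # stands in for BlogTag.objects.all()
form = BlogForm(fake_queryset)
print(form.tag_choices)  # [(1, 'django'), (2, 'python')]
```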
But remember that this is bad practice: if there are other lines like this one, you'll need to comment them out to apply the migration, and ideally move them into functions. |
Get the product of lists inside a dict while retaining the same keys | 29,311,131 | 3 | 2015-03-27T22:19:19Z | 29,311,222 | 7 | 2015-03-27T22:27:49Z | [
"python",
"list",
"dictionary",
"product"
] | I have the following dict:
`my_dict = {'A': [1, 2], 'B': [1, 4]}`
And I want to end up with a list of dicts like this:
```
[
{'A': 1, 'B': 1},
{'A': 1, 'B': 4},
{'A': 2, 'B': 1},
{'A': 2, 'B': 4}
]
```
So, I'm after the product of dict's lists, expressed as a list of dicts using the same keys as the incoming dict.
The closest I got was:
```
my_dict = {'A': [1, 2], 'B': [1, 4]}
it = []
for k in my_dict.keys():
    current = my_dict.pop(k)
    for i in current:
        it.append({k2: i2 for k2, i2 in my_dict.iteritems()})
        it[-1].update({k: i})
```
Which, apart from looking hideous, doesn't give me what I want:
```
[
{'A': 1, 'B': [1, 4]},
{'A': 2, 'B': [1, 4]},
{'B': 1},
{'B': 4}
]
```
If anyone feels like solving a riddle, I'd love to see how you'd approach it. | You can use [`itertools.product`](https://docs.python.org/2/library/itertools.html#itertools.product) for this, i.e. compute the Cartesian product of the values and then zip each result with the keys from the dictionary. Note that the [ordering of a dict](https://docs.python.org/2/library/stdtypes.html#mapping-types-dict)'s `keys()` and corresponding `values()` stays consistent as long as the dict is not modified in between, so ordering won't be an issue here:
```
>>> from itertools import product
>>> my_dict = {'A': [1, 2], 'B': [1, 4]}
>>> keys = list(my_dict)
>>> [dict(zip(keys, p)) for p in product(*my_dict.values())]
[{'A': 1, 'B': 1}, {'A': 1, 'B': 4}, {'A': 2, 'B': 1}, {'A': 2, 'B': 4}]
``` |
How to set the timezone in Django? | 29,311,354 | 24 | 2015-03-27T22:41:33Z | 29,311,392 | 44 | 2015-03-27T22:45:07Z | [
"python",
"django",
"timezone",
"utc"
] | In my Django project's `settings.py` file, I have this line:
```
TIME_ZONE = 'UTC'
```
But I want my app to run in UTC+2 timezone, so I changed it to
```
TIME_ZONE = 'UTC+2'
```
It gives the error `ValueError: Incorrect timezone setting: UTC+2`. What is the correct way of doing this?
Thanks! | Here is the list of valid timezones:
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>
You can use
```
TIME_ZONE = 'Europe/Istanbul'
```
for UTC+02:00 |
Reverse a Python string without omitting start and end slice | 29,312,383 | 20 | 2015-03-28T00:51:32Z | 29,312,436 | 13 | 2015-03-28T00:59:56Z | [
"python",
"reverse",
"slice"
] | How do you reverse a Python string without omitting the start and end slice arguments?
```
word = "hello"
reversed_word = word[::-1]
```
I understand that this works, but how would I get the result by specifying the start and end indexes?
```
word = "hello"
reversed_word = word[?:?:-1]
``` | Not quite sure why, but the following will return the reverse of `word`:
```
word = "hello"
word[len(word):-(len(word)+1):-1]
```
Or...
```
word = "hello"
word[len(word):-len(word)-1:-1]
```
**Edit (Explanation):**
From jedward's comment:
> The middle parameter is the trickiest, but it's pretty straightforward
> once you realize (a) negative indices to slice start/stop indicate
> that you want to count "backwards" from the end of the string, and (b)
> the stop index is exclusive, so it will "count" up to but stop
> at/before the stop index. word[len(word):-len(word)-1:-1] is probably
> more clear.
In response to [this](http://stackoverflow.com/questions/29312383/reverse-python-string-without-omitting-start-and-end-slice/29312436?noredirect=1#comment46818437_29312436) comment:
The third value is actually the increment so you are telling Python that you want to start at the last letter then return all the (-1)st values up to the last one.
Here is a drawing (pulled together in a minute):

The drawing shows that we can also use this instead:
```
word = "hello"
word[-1:-len(word)-1:-1] #-1 as the first
``` |
Reverse a Python string without omitting start and end slice | 29,312,383 | 20 | 2015-03-28T00:51:32Z | 29,313,612 | 15 | 2015-03-28T04:14:52Z | [
"python",
"reverse",
"slice"
] | How do you reverse a Python string without omitting the start and end slice arguments?
```
word = "hello"
reversed_word = word[::-1]
```
I understand that this works, but how would I get the result by specifying the start and end indexes?
```
word = "hello"
reversed_word = word[?:?:-1]
``` | Some other ways to reverse a string:
```
word = "hello"
reversed_word1 = word[-1: :-1]
reversed_word2 = word[len(word): :-1]
reversed_word3 = word[:-len(word)-1 :-1]
```
One thing you should note about the slicing notation `a[i:j:k]` is that **omitting `i` and `j` doesn't always mean that `i` will become `0` and `j` will become `len(s)`. [It depends upon the sign of `k`](https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange)**. By default `k` is `+1`.
* If `k` is +ve then the default value of `i` is `0` (start from the beginning). If it is -ve then the default value of `i` is `-1` (start from the end).
* If `k` is +ve then the default value of `j` is `len(s)` (stop at the end). If it is -ve then the default value of `j` is `-(len(s)+1)` (stop at the beginning).
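Those defaults are easy to confirm interactively:

```python
word = 'hello'

# k = -1: the omitted start defaults to -1 (the end) and the omitted stop
# defaults to one position before index 0, i.e. -(len(word) + 1)
assert word[::-1] == word[-1:-len(word) - 1:-1] == 'olleh'

# k = +1: the defaults are simply 0 and len(word)
assert word[::1] == word[0:len(word):1] == 'hello'
print(word[::-1])  # olleh
```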
Now you can [explain](http://stackoverflow.com/questions/29312383/reverse-a-python-string-without-omitting-start-and-end-slice/29313612#comment46818372_29312383) your students how `Hello[::-1]` prints `olleH`. |
Is there a better way to run a randomly chosen function? | 29,313,319 | 2 | 2015-03-28T03:25:50Z | 29,313,360 | 9 | 2015-03-28T03:31:35Z | [
"python",
"function",
"eval"
] | I made a Tic-Tac-Toe game, and for one of the AI's I made, it chooses a random function enclosed in quotes (so it is a string) and calls `eval()` on it. Here's what it look like:
```
import random
func_list = ["func1()", "func2()", "func3()"]
eval(random.choice(func_list))
```
I don't really like using `eval()` (not really sure why, just doesn't seem right). I don't like having to put the functions in the list as strings either. **Is there a better way of doing this?** | Functions are first-class objects in Python. That means you can pass them around, and store them in lists as well, just like any other object.
So why not just do this?
```
func_list = [func1, func2, func3]
random.choice(func_list)()
``` |
How do I remove the background from this kind of image? | 29,313,667 | 5 | 2015-03-28T04:23:27Z | 29,314,286 | 9 | 2015-03-28T06:12:59Z | [
"python",
"image-processing",
"scikit-image"
] | 
I want to remove the background of this image to get the person only. I have thousands of images like this: basically, a person and a somewhat whitish background.
What I have done is to use an edge detector like the Canny edge detector or the Sobel filter (from the `skimage` library). Then what I think is possible to do is to whiten the pixels within the edges and blacken the pixels outside. Afterwards, the original image can be masked to get the picture of the person only.
However, it's hard to get a closed boundary using the Canny edge detector. The result using the Sobel filter is not that bad; however, I don't know how to proceed from there.
 | The following code should get you started. You may want to play around with the parameters at the top of the program to fine-tune your extraction:
```
import cv2
import numpy as np
#== Parameters =======================================================================
BLUR = 21
CANNY_THRESH_1 = 10
CANNY_THRESH_2 = 200
MASK_DILATE_ITER = 10
MASK_ERODE_ITER = 10
MASK_COLOR = (0.0,0.0,1.0) # In BGR format
#== Processing =======================================================================
#-- Read image -----------------------------------------------------------------------
img = cv2.imread('C:/Temp/person.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
#-- Edge detection -------------------------------------------------------------------
edges = cv2.Canny(gray, CANNY_THRESH_1, CANNY_THRESH_2)
edges = cv2.dilate(edges, None)
edges = cv2.erode(edges, None)
#-- Find contours in edges, sort by area ---------------------------------------------
contour_info = []
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
for c in contours:
    contour_info.append((
        c,
        cv2.isContourConvex(c),
        cv2.contourArea(c),
    ))
contour_info = sorted(contour_info, key=lambda c: c[2], reverse=True)
max_contour = contour_info[0]
#-- Create empty mask, draw filled polygon on it corresponding to largest contour ----
# Mask is black, polygon is white
mask = np.zeros(edges.shape)
cv2.fillConvexPoly(mask, max_contour[0], (255))
#-- Smooth mask, then blur it --------------------------------------------------------
mask = cv2.dilate(mask, None, iterations=MASK_DILATE_ITER)
mask = cv2.erode(mask, None, iterations=MASK_ERODE_ITER)
mask = cv2.GaussianBlur(mask, (BLUR, BLUR), 0)
mask_stack = np.dstack([mask]*3) # Create 3-channel alpha mask
#-- Blend masked img into MASK_COLOR background --------------------------------------
mask_stack = mask_stack.astype('float32') / 255.0 # Use float matrices,
img = img.astype('float32') / 255.0 # for easy blending
masked = (mask_stack * img) + ((1-mask_stack) * MASK_COLOR) # Blend
masked = (masked * 255).astype('uint8') # Convert back to 8-bit
cv2.imshow('img', masked) # Display
cv2.waitKey()
#cv2.imwrite('C:/Temp/person-masked.jpg', masked) # Save
```
Output:
 |
Python Pandas DataFrame remove Empty Cells | 29,314,033 | 5 | 2015-03-28T05:30:48Z | 29,314,880 | 9 | 2015-03-28T07:46:29Z | [
"python",
"pandas"
] | I have a pd.DataFrame that was created by parsing some excel spreadsheets. One of its columns has empty cells. For example, below is the output for the frequency of that column; 32320 records have missing values for Tenant.
```
In [67]: value_counts(Tenant,normalize=False)
Out[67]:
32320
Thunderhead 8170
Big Data Others 5700
Cloud Cruiser 5700
Partnerpedia 5700
Comcast 5700
SDP 5700
Agora 5700
dtype: int64
```
I am trying to drop rows where Tenant is missing; however, the isnull option does not recognize the missing values.
```
In [71]: df['Tenant'].isnull().sum()
Out[71]: 0
```
The column has data type "Object". What is happening in this case? How can I drop records where Tenant is missing? | Pandas will recognise a value as null if it is a `np.nan` object, which will print as `NaN` in the DataFrame. Your missing values are probably empty strings, which Pandas does not recognise as null. To rectify this, you can convert the empty strings (or whatever is in your empty cells) to `np.nan` objects using `replace()`, and then call `dropna()` on your DataFrame to delete rows with null tenants.
To demonstrate, I create a DataFrame with some random values and some empty strings in a `Tenants` column:
```
>>> import pandas as pd
>>> import numpy as np
>>>
>>> df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))
>>> df['Tenant'] = np.random.choice(['Babar', 'Rataxes', ''], 10)
>>> print df
A B Tenant
0 -0.588412 -1.179306 Babar
1 -0.008562 0.725239
2 0.282146 0.421721 Rataxes
3 0.627611 -0.661126 Babar
4 0.805304 -0.834214
5 -0.514568 1.890647 Babar
6 -1.188436 0.294792 Rataxes
7 1.471766 -0.267807 Babar
8 -1.730745 1.358165 Rataxes
9 0.066946 0.375640
```
Now I replace any empty strings in the `Tenants` column with `np.nan` objects, like so:
```
>>> df['Tenant'].replace('', np.nan, inplace=True)
>>> print df
A B Tenant
0 -0.588412 -1.179306 Babar
1 -0.008562 0.725239 NaN
2 0.282146 0.421721 Rataxes
3 0.627611 -0.661126 Babar
4 0.805304 -0.834214 NaN
5 -0.514568 1.890647 Babar
6 -1.188436 0.294792 Rataxes
7 1.471766 -0.267807 Babar
8 -1.730745 1.358165 Rataxes
9 0.066946 0.375640 NaN
```
Now I can drop the null values:
```
>>> df.dropna(subset=['Tenant'], inplace=True)
>>> print df
A B Tenant
0 -0.588412 -1.179306 Babar
2 0.282146 0.421721 Rataxes
3 0.627611 -0.661126 Babar
5 -0.514568 1.890647 Babar
6 -1.188436 0.294792 Rataxes
7 1.471766 -0.267807 Babar
8 -1.730745 1.358165 Rataxes
``` |
Python Pandas DataFrame remove Empty Cells | 29,314,033 | 5 | 2015-03-28T05:30:48Z | 29,319,460 | 8 | 2015-03-28T16:19:17Z | [
"python",
"pandas"
] | I have a pd.DataFrame that was created by parsing some excel spreadsheets. One of its columns has empty cells. For example, below is the output for the frequency of that column; 32320 records have missing values for Tenant.
```
In [67]: value_counts(Tenant,normalize=False)
Out[67]:
32320
Thunderhead 8170
Big Data Others 5700
Cloud Cruiser 5700
Partnerpedia 5700
Comcast 5700
SDP 5700
Agora 5700
dtype: int64
```
I am trying to drop rows where Tenant is missing; however, the isnull option does not recognize the missing values.
```
In [71]: df['Tenant'].isnull().sum()
Out[71]: 0
```
The column has data type "Object". What is happening in this case? How can I drop records where Tenant is missing? | value\_counts omits NaN by default, so you're most likely dealing with "".
So you can just filter them out like
```
filter = df["Tenant"] != ""
dfNew = df[filter]
``` |
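If the 'empty' cells might also contain stray whitespace (common after spreadsheet parsing), stripping before the comparison catches those as well; a sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"Tenant": ["Agora", "", "  ", "SDP"],
                   "count": [1, 2, 3, 4]})

# Strip whitespace so "  " is filtered out along with ""
df_new = df[df["Tenant"].str.strip() != ""]
print(list(df_new["Tenant"]))  # ['Agora', 'SDP']
```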
Itertools product without repeating duplicates | 29,314,372 | 3 | 2015-03-28T06:27:55Z | 29,314,389 | 9 | 2015-03-28T06:30:08Z | [
"python",
"python-3.x",
"itertools"
] | ```
from itertools import product
teams = ['india', 'australia', 'new zealand']
word_and = ['and']
tmp = '%s %s %s'
items = [teams, word_and, teams]
print(list(tmp % a for a in list(product(*items))))
```
prints:
```
['india and india',
'india and australia',
'india and new zealand',
'australia and india',
'australia and australia',
'australia and new zealand',
'new zealand and india',
'new zealand and australia',
'new zealand and new zealand']
```
How to:
1. avoid the same name repeating in a single sentence (india and india)
2. generate only one combination (either india and australia or australia and india)
<http://pythonfiddle.com/product-without-matching-duplicates> | You should use [`itertools.combinations`](https://docs.python.org/3/library/itertools.html#itertools.combinations) like this
```
>>> from itertools import combinations
>>> teams = ['india', 'australia', 'new zealand']
>>> [" and ".join(items) for items in combinations(teams, r=2)]
['india and australia', 'india and new zealand', 'australia and new zealand']
```
But for this simple case, you can run two loops, like this
```
>>> ["%s and %s" % (t1, t2) for i, t1 in enumerate(teams) for t2 in teams[i + 1:]]
['india and australia', 'india and new zealand', 'australia and new zealand']
``` |
Is it possible to get color gradients under curve in matplotlb? | 29,321,835 | 8 | 2015-03-28T19:52:42Z | 29,331,211 | 11 | 2015-03-29T15:46:24Z | [
"python",
"matplotlib"
] | I happen to see beautiful graphs on this [page](http://hackaday.com/2015/01/28/raspberry-pi-learns-how-to-control-a-combustion-engine/#more-145076) which is shown below:

Is it possible to get such color gradients in matplotlib? | There have been a handful of previous answers to similar questions (e.g. <http://stackoverflow.com/a/22081678/325565>), but they recommend a sub-optimal approach.
Most of the previous answers recommend plotting a white polygon over a `pcolormesh` fill. This is less than ideal for two reasons:
1. The background of the axes can't be transparent, as there's a filled polygon overlying it
2. `pcolormesh` is fairly slow to draw and isn't smoothly interpolated.
It's a touch more work, but there's a method that draws much faster and gives a better visual result: Set the clip path of an image plotted with `imshow`.
As an example:
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from matplotlib.patches import Polygon
np.random.seed(1977)
def main():
    for _ in range(5):
        gradient_fill(*generate_data(100))
    plt.show()

def generate_data(num):
    x = np.linspace(0, 100, num)
    y = np.random.normal(0, 1, num).cumsum()
    return x, y

def gradient_fill(x, y, fill_color=None, ax=None, **kwargs):
    """
    Plot a line with a linear alpha gradient filled beneath it.

    Parameters
    ----------
    x, y : array-like
        The data values of the line.
    fill_color : a matplotlib color specifier (string, tuple) or None
        The color for the fill. If None, the color of the line will be used.
    ax : a matplotlib Axes instance
        The axes to plot on. If None, the current pyplot axes will be used.

    Additional arguments are passed on to matplotlib's ``plot`` function.

    Returns
    -------
    line : a Line2D instance
        The line plotted.
    im : an AxesImage instance
        The transparent gradient clipped to just the area beneath the curve.
    """
    if ax is None:
        ax = plt.gca()

    line, = ax.plot(x, y, **kwargs)
    if fill_color is None:
        fill_color = line.get_color()

    zorder = line.get_zorder()
    alpha = line.get_alpha()
    alpha = 1.0 if alpha is None else alpha

    z = np.empty((100, 1, 4), dtype=float)
    rgb = mcolors.colorConverter.to_rgb(fill_color)
    z[:,:,:3] = rgb
    z[:,:,-1] = np.linspace(0, alpha, 100)[:,None]

    xmin, xmax, ymin, ymax = x.min(), x.max(), y.min(), y.max()
    im = ax.imshow(z, aspect='auto', extent=[xmin, xmax, ymin, ymax],
                   origin='lower', zorder=zorder)

    xy = np.column_stack([x, y])
    xy = np.vstack([[xmin, ymin], xy, [xmax, ymin], [xmin, ymin]])
    clip_path = Polygon(xy, facecolor='none', edgecolor='none', closed=True)
    ax.add_patch(clip_path)
    im.set_clip_path(clip_path)

    ax.autoscale(True)
    return line, im
main()
```
 |
Is it possible to get color gradients under curve in matplotlb? | 29,321,835 | 8 | 2015-03-28T19:52:42Z | 29,347,731 | 10 | 2015-03-30T13:39:07Z | [
"python",
"matplotlib"
] | I happen to see beautiful graphs on this [page](http://hackaday.com/2015/01/28/raspberry-pi-learns-how-to-control-a-combustion-engine/#more-145076) which is shown below:

Is it possible to get such color gradients in matplotlib? | *Please note [Joe Kington](http://stackoverflow.com/a/29331211/190597) deserves the lion's share of the credit here; my sole contribution is `zfunc`.*
His method opens the door to many gradient/blur/drop-shadow
effects. For example, to make the lines have an evenly blurred underside, you
could use PIL to build an alpha layer which is 1 near the line and 0 near the bottom edge.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import matplotlib.patches as patches
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFilter
np.random.seed(1977)
def demo_blur_underside():
    for _ in range(5):
        # gradient_fill(*generate_data(100), zfunc=None)  # original
        gradient_fill(*generate_data(100), zfunc=zfunc)
    plt.show()

def generate_data(num):
    x = np.linspace(0, 100, num)
    y = np.random.normal(0, 1, num).cumsum()
    return x, y

def zfunc(x, y, fill_color='k', alpha=1.0):
    scale = 10
    x = (x*scale).astype(int)
    y = (y*scale).astype(int)
    xmin, xmax, ymin, ymax = x.min(), x.max(), y.min(), y.max()
    w, h = xmax-xmin, ymax-ymin
    z = np.empty((h, w, 4), dtype=float)
    rgb = mcolors.colorConverter.to_rgb(fill_color)
    z[:,:,:3] = rgb

    # Build a z-alpha array which is 1 near the line and 0 at the bottom.
    img = Image.new('L', (w, h), 0)
    draw = ImageDraw.Draw(img)
    xy = (np.column_stack([x, y]))
    xy -= xmin, ymin
    # Draw a blurred line using PIL
    draw.line(map(tuple, xy.tolist()), fill=255, width=15)
    img = img.filter(ImageFilter.GaussianBlur(radius=100))
    # Convert the PIL image to an array
    zalpha = np.asarray(img).astype(float)
    zalpha *= alpha/zalpha.max()
    # make the alphas melt to zero at the bottom
    n = zalpha.shape[0] // 4
    zalpha[:n] *= np.linspace(0, 1, n)[:, None]
    z[:,:,-1] = zalpha
    return z

def gradient_fill(x, y, fill_color=None, ax=None, zfunc=None, **kwargs):
    if ax is None:
        ax = plt.gca()
    line, = ax.plot(x, y, **kwargs)
    if fill_color is None:
        fill_color = line.get_color()
    zorder = line.get_zorder()
    alpha = line.get_alpha()
    alpha = 1.0 if alpha is None else alpha
    if zfunc is None:
        h, w = 100, 1
        z = np.empty((h, w, 4), dtype=float)
        rgb = mcolors.colorConverter.to_rgb(fill_color)
        z[:,:,:3] = rgb
        z[:,:,-1] = np.linspace(0, alpha, h)[:,None]
    else:
        z = zfunc(x, y, fill_color=fill_color, alpha=alpha)
    xmin, xmax, ymin, ymax = x.min(), x.max(), y.min(), y.max()
    im = ax.imshow(z, aspect='auto', extent=[xmin, xmax, ymin, ymax],
                   origin='lower', zorder=zorder)
    xy = np.column_stack([x, y])
    xy = np.vstack([[xmin, ymin], xy, [xmax, ymin], [xmin, ymin]])
    clip_path = patches.Polygon(xy, facecolor='none', edgecolor='none', closed=True)
    ax.add_patch(clip_path)
    im.set_clip_path(clip_path)
    ax.autoscale(True)
    return line, im
demo_blur_underside()
```
yields
 |
NumPy random seed produces different random numbers | 29,324,735 | 5 | 2015-03-29T01:45:11Z | 29,324,779 | 9 | 2015-03-29T01:54:24Z | [
"python",
"numpy",
"random"
] | I run the following code:
```
np.random.RandomState(3)
idx1 = np.random.choice(range(20),(5,))
idx2 = np.random.choice(range(20),(5,))
np.random.RandomState(3)
idx1S = np.random.choice(range(20),(5,))
idx2S = np.random.choice(range(20),(5,))
```
The output I get is the following:
```
idx1: array([ 2, 19, 19, 9, 4])
idx1S: array([ 2, 19, 19, 9, 4])
idx2: array([ 9, 2, 7, 10, 6])
idx2S: array([ 5, 16, 9, 11, 15])
```
idx1 and idx1S match, but idx2 and idx2S do not match. I expect that once I seed the random number generator and repeat the same sequence of commands - it should produce the same sequence of random numbers. Is this not true? Or is there something else that I am missing? | You're confusing `RandomState` with `seed`. Your first line constructs an object which you can then use as your random source. For example, we make
```
>>> rnd = np.random.RandomState(3)
>>> rnd
<mtrand.RandomState object at 0xb17e18cc>
```
and then
```
>>> rnd.choice(range(20), (5,))
array([10, 3, 8, 0, 19])
>>> rnd.choice(range(20), (5,))
array([10, 11, 9, 10, 6])
>>> rnd = np.random.RandomState(3)
>>> rnd.choice(range(20), (5,))
array([10, 3, 8, 0, 19])
>>> rnd.choice(range(20), (5,))
array([10, 11, 9, 10, 6])
```
[I don't understand why your `idx1` and `idx1S` agree-- but you didn't actually post a self-contained transcript, so I suspect user error.]
If you want to affect the *global* state, use `seed`:
```
>>> np.random.seed(3)
>>> np.random.choice(range(20),(5,))
array([10, 3, 8, 0, 19])
>>> np.random.choice(range(20),(5,))
array([10, 11, 9, 10, 6])
>>> np.random.seed(3)
>>> np.random.choice(range(20),(5,))
array([10, 3, 8, 0, 19])
>>> np.random.choice(range(20),(5,))
array([10, 11, 9, 10, 6])
```
Using a specific `RandomState` object may seem less convenient at first, but it makes a lot of things easier when you want different entropy streams you can tune. |
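For instance, two seeded streams stay reproducible and independent of each other (a minimal sketch):

```python
import numpy as np

rnd_a = np.random.RandomState(3)
rnd_b = np.random.RandomState(3)

# Each stream starts from the same seed, so their draws match,
# and drawing from one never disturbs the other.
a1 = rnd_a.choice(range(20), (5,))
b1 = rnd_b.choice(range(20), (5,))
print((a1 == b1).all())  # True
```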
Global error handler for any exception | 29,332,056 | 6 | 2015-03-29T16:58:14Z | 29,332,131 | 10 | 2015-03-29T17:05:16Z | [
"python",
"error-handling",
"flask",
"http-error"
] | Is there a way to add a global catch-all error handler in which I can change the response to a generic JSON response?
I can't use the `got_request_exception` signal, as it is not allowed to modify the response (<http://flask.pocoo.org/docs/0.10/signals/>).
> In contrast all signal handlers are executed in undefined order and do not modify any data.
I would prefer to not wrap the `app.handle_exception` function as that feels like internal API. I guess I'm after something like:
```
@app.errorhandler()
def handle_global_error(e):
    return "Global error"
```
Note the `errorhandler` does not take any parameters, meaning it would catch all exceptions/status codes which does not have a specific error handler attached to them. I know I can use `errorhandler(500)` or `errorhandler(Exception)` to catch exceptions, but if I do `abort(409)` for example, it will still return a HTML response. | You can use `@app.errorhandler(Exception)`:
Demo (the HTTPException check ensures that the status code is preserved):
```
from flask import Flask, abort, jsonify
from werkzeug.exceptions import HTTPException
app = Flask('test')
@app.errorhandler(Exception)
def handle_error(e):
    code = 500
    if isinstance(e, HTTPException):
        code = e.code
    return jsonify(error=str(e)), code

@app.route('/')
def index():
    abort(409)
app.run(port=1234)
```
Output:
```
$ http get http://127.0.0.1:1234/
HTTP/1.0 409 CONFLICT
Content-Length: 31
Content-Type: application/json
Date: Sun, 29 Mar 2015 17:06:54 GMT
Server: Werkzeug/0.10.1 Python/3.4.3
{
"error": "409: Conflict"
}
$ http get http://127.0.0.1:1234/notfound
HTTP/1.0 404 NOT FOUND
Content-Length: 32
Content-Type: application/json
Date: Sun, 29 Mar 2015 17:06:58 GMT
Server: Werkzeug/0.10.1 Python/3.4.3
{
"error": "404: Not Found"
}
```
For older Flask versions (<=0.10.1, i.e. any non-git/master version at the moment), add the following code to your application to register the HTTP errors explicitly:
```
from werkzeug import HTTP_STATUS_CODES
for code in HTTP_STATUS_CODES:
    app.register_error_handler(code, handle_error)
``` |
Python class scoping rules | 29,333,359 | 26 | 2015-03-29T18:48:47Z | 29,333,740 | 7 | 2015-03-29T19:25:50Z | [
"python",
"scoping"
] | **EDIT:** Looks like this is a very old "bug" or, actually, feature. See, e.g., [this mail](https://mail.python.org/pipermail/python-dev/2002-April/023428.html)
I am trying to understand the Python scoping rules. More precisely, I thought that I understood them, but then I found this code [here](http://lackingrhoticity.blogspot.ch/2008/08/4-python-variable-binding-oddities.html):
```
x = "xtop"
y = "ytop"
def func():
    x = "xlocal"
    y = "ylocal"
    class C:
        print(x)
        print(y)
        y = 1
func()
```
In Python 3.4 the output is:
```
xlocal
ytop
```
If I replace the inner class by a function then it reasonably gives `UnboundLocalError`. Could you explain me why it behaves this strange way with classes and what is the reason for such choice of scoping rules? | First focus on the case of a closure -- a function within a function:
```
x = "xtop"
y = "ytop"
def func():
    x = "xlocal"
    y = "ylocal"
    def inner():
        # global y
        print(x)
        print(y)
        y = 'inner y'
        print(y)
    inner()
```
Note the commented out `global` in `inner`. If you run this, it replicates the `UnboundLocalError` you got. Why?
Run dis.dis on it:
```
>>> import dis
>>> dis.dis(func)
6 0 LOAD_CONST 1 ('xlocal')
3 STORE_DEREF 0 (x)
7 6 LOAD_CONST 2 ('ylocal')
9 STORE_FAST 0 (y)
8 12 LOAD_CLOSURE 0 (x)
15 BUILD_TUPLE 1
18 LOAD_CONST 3 (<code object inner at 0x101500270, file "Untitled 3.py", line 8>)
21 LOAD_CONST 4 ('func.<locals>.inner')
24 MAKE_CLOSURE 0
27 STORE_FAST 1 (inner)
14 30 LOAD_FAST 1 (inner)
33 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
36 POP_TOP
37 LOAD_CONST 0 (None)
40 RETURN_VALUE
```
Note the different access mode of `x` vs `y` inside of `func`. The use of `y='inner y'` inside of `inner` is what creates the `UnboundLocalError`.
Now uncomment `global y` inside of `inner`. Now you have unambiguously made `y` refer to the top-level global version, until it is reassigned by `y='inner y'`.
With `global` uncommented, prints:
```
xlocal
ytop
inner y
```
You *can* get a more sensible result with:
```
x = "xtop"
y = "ytop"
def func():
    global y, x
    print(x,y)
    x = "xlocal"
    y = "ylocal"
    def inner():
        global y
        print(x,y)
        y = 'inner y'
        print(x,y)
    inner()

func()
```
Prints:
```
xtop ytop
xlocal ylocal
xlocal inner y
```
---
The analysis of the class case is complicated by instance vs. class variables, and by the fact that a naked class body (with no instance) is executed when the class statement runs.
The bottom line is the same: If you reference a name outside the local namespace and then assign to the same name locally you get a surprising result.
The 'fix' is the same: use the global keyword:
```
x = "xtop"
y = "ytop"
def func():
    global x, y
    x = "xlocal"
    y = "ylocal"
    class Inner:
        print(x, y)
        y = 'Inner_y'
        print(x, y)

func()
```
Prints:
```
xlocal ylocal
xlocal Inner_y
```
You can read more about Python 3 scope rules in [PEP 3104](https://www.python.org/dev/peps/pep-3104/) |
Python class scoping rules | 29,333,359 | 26 | 2015-03-29T18:48:47Z | 29,334,539 | 12 | 2015-03-29T20:35:53Z | [
"python",
"scoping"
] | **EDIT:** Looks like this is a very old "bug" or, actually, feature. See, e.g., [this mail](https://mail.python.org/pipermail/python-dev/2002-April/023428.html)
I am trying to understand the Python scoping rules. More precisely, I thought that I understood them, but then I found this code [here](http://lackingrhoticity.blogspot.ch/2008/08/4-python-variable-binding-oddities.html):
```
x = "xtop"
y = "ytop"
def func():
    x = "xlocal"
    y = "ylocal"
    class C:
        print(x)
        print(y)
        y = 1
func()
```
In Python 3.4 the output is:
```
xlocal
ytop
```
If I replace the inner class by a function then it reasonably gives `UnboundLocalError`. Could you explain me why it behaves this strange way with classes and what is the reason for such choice of scoping rules? | **TL;DR**: This behaviour has existed since Python 2.1 [PEP 227: Nested Scopes](https://docs.python.org/3/whatsnew/2.1.html#pep-227-nested-scopes), and was known back then. If a name is assigned to within a class body (like `y`), then it is assumed to be a local/global variable; if it is not assigned to (`x`), then it also can potentially point to a closure cell. The lexical variables do not show up as local/global names to the class body.
---
On Python 3.4, `dis.dis(func)` shows the following:
```
>>> dis.dis(func)
4 0 LOAD_CONST 1 ('xlocal')
3 STORE_DEREF 0 (x)
5 6 LOAD_CONST 2 ('ylocal')
9 STORE_FAST 0 (y)
6 12 LOAD_BUILD_CLASS
13 LOAD_CLOSURE 0 (x)
16 BUILD_TUPLE 1
19 LOAD_CONST 3 (<code object C at 0x7f083c9bbf60, file "test.py", line 6>)
22 LOAD_CONST 4 ('C')
25 MAKE_CLOSURE 0
28 LOAD_CONST 4 ('C')
31 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
34 STORE_FAST 1 (C)
37 LOAD_CONST 0 (None)
40 RETURN_VALUE
```
The `LOAD_BUILD_CLASS` loads the `builtins.__build_class__` on the stack; this is called with arguments `__build_class__(func, name)`; where `func` is the class body, and `name` is `'C'`. The class body is the constant #3 for the function `func`:
```
>>> dis.dis(func.__code__.co_consts[3])
6 0 LOAD_NAME 0 (__name__)
3 STORE_NAME 1 (__module__)
6 LOAD_CONST 0 ('func.<locals>.C')
9 STORE_NAME 2 (__qualname__)
7 12 LOAD_NAME 3 (print)
15 LOAD_CLASSDEREF 0 (x)
18 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
21 POP_TOP
8 22 LOAD_NAME 3 (print)
25 LOAD_NAME 4 (y)
28 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
31 POP_TOP
9 32 LOAD_CONST 1 (1)
35 STORE_NAME 4 (y)
38 LOAD_CONST 2 (None)
41 RETURN_VALUE
```
Within the class body, `x` is accessed with `LOAD_CLASSDEREF` (15) while `y` is loaded with `LOAD_NAME` (25). The `LOAD_CLASSDEREF` is a Python 3.4+ opcode for loading values from closure cells specifically within class bodies (in previous versions, the generic `LOAD_DEREF` was used); the `LOAD_NAME` is for loading values from *locals* and then *globals*. However, closure cells show up neither as locals nor globals.
Now, because the name `y` is stored to within the class body (35), it is consistently treated not as a closure cell but as a local/global name.
The closure cells do not show up as local variables to the class body.
This behaviour has been true [ever since implementing PEP 227 - nested scopes](https://mail.python.org/pipermail/python-dev/2002-April/023428.html). And back then BDFL stated that this should not be fixed - and thus it has been for these 13+ years.
---
The only change since PEP 227 is the addition of `nonlocal` in Python 3; if one uses it within the class body, the class body can set the values of the cells within the containing scope:
```
x = "xtop"
y = "ytop"
def func():
    x = "xlocal"
    y = "ylocal"
    class C:
        nonlocal y # y here now refers to the outer variable
        print(x)
        print(y)
        y = 1
        print(y)
    print(C.y)
func()
```
The output now is
```
xlocal
ylocal
1
Traceback (most recent call last):
  File "test.py", line 15, in <module>
    func()
  File "test.py", line 13, in func
    print(C.y)
AttributeError: type object 'C' has no attribute 'y'
```
That is, `print(y)` read the value of the cell `y` of the containing scope, and `y = 1` set the value in that cell; in this case, no attribute was created for the class `C`. |
pandas read csv file line by line | 29,334,463 | 6 | 2015-03-29T20:29:18Z | 29,334,672 | 14 | 2015-03-29T20:49:27Z | [
"python",
"pandas"
] | I have a very big csv file so that I can not read them all into the memory. I only want to read and process a few lines in it. So I am seeking a function in Pandas which could handle this task, which the basic python can handle this well:
```
with open('abc.csv') as f:
    line = f.readline()
    # pass until it reaches a particular line number....
```
However, if I do this in pandas, I always read the first line:
```
datainput1 = pd.read_csv('matrix.txt',sep=',', header = None, nrows = 1 )
datainput2 = pd.read_csv('matrix.txt',sep=',', header = None, nrows = 1 )
```
I am looking for some easier way to handle this task in pandas. For example, if I want to read rows from 1000 to 2000. How can I do this quickly?
The reason I want to use pandas is that I want to read the data into a data frame. | Use [`chunksize`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv):
```
for df in pd.read_csv('matrix.txt',sep=',', header = None, chunksize=1):
    # do something
```
To answer your second part do this:
```
df = pd.read_csv('matrix.txt',sep=',', header = None, skiprows=1000, chunksize=1000)
```
This will skip the first 1000 rows and then only read the next 1000 rows, giving you rows 1000-2000. It's unclear whether you need the end points included or not, but you can adjust the numbers to get the exact slice you want.
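If you only need that single slice (and not an iterator), `nrows` can cap the read directly; a sketch using an in-memory stand-in for the file:

```python
import io
import pandas as pd

# Stand-in for a large headerless CSV on disk
csv_data = io.StringIO("\n".join("%d,%d" % (i, i * 2) for i in range(5000)))

# Skip the first 1000 rows, then read only the next 1000 (rows 1000-1999)
df = pd.read_csv(csv_data, sep=',', header=None, skiprows=1000, nrows=1000)
print(len(df), df.iloc[0, 0], df.iloc[-1, 0])  # 1000 1000 1999
```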
Eval scope in Python 2 vs. 3 | 29,336,616 | 24 | 2015-03-30T00:22:53Z | 29,336,695 | 24 | 2015-03-30T00:34:38Z | [
"python",
"python-3.x",
"compatibility",
"python-3.4"
] | I came across bizarre eval behavior in Python 3 - local variables aren't picked up when eval is called in a list comprehension.
```
def apply_op():
    x, y, z = [0.5, 0.25, 0.75]
    op = "x,y,z"
    return [eval(o) for o in op.split(",")]
print(apply_op())
```
It errors in Python 3:
```
$ python --version
Python 3.4.3
$ python eval.py
Traceback (most recent call last):
  File "eval.py", line 7, in <module>
    print(apply_op())
  File "eval.py", line 5, in apply_op
    return [eval(o) % 1 for o in op.split(",")]
  File "eval.py", line 5, in <listcomp>
    return [eval(o) % 1 for o in op.split(",")]
  File "<string>", line 1, in <module>
NameError: name 'x' is not defined
```
And it works fine in Python 2:
```
$ python --version
Python 2.7.8
$ python eval.py
[0.5, 0.25, 0.75]
```
Moving it outside of the list comprehension removes the problem.
```
def apply_op():
    x, y, z = [0.5, 0.25, 0.75]
    return [eval("x"), eval("y"), eval("z")]
```
Is this intended behavior, or is it a bug? | There is a *closed* issue in the bug tracker for this: [Issue 5242](http://bugs.python.org/issue5242).
The resolution for this bug is **won't fix**.
Some comments from the Issue read:
> This is expected, and won't easily fix. The reason is that list
> comprehensions in 3.x use a function namespace "under the hood" (in 2.x,
> they were implemented like a simple for loop). Because inner functions
> need to know what names to get from what enclosing namespace, the names
> referenced in eval() can't come from enclosing functions. They must
> either be locals or globals.
>
> eval() is probably already an hack, there's no need to add another hack
> to make it work. It's better to just get rid of eval() and find a better
> way to do what you want to do. |
Processing specific files in a directory in python | 29,339,662 | 2 | 2015-03-30T06:29:04Z | 29,339,715 | 8 | 2015-03-30T06:32:14Z | [
"python",
"json"
] | I wrote a small python program which processes all the files in a directory. I want to restrict that to include only JSON files in that directory. For example, the line `fname in fileList:` in the code snippet below should only enumerate files with the extension \*.json
```
#Set the directory you want to start from
rootDir = '/home/jas_parts'
for dirName, subdirList, fileList in os.walk(rootDir):
    print('Found directory: %s' % dirName)
    for fname in fileList:
        print('\t%s' % fname)
        fname = 'jas_parts/' + fname
        with open(fname, 'r+') as f:
            json_data = json.load(f)
            event = json_data['Total']
            print(event)
``` | Since your file name is a string, you can use the [**`str.endswith`**](https://docs.python.org/2/library/stdtypes.html#str.endswith) method to check whether it is a `json` file.
```
if fname.endswith('.json'):
    # do_something()
``` |
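Applied to a directory walk like the one in the question, the filter might look like this (a self-contained sketch that builds a throwaway directory; the `Total` key follows the question's JSON layout):

```python
import json
import os
import tempfile

# Build a tiny throwaway tree to walk (stand-in for rootDir)
root_dir = tempfile.mkdtemp()
with open(os.path.join(root_dir, 'part.json'), 'w') as f:
    json.dump({'Total': 42}, f)
with open(os.path.join(root_dir, 'notes.txt'), 'w') as f:
    f.write('ignore me')

totals = []
for dir_name, subdir_list, file_list in os.walk(root_dir):
    for fname in file_list:
        if not fname.endswith('.json'):  # skip everything but *.json
            continue
        with open(os.path.join(dir_name, fname)) as f:
            totals.append(json.load(f)['Total'])

print(totals)  # [42]
```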
Two different values for same variable "args" | 29,342,420 | 2 | 2015-03-30T09:17:16Z | 29,342,509 | 12 | 2015-03-30T09:21:03Z | [
"python",
"python-2.7",
"pdb"
] | I am invoking a method from a python script which has one of its variables named args.
Once I step into the method, when I try to see the value of the variable args, "print args" and just executing 'args' display two different values.
Can anyone please let me know what's the difference between these two commands.
I expected both commands to display the same value.
```
(Pdb) print args
<lib.framework.testmanager.RunArgs object at 0xb26acac>
(Pdb) args
args = <lib.framework.testmanager.RunArgs object at 0xb26acac>
u = <upgradelib.UpgradeManager object at 0x946cf8c>
spec = {'excludeHosts': None, 'evacuateAllData': True, 'WaitTime': None, 'IssueType': 'Host Disconnect', 'performObjectUpgrade': True, 'downgradeFormat': False}
result = True
``` | `args` is a PDB debugger command. Use `!args` to show the actual variable.
See the [*Debugger Commands* section](https://docs.python.org/2/library/pdb.html#debugger-commands):
> *a(rgs)*
> Print the argument list of the current function.
and
> *[!]statement*
> Execute the (one-line) statement in the context of the current stack frame. **The exclamation point can be omitted unless the first word of the statement resembles a debugger command.**
(Emphasis mine).
In your `args` output you can see the `args` argument value on the first line.
Personally, I find the `(a)rgs` command a little pointless; it prints all values using `str()` instead of `repr()`; this makes the difference between objects with similar `__str__` output values invisible (such as `str` vs. `unicode`, or a BeautifulSoup Element vs. a string with HTML, etc.). |
django difference between clear() and delete() | 29,344,450 | 4 | 2015-03-30T11:00:02Z | 29,344,522 | 7 | 2015-03-30T11:03:00Z | [
"python",
"django"
] | I'm using django for a while now and recently bumped into this :
```
user.groups.clear()
```
usually what I'd do is this:
```
user.groups.all().delete()
```
what's the difference? | `user.groups.all().delete()` will *delete* the related group objects, while `user.groups.clear()` will only disassociate the relation:
<https://docs.djangoproject.com/en/1.7/ref/models/relations/#django.db.models.fields.related.RelatedManager.clear>
> Removes all objects from the related object set:
> Note this doesn't delete the related objects – it just disassociates them.
Note that deleting the related objects may have the side effect that other users belonging to the same group may also be deleted (by cascade), depending on the ForeignKey rules specified by [`on_delete`](https://docs.djangoproject.com/en/1.7/ref/models/fields/#django.db.models.ForeignKey.on_delete). |
What's the pythonic way to parse such a URI 'foo://user:pass@host:port' into proper variables? | 29,346,878 | 2 | 2015-03-30T12:58:13Z | 29,346,928 | 7 | 2015-03-30T13:01:21Z | [
"python"
] | Only the `host` part is not optional; that is to say, the URI may take the following forms:
```
1. foo://user:pass@host:port
2. foo://host:port
3. user@host
4. host
```
and so on. If we have five variables to hold the values of the parts of such a URI, is there a Pythonic way to assign these values to the proper variables when they exist in the URI? | You'd use the [`urlparse`](https://docs.python.org/2/library/urlparse.html) (Python 2) / [`urllib.parse`](https://docs.python.org/3/library/urllib.parse.html) (Python 3) module.
The [`urlparse()` function](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlparse) can handle all forms you mention, but note that without a scheme, the `user@host` and `host` portions are seen as a *path*:
```
>>> from urllib.parse import urlparse
>>> urlparse('foo://user:pass@host:port')
ParseResult(scheme='foo', netloc='user:pass@host:port', path='', params='', query='', fragment='')
>>> urlparse('foo://host:port')
ParseResult(scheme='foo', netloc='host:port', path='', params='', query='', fragment='')
>>> urlparse('user@host')
ParseResult(scheme='', netloc='', path='user@host', params='', query='', fragment='')
>>> urlparse('host')
ParseResult(scheme='', netloc='', path='host', params='', query='', fragment='')
```
That's easily worked around:
```
host = result.hostname or result.path.rpartition('@')[-1]
``` |
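Putting it together, here is one possible sketch that extracts all five parts from the forms listed in the question (the `parse_uri` helper and the placeholder-scheme trick are my own, not part of the stdlib):

```python
from urllib.parse import urlparse

def parse_uri(uri):
    """Split 'scheme://user:pass@host:port' style URIs; only host is required."""
    has_scheme = "://" in uri
    # Prepend a placeholder scheme so 'user@host' / 'host' land in netloc,
    # not in path.
    result = urlparse(uri if has_scheme else "placeholder://" + uri)
    scheme = result.scheme if has_scheme else None
    return scheme, result.username, result.password, result.hostname, result.port

print(parse_uri("foo://user:pass@host:1234"))  # ('foo', 'user', 'pass', 'host', 1234)
print(parse_uri("user@host"))                  # (None, 'user', None, 'host', None)
```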
Why is it a syntax error to have an object attribute named "del", "return" etc? | 29,346,945 | 3 | 2015-03-30T13:01:48Z | 29,346,977 | 8 | 2015-03-30T13:03:59Z | [
"python"
] | I understand that one shouldn't be able to replace the behaviour of the "del" ("return" etc) keyword, but I do not understand why it is not possible to do this:
```
myobj.del(mystr)
```
What could the parser confuse it with? Is there a way to allow it?
Of course, I could use a different name, but I want to have a little custom wrapper around the AWS tool s3cmd and do things like `s3cmd.del("s3://some/bucket/")` and have the "del" handled by a `__getattr__` in my s3cmd class... so the name "del" is something I'd be really happy to manage to use. | That is because such words are keywords. Keywords in Python are reserved words that cannot be used as ordinary identifiers.
The full list is available via the [`keyword` module](https://docs.python.org/3/library/keyword.html):
```
>>> import keyword
>>> import pprint
>>> pprint.pprint(keyword.kwlist)
['and',
'as',
'assert',
'break',
'class',
'continue',
'def',
'del',
'elif',
'else',
'except',
'exec',
'finally',
'for',
'from',
'global',
'if',
'import',
'in',
'is',
'lambda',
'not',
'or',
'pass',
'print',
'raise',
'return',
'try',
'while',
'with',
'yield']
```
The reason as to why is beautifully mentioned in [Konrad's comment](http://stackoverflow.com/questions/29346945/why-is-it-a-syntax-error-to-have-an-object-attribute-named-del-return-etc/29346977#comment46879825_29346945)
> There's nothing magical about keywords. However, it makes parsers **vastly** easier to write when disallowing keywords for identifiers. In particular, it makes it easier to provide human-readable error messages for parse errors, because the parser is able to infer more context about the error. |
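As a practical workaround for the asker's `s3cmd.del(...)` use case, attribute access via `getattr()` sidesteps the parser entirely, since the attribute name is just a string at runtime (a toy sketch; the `S3Cmd` class here is made up):

```python
class S3Cmd:
    """Toy stand-in for the wrapper described in the question."""
    def __getattr__(self, name):
        # 'del' arrives here as a plain string, so the keyword rule never applies
        def command(target):
            return "s3cmd %s %s" % (name, target)
        return command

s3 = S3Cmd()
# s3.del(...) is a SyntaxError, but getattr() works fine:
print(getattr(s3, "del")("s3://some/bucket/"))
# s3cmd del s3://some/bucket/
```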
Difference between IOError and OSError? | 29,347,790 | 5 | 2015-03-30T13:42:14Z | 29,347,946 | 7 | 2015-03-30T13:49:37Z | [
"python",
"exception",
"python-2.x"
] | I always get confused about whether a function will raise an IOError or an OSError (or both?). What is the principle behind these exception types, what is the difference between them, and when is which raised?
I initially thought OSError was for things like permission denial, but opening a file without permission raises an IOError. | There is very little difference between the two types. In fact, even the core Python developers agreed that there is no real difference and removed `IOError` in Python 3 (it is now an alias for `OSError`). See [PEP 3151 - Reworking the OS and IO exception hierarchy](https://www.python.org/dev/peps/pep-3151/#confusing-set-of-os-related-exceptions):
> While some of these distinctions can be explained by implementation considerations, they are often not very logical at a higher level. The line separating `OSError` and `IOError`, for example, is often blurry. Consider the following:
>
> ```
> >>> os.remove("fff")
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> OSError: [Errno 2] No such file or directory: 'fff'
> >>> open("fff")
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> IOError: [Errno 2] No such file or directory: 'fff'
> ```
Yes, that's two different exception types with the *exact same error message*.
For your own code, stick to throwing `OSError`. For existing functions, check the documentation (it should detail what you need to catch), but you can safely catch both:
```
try:
# ...
except (IOError, OSError):
# handle error
```
Quoting the PEP again:
> In fact, it is hard to think of any situation where `OSError` should be caught but not `IOError`, or the reverse. |
Why can't I suppress numpy warnings | 29,347,987 | 10 | 2015-03-30T13:51:28Z | 29,348,184 | 12 | 2015-03-30T14:00:38Z | [
"python",
"numpy"
] | I really want to avoid these annoying numpy warnings since I have to deal with a lot of `NaNs`. I know this is usually done with seterr, but for some reason here it does not work:
```
import numpy as np
data = np.random.random(100000).reshape(10, 100, 100) * np.nan
np.seterr(all="ignore")
np.nanmedian(data, axis=[1, 2])
```
It gives me a runtime warning even though I set numpy to ignore all errors...any help?
Edit (this is the warning that is received):
`/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/numpy/lib/nanfunctions.py:612: RuntimeWarning: All-NaN slice encountered warnings.warn("All-NaN slice encountered", RuntimeWarning)`
Thanks :) | Warnings can often be useful and in most cases I wouldn't advise this, but you can always make use of the [`warnings`](https://docs.python.org/2/library/warnings.html) module to ignore all warnings with `filterwarnings`:
```
import warnings

warnings.filterwarnings('ignore')
```
Should you want to suppress uniquely your particular error, you could specify it with:
```
with warnings.catch_warnings():
warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered')
``` |
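For a fully self-contained illustration of the same pattern, here is a sketch using a hand-raised `RuntimeWarning` instead of numpy, so it runs without numpy installed:

```python
import warnings

def noisy():
    # Stand-in for the numpy call that emits "All-NaN slice encountered"
    warnings.warn("All-NaN slice encountered", RuntimeWarning)
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record everything by default
    warnings.filterwarnings("ignore", r"All-NaN (slice|axis) encountered")
    result = noisy()

print(result)       # 42
print(len(caught))  # 0 -- the matching warning was suppressed
```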
Declaring a multi dimensional dictionary in python | 29,348,345 | 5 | 2015-03-30T14:06:50Z | 29,348,412 | 12 | 2015-03-30T14:09:14Z | [
"python",
"dictionary"
] | I need to make a two dimensional dictionary in python. e.g. `new_dic[1][2] = 5`
When I make `new_dic = {}`, and try to insert values, I get a `KeyError`?
```
new_dic[1][2] = 5
KeyError: 1
```
How to do this? | A multi-dimensional dictionary is simply a dictionary where the values are themselves also dictionaries, creating a nested structure:
```
new_dic = {}
new_dic[1] = {}
new_dic[1][2] = 5
```
You'd have to detect that you already created `new_dic[1]` each time, though, to not accidentally wipe that nested object for additional keys under `new_dic[1]`.
You can simplify creating nested dictionaries using various techniques; using [`dict.setdefault()`](https://docs.python.org/2/library/stdtypes.html#dict.setdefault) for example:
```
new_dic.setdefault(1, {})[2] = 5
```
`dict.setdefault()` will only set a key to a default value if the key is still missing, saving you from having to test this each time.
Simpler still is using the [`collections.defaultdict()` type](https://docs.python.org/2/library/collections.html#collections.defaultdict) to create nested dictionaries automatically:
```
from collections import defaultdict
new_dic = defaultdict(dict)
new_dic[1][2] = 5
```
`defaultdict` is just a subclass of the standard `dict` type here; every time you try and access a key that doesn't yet exist in the mapping, a factory function is called to create a new value. Here that's the [`dict()` callable](https://docs.python.org/2/library/stdtypes.html#dict), which produces an empty dictionary when called.
Demo:
```
>>> new_dic_plain = {}
>>> new_dic_plain[1] = {}
>>> new_dic_plain[1][2] = 5
>>> new_dic_plain
{1: {2: 5}}
>>> new_dic_setdefault = {}
>>> new_dic_setdefault.setdefault(1, {})[2] = 5
>>> new_dic_setdefault
{1: {2: 5}}
>>> from collections import defaultdict
>>> new_dic_defaultdict = defaultdict(dict)
>>> new_dic_defaultdict[1][2] = 5
>>> new_dic_defaultdict
defaultdict(<type 'dict'>, {1: {2: 5}})
``` |
How to make a continuous alphabetic list python (from a-z then from aa, ab, ac etc) | 29,351,492 | 5 | 2015-03-30T16:31:54Z | 29,351,603 | 11 | 2015-03-30T16:38:20Z | [
"python",
"excel"
] | I would like to make an alphabetical list for an application, similar to an Excel worksheet.
A user would input the number of cells and I would like to generate the list.
For example a user needs 54 cells. Then I would generate
'a','b','c',...,'z','aa','ab','ac',...,'az', 'ba','bb'
I can generate the list from [[ref]](http://stackoverflow.com/questions/12032035/python-can-make-alphabet-list-like-haskell)
```
from string import ascii_lowercase
L = list(ascii_lowercase)
```
How do I stitch it together?
A similar question for PHP has been asked [here](http://stackoverflow.com/questions/25958518/how-to-list-from-a-to-z-in-php-and-then-on-to-aa-ab-ac-etc). Does someone have the Python equivalent? | Use `itertools.product`.
```
from string import ascii_lowercase
import itertools
def iter_all_strings():
size = 1
while True:
for s in itertools.product(ascii_lowercase, repeat=size):
yield "".join(s)
size +=1
for s in iter_all_strings():
print s
if s == 'bb':
break
```
Result:
```
a
b
c
d
e
...
y
z
aa
ab
ac
...
ay
az
ba
bb
```
This has the added benefit of going well beyond two-letter combinations. If you need a million strings, it will happily give you three and four and five letter strings.
---
Bonus style tip: if you don't like having an explicit `break` inside the bottom loop, you can use `islice` to make the loop terminate on its own:
```
for s in itertools.islice(iter_all_strings(), 54):
print s
``` |
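If you need exactly *n* labels (54 in the question) without iterating and breaking, an index-to-label converter is another option; this is essentially bijective base-26 numbering, the same scheme Excel uses for columns (the helper name `excel_label` is my own):

```python
from string import ascii_lowercase

def excel_label(n):
    """Map a 0-based index to 'a', 'b', ..., 'z', 'aa', 'ab', ... labels."""
    label = ""
    while True:
        n, rem = divmod(n, 26)
        label = ascii_lowercase[rem] + label
        if n == 0:
            return label
        n -= 1  # shift: there is no 'zero digit' in this numbering scheme

print([excel_label(i) for i in (0, 25, 26, 27, 53)])
# ['a', 'z', 'aa', 'ab', 'bb']
```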
Creating a processing queue in Tornado | 29,354,044 | 6 | 2015-03-30T18:57:29Z | 29,354,963 | 10 | 2015-03-30T19:49:02Z | [
"python",
"tornado"
] | I'm using a Tornado web server to queue up items that need to be processed outside of the request/response cycle.
In my simplified example below, every time a request comes in, I add a new string to a list called `queued_items`. I want to create something that will watch that list and process the items as they show up in it.
(In my real code, the items are processed and sent over a TCP socket which may or may not be connected when the web request arrives. I want the web server to keep queuing up items regardless of the socket connection)
I'm trying to keep this code simple and not use external queues/programs like Redis or Beanstalk. It's not going to have very high volume.
What's a good way using Tornado idioms to watch the `client.queued_items` list for new items and process them as they arrive?
```
import time
import tornado.ioloop
import tornado.gen
import tornado.web
class Client():
def __init__(self):
self.queued_items = []
@tornado.gen.coroutine
def watch_queue(self):
# I have no idea what I'm doing
items = yield client.queued_items
# go_do_some_thing_with_items(items)
class IndexHandler(tornado.web.RequestHandler):
def get(self):
client.queued_items.append("%f" % time.time())
self.write("Queued a new item")
if __name__ == "__main__":
client = Client()
# Watch the queue for when new items show up
client.watch_queue()
# Create the web server
application = tornado.web.Application([
(r'/', IndexHandler),
], debug=True)
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()
``` | There is a library called [`toro`](http://toro.readthedocs.org/en/latest/index.html), which provides synchronization primitives for `tornado`. **[Update: As of tornado 4.2, `toro` has been merged into `tornado`.]**
Sounds like you could just use a `toro.Queue` (or `tornado.queues.Queue` in `tornado` 4.2+) to handle this:
```
import time
import toro
import tornado.ioloop
import tornado.gen
import tornado.web
class Client():
def __init__(self):
self.queued_items = toro.Queue()
@tornado.gen.coroutine
def watch_queue(self):
while True:
items = yield self.queued_items.get()
# go_do_something_with_items(items)
class IndexHandler(tornado.web.RequestHandler):
@tornado.gen.coroutine
def get(self):
yield client.queued_items.put("%f" % time.time())
self.write("Queued a new item")
if __name__ == "__main__":
client = Client()
# Watch the queue for when new items show up
tornado.ioloop.IOLoop.instance().add_callback(client.watch_queue)
# Create the web server
application = tornado.web.Application([
(r'/', IndexHandler),
], debug=True)
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()
```
There are a few tweaks required, aside from switching the data structure from a list to a `toro.Queue`:
1. We need to schedule `watch_queue` to run inside the IOLoop using `add_callback`, rather than trying to call it directly outside of an IOLoop context.
2. `IndexHandler.get` needs to be converted to a coroutine, because `toro.Queue.put` is a coroutine.
I also added a `while True` loop to `watch_queue`, so that it will run forever, rather than just processing one item and then exiting. |
Pandas MultiIndex: Divide all columns by one column | 29,354,553 | 7 | 2015-03-30T19:25:42Z | 29,354,704 | 8 | 2015-03-30T19:34:03Z | [
"python",
"pandas"
] | I have a data frame `results` of the form
```
TOTEXPPQ TOTEXPCQ FINLWT21
year quarter
13 1 9.183392e+09 5.459961e+09 1271559.398
2 2.907887e+09 1.834126e+09 481169.672
```
and I was trying to divide all (the first two) columns by the last one. My attempt was
```
weights = results.pop('FINLWT21')
results/weights
```
But I get
```
ValueError: cannot join with no level specified and no overlapping names
```
Which I don't get: There are overlapping names in the index:
```
weights.head()
year quarter
13 1 1271559.398
2 481169.672
```
Is there perhaps a better way to do this division? Do I *need* to reset the index? | You have to specify the axis for the divide (with the [`div`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.div.html) method):
```
In [11]: results.div(weights, axis=0)
Out[11]:
TOTEXPPQ TOTEXPCQ
year quarter
13 1 7222.149445 4293.909517
2 6043.371329 3811.807158
```
*The default is axis=1 and the result **columns** and weights' index names do not overlap, hence the error message.* |
How to connect MySQL database using Python+SQLAlchemy remotely? | 29,355,674 | 6 | 2015-03-30T20:28:35Z | 29,429,717 | 7 | 2015-04-03T09:39:52Z | [
"python",
"mysql",
"tcp",
"sqlalchemy",
"ssh-tunnel"
] | I am having difficulty accessing MySQL remotely. I use an SSH tunnel and want to connect to the MySQL database using Python+SQLAlchemy.
When I use the MySQL client in my console and specify `--protocol=TCP`, everything is fine!
I use the command:
```
mysql -h localhost --protocol=TCP -u USER -p
```
I get access to the remote database through the SSH tunnel.
However, when I want to connect to the database using Python+SQLAlchemy, I can't find an option like `--protocol=TCP`.
Otherwise, I can only connect to local MySQL databases.
Is there a way to do this using SQLAlchemy? | The classic answer to this issue is to use `127.0.0.1` or the *IP of the host* or the *host name* instead of the "special name" `localhost`. From the [documentation](https://dev.mysql.com/doc/refman/5.0/en/connecting.html#idm140235558252992):
> [...] connections on Unix to *localhost* are made using a Unix socket file by default
And later:
> **On Unix, MySQL programs treat the host name *localhost* specially**, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file. This occurs even if a --port or -P option is given to specify a port number. To ensure that the client makes a TCP/IP connection to the local server, use --host or -h to specify a host name value of 127.0.0.1, or the IP address or name of the local server.
---
However, this simple trick doesn't appear to work in your case, so you have to somehow *force* the use of a TCP socket. As you explained it yourself, when invoking `mysql` on the command line, you use the `--protocol tcp` option.
As explained [here](http://docs.sqlalchemy.org/en/latest/core/engines.html#custom-dbapi-args), from SQLAlchemy, you can pass the relevant options (if any) to your driver either as URL options *or* using the `connect_args` keyword argument.
For example using *PyMySQL*, on a test system I've setup for that purpose (MariaDB 10.0.12, SQLAlchemy 0.9.8 and PyMySQL 0.6.2) I got the following results:
```
>>> engine = create_engine(
"mysql+pymysql://sylvain:passwd@localhost/db?host=localhost?port=3306")
# ^^^^^^^^^^^^^^^^^^^^^^^^^^
# Force TCP socket. Notice the two uses of `?`
# Normally URL options should use `?` and `&`
# after that. But that doesn't work here (bug?)
>>> conn = engine.connect()
>>> conn.execute("SELECT host FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID = CONNECTION_ID()").fetchall()
[('localhost:54164',)]
# Same result by using 127.0.0.1 instead of localhost:
>>> engine = create_engine(
"mysql+pymysql://sylvain:[email protected]/db?host=localhost?port=3306")
>>> conn = engine.connect()
>>> conn.execute("SELECT host FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID = CONNECTION_ID()").fetchall()
[('localhost:54164',)]
# Alternatively, using connect_args:
>>> engine = create_engine("mysql+pymysql://sylvain:passwd@localhost/db",
connect_args= dict(host='localhost', port=3306))
>>> conn = engine.connect()
>>> conn.execute("SELECT host FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID = CONNECTION_ID()").fetchall()
[('localhost:54353',)]
```
As you noticed, both will use a TCP connection (I know that because of the port number after the hostname). On the other hand:
```
>>> engine = create_engine(
"mysql+pymysql://sylvain:passwd@localhost/db?unix_socket=/path/to/mysql.sock")
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Specify the path to mysql.sock in
# the `unix_socket` option will force
# usage of a UNIX socket
>>> conn = engine.connect()
>>> conn.execute("SELECT host FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID = CONNECTION_ID()").fetchall()
[('localhost',)]
# Same result by using 127.0.0.1 instead of localhost:
>>> engine = create_engine(
"mysql+pymysql://sylvain:[email protected]/db?unix_socket=/path/to/mysql.sock")
>>> conn = engine.connect()
>>> conn.execute("SELECT host FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID = CONNECTION_ID()").fetchall()
[('localhost',)]
# Alternatively, using connect_args:
>>> engine = create_engine("mysql+pymysql://sylvain:passwd@localhost/db",
connect_args= dict(unix_socket="/path/to/mysql.sock"))
>>> conn = engine.connect()
>>> conn.execute("SELECT host FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID = CONNECTION_ID()").fetchall()
[('localhost',)]
```
No port after the *hostname*: this is an UNIX socket. |
Plot inline or a separate window using Matplotlib in Spyder IDE | 29,356,269 | 9 | 2015-03-30T21:05:38Z | 36,683,220 | 7 | 2016-04-17T23:10:32Z | [
"python",
"matplotlib",
"spyder"
] | When I use Matplotlib to plot some graphs, it is usually fine for the default inline drawing. However, when I draw some 3D graphs, I'd like to have them in a separate window so that interactions like rotation can be enabled. Can I configure in Python code which figure to display inline and which one to display in a new window?
I know that in Spyder you can click Tools, Preferences, IPython Console, Graphics, and under Graphics Backend select "automatic" instead of "inline". However, this makes all the figures open in new windows. It can be messy when I have a lot of plots. So I want only the 3D plots to be in new windows, and all the other 2D plots to remain inline. Is it possible at all?
Thanks! | type
```
%matplotlib qt
```
when you want graphs in a separate window and
```
%matplotlib inline
```
when you want an inline plot |
Python list() vs list comprehension building speed | 29,356,846 | 12 | 2015-03-30T21:41:41Z | 29,356,931 | 11 | 2015-03-30T21:46:45Z | [
"python",
"performance",
"list",
"list-comprehension",
"python-2.x"
] | This is interesting; `list()` to force an iterator to get the actual list is so much faster than `[x for x in someList]` (comprehension).
Is this for real or is my test just too simple?
Below is the code:
```
import time
timer = time.clock()
for i in xrange(90):
#localList = [x for x in xrange(1000000)] #Very slow, took me 6.8s
localList = list(xrange(1000000)) #Very fast, took me 0.9s
print localList[999999] #make sure list is really evaluated.
print "Total time: ", time.clock() - timer
``` | The list comprehension executes the loop in Python bytecode, just like a regular `for` loop.
The `list()` call iterates entirely in C code, which is far faster.
The bytecode for the list comprehension looks like this:
```
>>> import dis
>>> dis.dis(compile("[x for x in xrange(1000000)]", '<stdin>', 'exec'))
1 0 BUILD_LIST 0
3 LOAD_NAME 0 (xrange)
6 LOAD_CONST 0 (1000000)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 12 (to 28)
16 STORE_NAME 1 (x)
19 LOAD_NAME 1 (x)
22 LIST_APPEND 2
25 JUMP_ABSOLUTE 13
>> 28 POP_TOP
29 LOAD_CONST 1 (None)
32 RETURN_VALUE
```
The `>>` pointers roughly give you the boundaries of the loop being executed, so you have 1 million `STORE_NAME`, `LOAD_NAME` and `LIST_APPEND` steps to execute in the Python bytecode evaluation loop.
`list()` on the other hand just grabs the values from the `xrange()` iterable directly by using the C API for object iteration, and it can use the length of the `xrange()` object to pre-allocate the list object rather than grow it dynamically. |
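A rough way to see the gap yourself (shown here on Python 3, where `range()` plays the role of `xrange()`; absolute numbers vary by machine, so treat the printed timings as illustrative only):

```python
import timeit

setup = "n = 100000"
t_comp = timeit.timeit("[x for x in range(n)]", setup=setup, number=20)
t_list = timeit.timeit("list(range(n))", setup=setup, number=20)

# Both build the exact same list; only the construction speed differs.
assert list(range(100)) == [x for x in range(100)]
print("comprehension: %.4fs  list(): %.4fs" % (t_comp, t_list))
```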
no module named urllib.parse (How should I install it?) | 29,358,403 | 5 | 2015-03-31T00:03:15Z | 29,358,613 | 8 | 2015-03-31T00:28:55Z | [
"python",
"django",
"urllib"
] | I'm trying to run a REST API on CentOS 7. I read that urllib.parse is in Python 3, but I'm using Python 2.7.5, so I don't know how to install this module.
I installed all the requirements but still can't run the project.
When I'm looking for a URL I get this (I'm using the browsable interface):
Output:
```
ImportError at /stamp/
No module named urllib.parse
``` | With the information you have provided, your best bet will be to use Python 3.x.
Your error suggests that the code may have been written for Python 3 given that it is trying to import `urllib.parse`. If you've written the software and have control over its source code, you **should** change the import to:
```
from urlparse import urlparse
```
`urllib` was split into `urllib.parse`, `urllib.request`, and `urllib.error` in Python 3.
I suggest that you take a quick look at software collections in CentOS **if you are not able to change** the imports for some reason.
You can bring in Python 3.3 like this:
1. `yum install centos-release-SCL`
2. `yum install python33`
3. `scl enable python33`
Check [this](http://wiki.centos.org/AdditionalResources/Repositories/SCL) page out for more info on SCLs |
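If the code must run under both Python 2 and 3, a common alternative to switching interpreters is a try/except import shim:

```python
try:
    # Python 3
    from urllib.parse import urlparse, urlencode
except ImportError:
    # Python 2
    from urlparse import urlparse
    from urllib import urlencode

print(urlparse("http://example.com/stamp/").netloc)  # example.com
print(urlencode({"q": "test"}))                      # q=test
```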
Can the name and the reference of a named tuple be different? | 29,358,695 | 3 | 2015-03-31T00:39:28Z | 29,358,763 | 8 | 2015-03-31T00:47:22Z | [
"python",
"tuples",
"namedtuple"
] | While reading `fmark`'s answer to the question [What are "named tuples" in Python?](http://stackoverflow.com/questions/2970608/what-are-named-tuples-in-python) I saw that the example given there had the same name and reference, i.e. the word `Point` appears twice in the following statement:
`Point = namedtuple('Point', 'x y')`
So I went to the original reference:
<https://docs.python.org/2/library/collections.html#collections.namedtuple>
And here too I found two more examples:
```
EmployeeRecord = namedtuple('EmployeeRecord', 'name, age, title, department, paygrade')
Color = namedtuple('Color', 'red green blue')
```
Words are not idly repeated in Python; at least that has been my experience so far. For instance, the whole line (for the `Point` example) could be replaced by the following:
```
namedtuple('Point', 'x y')
```
**OR**
```
Point = namedtuple('x y')
```
Of course, that's assuming that the named tuple has to have the same name and reference. So my question is: when is it advisable (if at all it is permitted) that a named tuple should have a different name and reference? I am yet to come across an example. | You can do it, it will just annoy you.
```
In [1]: import collections
In [2]: Point = collections.namedtuple('Rectangle', 'x y')
In [3]: Point(1, 2)
Out[3]: Rectangle(x=1, y=2)
```
This is confusing, don't do it unless you have a very good reason.
The reason why this happens is because `namedtuple()` is just a function, it has no special knowledge about how it is being used as a declaration. In languages with macros, `namedtuple()` would be a macro which expands to a declaration instead. So, rather than tack on a macro system or walk the call stack for the name, you have to specify name twice.
So it is one of Python's "warts", or a design compromise, depending on how you feel about it. |
Inefficient multiprocessing of numpy-based calculations | 29,358,872 | 4 | 2015-03-31T01:02:41Z | 29,361,699 | 7 | 2015-03-31T06:06:43Z | [
"python",
"numpy",
"multiprocessing"
] | I'm trying to parallelize some calculations that use `numpy` with the help of Python's `multiprocessing` module. Consider this simplified example:
```
import time
import numpy
from multiprocessing import Pool
def test_func(i):
a = numpy.random.normal(size=1000000)
b = numpy.random.normal(size=1000000)
for i in range(2000):
a = a + b
b = a - b
a = a - b
return 1
t1 = time.time()
test_func(0)
single_time = time.time() - t1
print("Single time:", single_time)
n_par = 4
pool = Pool()
t1 = time.time()
results_async = [
pool.apply_async(test_func, [i])
for i in range(n_par)]
results = [r.get() for r in results_async]
multicore_time = time.time() - t1
print("Multicore time:", multicore_time)
print("Efficiency:", single_time / multicore_time)
```
When I execute it, the `multicore_time` is roughly equal to `single_time * n_par`, while I would expect it to be close to `single_time`. Indeed, if I replace `numpy` calculations with just `time.sleep(10)`, this is what I get â perfect efficiency. But for some reason it does not work with `numpy`. Can this be solved, or is it some internal limitation of `numpy`?
Some additional info which may be useful:
* I'm using OSX 10.9.5, Python 3.4.2 and the CPU is Core i7 with (as reported by the system info) 4 cores (although the above program only takes 50% of CPU time in total, so the system info may not be taking into account hyperthreading).
* when I run this I see `n_par` processes in `top` working at 100% CPU
* if I replace `numpy` array operations with a loop and per-index operations, the efficiency rises significantly (to about 75% for `n_par = 4`). | It looks like the test function you're using is memory bound. That means that the run time you're seeing is limited by how fast the computer can pull the arrays from memory into cache. For example, the line `a = a + b` is actually using 3 arrays, `a`, `b` and a new array that will replace `a`. These three arrays are about 8MB each (1e6 floats \* 8 bytes per floats). I believe the different i7s have something like 3MB - 8MB of shared L3 cache so you cannot fit all 3 arrays in cache at once. Your cpu adds the floats faster than the array can be loaded into cache so most of the time is spent waiting on the array to be read from memory. Because the cache is shared between the cores, you don't see any speedup by spreading the work onto multiple cores.
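A quick back-of-the-envelope check of the working set makes the point concrete (assuming 8-byte double-precision floats, which is numpy's default):

```python
n_elements = 10 ** 6   # array size from the question
bytes_per_float = 8    # numpy float64

array_mb = n_elements * bytes_per_float / 1e6
print(array_mb)        # 8.0 -> one array is 8 MB
print(3 * array_mb)    # 24.0 -> 'a = a + b' touches ~24 MB per pass
```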
Memory bound operations are an issue for numpy in general and the only way I know to deal with them is to use something like cython or numba. |
How does this function to remove duplicate characters from a string in python work? | 29,360,607 | 13 | 2015-03-31T04:26:11Z | 29,360,674 | 11 | 2015-03-31T04:33:35Z | [
"python"
] | I was looking up how to create a function that removes duplicate characters from a string in python and found this on stack overflow:
```
from collections import OrderedDict
def remove_duplicates (foo) :
print " ".join(OrderedDict.fromkeys(foo))
```
It works, but how? I've searched what OrderedDict and fromkeys mean but I can't find anything that explains how it works in this context. | I will give it a shot:
[OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict) are dictionaries that store keys in order they are added. Normal dictionaries don't. If you look at **doc** of `fromkeys`, you find:
> OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
So the `fromkeys` class method, creates an `OrderedDict` using items in the input iterable S (in my example characters from a string) as keys. In a dictionary, keys are unique, so duplicate items in `S` are ignored.
For example:
```
s = "abbcdece" # example string with duplicate characters
print(OrderedDict.fromkeys(s))
```
This results in an OrderedDict:
```
OrderedDict([('a', None), ('b', None), ('c', None), ('d', None), ('e', None)])
```
Then `" ".join(some_iterable)` takes an iterable and joins its elements using a space in this case. It uses only keys, as iterating through a dictionary is done by its keys. For example:
```
for k in OrderedDict.fromkeys(s): # k is a key of the OrderedDict
print(k)
```
Results in:
```
a
b
c
d
e
```
Subsequently, call to join:
```
print(" ".join(OrderedDict.fromkeys(s)))
```
will print out:
```
a b c d e
```
**Using set**
Sometimes, people use a set for this:
```
print( " ".join(set(s)))
# c a b d e
```
But unlike sets in C++, sets in python do not guarantee order. So using a set will give you unique values easily, but they might be in a different order then they are in the original list or string (as in the above example).
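As a side note, on Python 3.7+ a plain `dict` also preserves insertion order as a language guarantee, so the same de-duplication works without the import:

```python
s = "abbcdece"
print("".join(dict.fromkeys(s)))  # abcde
```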
Hope this helps a bit. |
Modify Python/PIP to automatically install modules when failed to import | 29,364,473 | 6 | 2015-03-31T08:53:51Z | 29,466,052 | 10 | 2015-04-06T05:53:16Z | [
"python",
"automation",
"pip"
] | Is there a way to modify python/pip to, whenever an import fails **at runtime**, it would try to install the module (by the same name) from pip and then import the module?
I'd say it would be a nicer default than to just throw an error. If after loading the module from pip any problems happen, then it would also throw an error, similar to when I notice I cannot import something, try `pip install` and then come to the *same exact error message*.
I know we can use `requirements.txt` for bundling a package, but I'm talking from a "client" (person running the script) rather than "provider" (person providing the script) perspective; that is, I, as a client, would like to be able to import any script and have dependencies be solved automatically.
I understand that this could potentially cause trouble, but whenever I see an ImportError I'd just try to `pip install` the module anyway. Only if the module wouldn't work after pip installation "would I ask further questions".
I thought of something like this snippet that would be "built in" to the python process:
```
def getDirectoryOfInterpreter(p):
return "someway to return the directory"
try:
import somemodule
except ImportError:
os.system(getDirectoryOfInterpreter('THIS_INTERPRETER_BINARY') + ' pip install ' + "MODULE_NAME")
import somemodule
``` | You can do this with [pipimport](https://pypi.python.org/pypi/pipimport/0.2.5), when using `virtualenv`. It probably works with the system Python if you have appropriate privileges to write the necessary directories (at least `site-packages`, but your import might have some command that `pip` will try to put somewhere in the PATH). Since it is good practice to always use virtualenvs for your own development anyway, I never tried using `pipimport` with the system Python.
You need to install pipimport into your virtualenv by hand:
```
virtualenv venv
source venv/bin/activate
pip install pipimport
```
Then create a file `autopipimport.py` that you import first in any module:
```
# import and install pipimport
import pipimport
pipimport.install()
```
Now in any other `.py` file you can do:
```
import autopipimport
import some.package.installable.by.pip
```
I once tried (out of curiosity) to add the two pipimport-related lines to `venv/lib/python2.7/site.py` to be automatically "loaded", but that was too early in the chain and did not work.
`pipimport` will only try to use `pip` to install a particular module once, storing information about what has been tried in a `.pipimport-ignore` file (should be under `venv` unless that is not writeable). |
How to code adagrad in python theano | 29,365,370 | 2 | 2015-03-31T09:37:56Z | 29,643,793 | 7 | 2015-04-15T07:12:23Z | [
"python",
"gradient",
"theano"
] | To simplify the problem, say when a dimension (or a feature) is already updated n times, the next time I see the feature, I want to set the learning rate to be 1/n.
I came up with this code:
```
def test_adagrad():
embedding = theano.shared(value=np.random.randn(20,10), borrow=True)
times = theano.shared(value=np.ones((20,1)))
lr = T.dscalar()
index_a = T.lvector()
hist = times[index_a]
cost = T.sum(theano.sparse_grad(embedding[index_a]))
gradients = T.grad(cost, embedding)
updates = [(embedding, embedding+lr*(1.0/hist)*gradients)]
    ### Code that also updates `times` should go here (omitted) ###
train = theano.function(inputs=[index_a, lr],outputs=cost,updates=updates)
for i in range(10):
print train([1,2,3],0.05)
```
Theano does not give any error, but the training result sometimes gives NaN. Does anybody know how to correct this, please?
Thank you for your help
PS: I doubt it is the operations in sparse space that create problems. So I tried to replace \* by theano.sparse.mul. This gave the same results as I mentioned before. | Perhaps you can utilize the following [example for implementation of **adadelta**](http://deeplearning.net/tutorial/code/lstm.py), and use it to derive your own. Please update if you succeeded :-) |
Select dataframe rows between two dates | 29,370,057 | 15 | 2015-03-31T13:38:06Z | 29,370,182 | 38 | 2015-03-31T13:49:49Z | [
"python",
"pandas"
] | I am creating a dataframe from a csv as follows:
```
stock = pd.read_csv('data_in/' + filename + '.csv', skipinitialspace=True)
```
The dataframe has a date column. Is there a way to create a new dataframe (or just overwrite the existing one) which only contains rows that fall between a specific date range? | There are two possible solutions:
* Use a boolean mask, then use `df.loc[mask]`
* Set the date column as a DatetimeIndex, then use `df[start_date : end_date]`
---
**Using a boolean mask**:
Ensure `df['date']` is a Series with dtype `datetime64[ns]`:
```
df['date'] = pd.to_datetime(df['date'])
```
Make a boolean mask. `start_date` and `end_date` can be `datetime.datetime`s,
`np.datetime64`s, `pd.Timestamp`s, or even datetime strings:
```
mask = (df['date'] > start_date) & (df['date'] <= end_date)
```
Select the sub-DataFrame:
```
df.loc[mask]
```
or re-assign to `df`
```
df = df.loc[mask]
```
---
For example,
```
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.random((200,3)))
df['date'] = pd.date_range('2000-1-1', periods=200, freq='D')
mask = (df['date'] > '2000-6-1') & (df['date'] <= '2000-6-10')
print(df.loc[mask])
```
yields
```
0 1 2 date
153 0.208875 0.727656 0.037787 2000-06-02
154 0.750800 0.776498 0.237716 2000-06-03
155 0.812008 0.127338 0.397240 2000-06-04
156 0.639937 0.207359 0.533527 2000-06-05
157 0.416998 0.845658 0.872826 2000-06-06
158 0.440069 0.338690 0.847545 2000-06-07
159 0.202354 0.624833 0.740254 2000-06-08
160 0.465746 0.080888 0.155452 2000-06-09
161 0.858232 0.190321 0.432574 2000-06-10
```
---
**Using a DatetimeIndex**:
If you are going to do a lot of selections by date, it may be quicker to set the `date` column as the index first. Then you can select rows by date using `df[start_date : end_date]`.
```
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.random((200,3)))
df['date'] = pd.date_range('2000-1-1', periods=200, freq='D')
df = df.set_index(['date'])
print(df.loc['2000-6-1':'2000-6-10'])
```
yields
```
0 1 2
date
2000-06-01 0.040457 0.326594 0.492136 # <- includes start_date
2000-06-02 0.279323 0.877446 0.464523
2000-06-03 0.328068 0.837669 0.608559
2000-06-04 0.107959 0.678297 0.517435
2000-06-05 0.131555 0.418380 0.025725
2000-06-06 0.999961 0.619517 0.206108
2000-06-07 0.129270 0.024533 0.154769
2000-06-08 0.441010 0.741781 0.470402
2000-06-09 0.682101 0.375660 0.009916
2000-06-10 0.754488 0.352293 0.339337
```
Some caveats:
* When using `df[start_date : end_date]` both end-points are included in result.
* Unlike the boolean mask solution, the `start_date` and `end_date` must be dates in the DatetimeIndex.
---
Also note that [`pd.read_csv` has a `parse_dates` parameter](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html) which you could use to parse the `date` column as `datetime64`s. Thus, if you use `parse_dates`, you would not need to use `df['date'] = pd.to_datetime(df['date'])`. |
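For instance, a minimal sketch of `parse_dates` (the CSV content here is made up for illustration):

```python
import io
import pandas as pd

# A stand-in for a CSV file on disk
csv_data = io.StringIO("date,value\n2000-06-01,1\n2000-06-05,2\n2000-06-20,3\n")

# parse_dates makes read_csv return the 'date' column as datetime64[ns] directly
df = pd.read_csv(csv_data, parse_dates=['date'])

# No pd.to_datetime needed before building the mask
mask = (df['date'] > '2000-06-01') & (df['date'] <= '2000-06-10')
result = df.loc[mask]
```

Only the row for `2000-06-05` falls in the half-open range here.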
pandas split string into columns | 29,370,211 | 4 | 2015-03-31T13:51:07Z | 29,370,709 | 13 | 2015-03-31T14:13:34Z | [
"python",
"pandas",
"split"
] | I have the following `DataFrame`, where `Track ID` is the row index. How can I split the string in the `stats` column into 5 columns of numbers?
```
Track ID stats
14.0 (-0.00924175824176, 0.41, -0.742016492568, 0.0036830094242, 0.00251748449963)
28.0 (0.0411538461538, 0.318230769231, 0.758717081514, 0.00264000622468, 0.0106535783677)
42.0 (-0.0144351648352, 0.168438461538, -0.80870348637, 0.000816872566404, 0.00316572586742)
56.0 (0.0343461538462, 0.288730769231, 0.950844962874, 6.1608706775e-07, 0.00337262030771)
70.0 (0.00905164835165, 0.151030769231, 0.670257006716, 0.0121790506745, 0.00302182567957)
84.0 (-0.0047967032967, 0.171615384615, -0.552879463981, 0.0500316517755, 0.00217970256969)
```
Thanks in advance,
T | And for the other case, assuming they are strings that look like tuples:
```
In [74]: df['stats'].str[1:-1].str.split(',', return_type='frame').astype(float)
Out[74]:
0 1 2 3 4
0 -0.009242 0.410000 -0.742016 0.003683 0.002517
1 0.041154 0.318231 0.758717 0.002640 0.010654
2 -0.014435 0.168438 -0.808703 0.000817 0.003166
3 0.034346 0.288731 0.950845 0.000001 0.003373
4 0.009052 0.151031 0.670257 0.012179 0.003022
5 -0.004797 0.171615 -0.552879 0.050032 0.002180
```
By the way, if they are tuples and not strings, you can simply do the following:
```
df['stats'].apply(pd.Series)
```
This last one can also be combined with the above (if the `return_type='frame'` is not yet available):
```
df['stats'].str[1:-1].str.split(',').apply(pd.Series).astype(float)
``` |
Compressing multiple conditions in Python | 29,370,317 | 2 | 2015-03-31T13:57:11Z | 29,370,354 | 7 | 2015-03-31T13:58:47Z | [
"python",
"if-statement",
"python-3.x"
] | Suppose I have a list of numbers `mylist` and that I would like execute some code if all the elements of `mylist` are greater than 10. I might try
```
if mylist[0] > 10 and mylist[1] > 10 and ... :
do something
```
but this is obviously very cumbersome. I was wondering if Python has a way of compressing multiple conditions in an if statement. I tried
```
if mylist[i] > 10 for i in range(len(mylist)):
do something
```
but this returned an error.
I am using Python 3.4. | Your attempt is pretty close. You just needed the `all` function to examine the results of the expression.
```
if all(mylist[i] > 10 for i in range(len(mylist))):
do something
```
Incidentally, consider iterating over the items of the list directly, rather than its indices.
```
if all(item > 10 for item in mylist):
``` |
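Two behaviours of `all` worth knowing: it short-circuits at the first failing element, and it returns `True` for an empty iterable (there are no counterexamples). A quick illustration:

```python
mylist = [12, 15, 8, 20]

# 8 fails the test, so the generator stops being consumed there
result = all(item > 10 for item in mylist)

# Vacuously true: an empty list has no element that violates the condition
empty_result = all(item > 10 for item in [])
```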
What is the difference between numpy.linalg.lstsq and scipy.linalg.lstsq? | 29,372,559 | 11 | 2015-03-31T15:34:38Z | 29,390,702 | 9 | 2015-04-01T12:20:37Z | [
"python",
"numpy",
"scipy",
"least-squares"
] | `lstsq` tries to solve `Ax=b` minimizing `|b - Ax|`. Both scipy and numpy provide a `linalg.lstsq` function with a very similar interface. The documentation does not mention which kind of algorithm is used, neither for [scipy.linalg.lstsq](http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq) nor for [numpy.linalg.lstsq](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html#numpy.linalg.lstsq), but it seems to do pretty much the same.
The implementation seems to be different for [scipy.linalg.lstsq](https://github.com/scipy/scipy/blob/2d1bd60e0330bb07880f8eab717e21ee17422245/scipy/linalg/basic.py#L537) and [numpy.linalg.lstsq](https://github.com/numpy/numpy/blob/master/numpy/linalg/linalg.py#L1756). Both seem to use LAPACK, both algorithms seem to use a SVD.
Where is the difference? Which one should I use?
**Note:** do not confuse `linalg.lstsq` with `scipy.optimize.leastsq`, which can also solve non-linear optimization problems. | If I read the source code right (Numpy 1.8.2, Scipy 0.14.1), `numpy.linalg.lstsq()` uses the LAPACK routine `xGELSD` and `scipy.linalg.lstsq()` uses `xGELSS`.
The [LAPACK Manual Sec. 2.4](http://www.netlib.org/lapack/lug/node27.html#tabdrivellsq) states
> The subroutine xGELSD is significantly faster than its older counterpart xGELSS, especially for large problems, but may require somewhat more workspace depending on the matrix dimensions.
That means that Numpy is faster but uses more memory. |
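If you want to check the speed/accuracy trade-off on your own installation, a small benchmark sketch like the following works (the matrix sizes are arbitrary, and timings will vary by machine and LAPACK build; `rcond=None` just silences a warning on newer NumPy versions):

```python
import time
import numpy as np
import scipy.linalg

# A random overdetermined system A x = b
rng = np.random.RandomState(0)
A = rng.randn(500, 50)
b = rng.randn(500)

t0 = time.time()
x_np = np.linalg.lstsq(A, b, rcond=None)[0]   # xGELSD under the hood
t_np = time.time() - t0

t0 = time.time()
x_sp = scipy.linalg.lstsq(A, b)[0]            # xGELSS under the hood
t_sp = time.time() - t0

print("numpy: %.6fs, scipy: %.6fs" % (t_np, t_sp))
```

Both calls should return (numerically) the same minimum-norm solution, so the comparison is purely about speed and memory.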
How to group by multiple keys in spark? | 29,372,792 | 5 | 2015-03-31T15:44:40Z | 29,515,600 | 9 | 2015-04-08T13:10:25Z | [
"python",
"apache-spark",
"pyspark"
] | I have a bunch of tuples which are in form of composite keys and values. For example,
```
tfile.collect() = [(('id1','pd1','t1'),5.0),
(('id2','pd2','t2'),6.0),
(('id1','pd1','t2'),7.5),
(('id1','pd1','t3'),8.1) ]
```
I want to perform sql like operations on this collection, where I can aggregate the information based on id[1..n] or pd[1..n] . I want to implement using the vanilla pyspark apis and not using SQLContext.
In my current implementation I am reading from a bunch of files and merging the RDD.
```
def readfile():
fr = range(6,23)
tfile = sc.union([sc.textFile(basepath+str(f)+".txt")
.map(lambda view: set_feature(view,f))
.reduceByKey(lambda a, b: a+b)
for f in fr])
return tfile
```
I intend to create an aggregated array as a value. For example,
```
agg_tfile = [((id1,pd1),[5.0,7.5,8.1])]
```
where 5.0,7.5,8.1 represent [t1,t2,t3] . I am currently, achieving the same by vanilla python code using dictionaries. It works fine for smaller data sets. But I worry as this may not scale for larger data sets. Is there an efficient way achieving the same using pyspark apis ? | My guess is that you want to transpose the data according to multiple fields.
A simple way is to concatenate the target fields that you will group by, and make it a key in a paired RDD. For example:
```
lines = sc.parallelize(['id1,pd1,t1,5.0', 'id2,pd2,t2,6.0', 'id1,pd1,t2,7.5', 'id1,pd1,t3,8.1'])
rdd = lines.map(lambda x: x.split(',')).map(lambda x: (x[0] + ', ' + x[1], x[3])).reduceByKey(lambda a, b: a + ', ' + b)
print rdd.collect()
```
Then you will get the transposed result.
```
[('id1, pd1', '5.0, 7.5, 8.1'), ('id2, pd2', '6.0')]
``` |
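If you want the values gathered into a real list (as in the `agg_tfile` example in the question) rather than a concatenated string, the PySpark pipeline would be a `map` to re-key on `(id, pd)` followed by `groupByKey().mapValues(list)`. Since that needs a running Spark context, here is the same transformation sketched in plain Python to show the shape of the result:

```python
from collections import defaultdict

# Same tuples as in the question
data = [(('id1', 'pd1', 't1'), 5.0),
        (('id2', 'pd2', 't2'), 6.0),
        (('id1', 'pd1', 't2'), 7.5),
        (('id1', 'pd1', 't3'), 8.1)]

# Re-key on (id, pd) and collect values into lists --
# the plain-Python analogue of map(...).groupByKey().mapValues(list)
grouped = defaultdict(list)
for (id_, pd_, t), value in data:
    grouped[(id_, pd_)].append(value)

agg = dict(grouped)
# {('id1', 'pd1'): [5.0, 7.5, 8.1], ('id2', 'pd2'): [6.0]}
```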
First month of quarter given month in Python | 29,375,785 | 12 | 2015-03-31T18:25:50Z | 29,375,792 | 10 | 2015-03-31T18:26:16Z | [
"python",
"date"
] | Given a month in numeric form (e.g., 2 for February), how do you find the first month of its respective quarter (e.g., 1 for January)?
I read through the `datetime` module documentation and the Pandas documentation of their datetime functions, which ought to be relevant, but I could not find a function that solves this problem.
Essentially, what I am trying to understand is how I could produce a function like the one below that, given month *x*, outputs the number corresponding to the first month of *x*'s quarter.
```
>> first_month_quarter(5)
4
``` | Here is an answer suggested by [TigerhawkT3](http://stackoverflow.com/users/2617068/tigerhawkt3). Perhaps the leanest suggestion so far and, apparently, also the fastest.
```
import math
def first_month_quarter(month):
return int(math.ceil(month / 3.)) * 3 - 2
```
For example:
```
>> first_month_quarter(5)
4
``` |
First month of quarter given month in Python | 29,375,785 | 12 | 2015-03-31T18:25:50Z | 29,376,814 | 13 | 2015-03-31T19:23:22Z | [
"python",
"date"
] | Given a month in numeric form (e.g., 2 for February), how do you find the first month of its respective quarter (e.g., 1 for January)?
I read through the `datetime` module documentation and the Pandas documentation of their datetime functions, which ought to be relevant, but I could not find a function that solves this problem.
Essentially, what I am trying to understand is how I could produce a function like the one below that, given month *x*, outputs the number corresponding to the first month of *x*'s quarter.
```
>> first_month_quarter(5)
4
``` | ```
def first_month(month):
return (month-1)//3*3+1
for i in range(1,13):
print i, first_month(i)
``` |
First month of quarter given month in Python | 29,375,785 | 12 | 2015-03-31T18:25:50Z | 29,377,400 | 19 | 2015-03-31T19:55:10Z | [
"python",
"date"
] | Given a month in numeric form (e.g., 2 for February), how do you find the first month of its respective quarter (e.g., 1 for January)?
I read through the `datetime` module documentation and the Pandas documentation of their datetime functions, which ought to be relevant, but I could not find a function that solves this problem.
Essentially, what I am trying to understand is how I could produce a function like the one below that, given month *x*, outputs the number corresponding to the first month of *x*'s quarter.
```
>> first_month_quarter(5)
4
``` | It's not so pretty, but if speed is important a simple list lookup slaughters `math`:
```
def quarter(month, quarters=[None, 1, 1, 1, 4, 4, 4,
7, 7, 7, 10, 10, 10]):
"""Return the first month of the quarter for a given month."""
return quarters[month]
```
A [`timeit`](https://docs.python.org/2/library/timeit.html) comparison suggests this is about twice as fast as TigerhawkT3's mathematical approach.
---
Test script:
```
import math
def quarter(month, quarters=[None, 1, 1, 1, 4, 4, 4,
7, 7, 7, 10, 10, 10]):
"""Return the first month of the quarter for a given month."""
return quarters[month]
def firstMonthInQuarter1(month):
return (month - 1) // 3 * 3 + 1
def firstMonthInQuarter2(month):
return month - (month - 1) % 3
def first_month_quarter(month):
return int(math.ceil(month / 3.)) * 3 - 2
if __name__ == '__main__':
from timeit import timeit
methods = ['quarter', 'firstMonthInQuarter1', 'firstMonthInQuarter2',
'first_month_quarter']
setup = 'from __main__ import {}'.format(','.join(methods))
results = {method: timeit('[{}(x) for x in range(1, 13)]'.format(method),
setup=setup)
for method in methods}
for method in methods:
print '{}:\t{}'.format(method, results[method])
```
Results:
```
quarter: 3.01457574242
firstMonthInQuarter1: 4.51578357209
firstMonthInQuarter2: 4.01768559763
first_month_quarter: 8.08281871176
``` |
First month of quarter given month in Python | 29,375,785 | 12 | 2015-03-31T18:25:50Z | 29,381,448 | 19 | 2015-04-01T01:37:24Z | [
"python",
"date"
] | Given a month in numeric form (e.g., 2 for February), how do you find the first month of its respective quarter (e.g., 1 for January)?
I read through the `datetime` module documentation and the Pandas documentation of their datetime functions, which ought to be relevant, but I could not find a function that solves this problem.
Essentially, what I am trying to understand is how I could produce a function like the one below that, given month *x*, outputs the number corresponding to the first month of *x*'s quarter.
```
>> first_month_quarter(5)
4
``` | It's a simple mapping function that needs to convert:
```
1 2 3 4 5 6 7 8 9 10 11 12
|
V
1 1 1 4 4 4 7 7 7 10 10 10
```
This can be done in a number of ways with integer arithmetic, two of which are:
```
def firstMonthInQuarter(month):
return (month - 1) // 3 * 3 + 1
```
and:
```
def firstMonthInQuarter(month):
return month - (month - 1) % 3
```
The first involves integer division of the month converted to a zero-based month to get the zero-based quarter, multiplication to turn that back into a zero-based month (but the month at the *start* of the quarter), then adding one again to make the range `1..12`.
```
month -1 //3 *3 +1
----- -- --- -- --
1 0 0 0 1
2 1 0 0 1
3 2 0 0 1
4 3 1 3 4
5 4 1 3 4
6 5 1 3 4
7 6 2 6 7
8 7 2 6 7
9 8 2 6 7
10 9 3 9 10
11 10 3 9 10
12 11 3 9 10
```
The second just subtracts the position within a quarter (0, 1, 2) from the month itself to get the starting month.
```
month(a) -1 %3(b) a-b
-------- -- ----- ---
1 0 0 1
2 1 1 1
3 2 2 1
4 3 0 4
5 4 1 4
6 5 2 4
7 6 0 7
8 7 1 7
9 8 2 7
10 9 0 10
11 10 1 10
12 11 2 10
``` |
What's a good strategy to find mixed types in Pandas columns? | 29,376,026 | 3 | 2015-03-31T18:39:34Z | 29,376,221 | 8 | 2015-03-31T18:49:41Z | [
"python",
"pandas"
] | Ever so often I get this warning when parsing data files:
```
WARNING:py.warnings:/usr/local/python3/miniconda/lib/python3.4/site-
packages/pandas-0.16.0_12_gdcc7431-py3.4-linux-x86_64.egg/pandas
/io/parsers.py:1164: DtypeWarning: Columns (0,2,14,20) have mixed types.
Specify dtype option on import or set low_memory=False.
data = self._reader.read(nrows)
```
But if the data is large (I have 50k rows), how can I find WHERE in the data the change of dtype occurs? | I'm not entirely sure what you're after, but it's easy enough to find the rows which contain elements which don't share the type of the first row. For example:
```
>>> df = pd.DataFrame({"A": np.arange(500), "B": np.arange(500.0)})
>>> df.loc[321, "A"] = "Fred"
>>> df.loc[325, "B"] = True
>>> weird = (df.applymap(type) != df.iloc[0].apply(type)).any(axis=1)
>>> df[weird]
A B
321 Fred 321
325 325 True
``` |
How to save S3 object to a file using boto3 | 29,378,763 | 36 | 2015-03-31T21:17:58Z | 29,636,604 | 69 | 2015-04-14T20:15:44Z | [
"python",
"amazon-web-services",
"boto",
"boto3"
] | I'm trying to do a "hello world" with new [boto3](https://github.com/boto/boto3) client for AWS.
The use-case I have is fairly simple: get object from S3 and save it to the file.
In boto 2.X I would do it like this:
```
import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')
```
In boto 3 . I can't find a clean way to do the same thing, so I'm manually iterating over the "Streaming" object:
```
import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
chunk = key['Body'].read(1024*8)
while chunk:
f.write(chunk)
chunk = key['Body'].read(1024*8)
```
or
```
import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
for chunk in iter(lambda: key['Body'].read(4096), b''):
f.write(chunk)
```
And it works fine. I was wondering, is there any "native" boto3 function that will do the same task? | There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:
```
s3_client = boto3.client('s3')
open('hello.txt', 'w').write('Hello, world!')
# Upload the file to S3
s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')
# Download the file from S3
s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')
print(open('hello2.txt').read())
```
These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files. |
How to save S3 object to a file using boto3 | 29,378,763 | 36 | 2015-03-31T21:17:58Z | 35,367,531 | 17 | 2016-02-12T16:27:28Z | [
"python",
"amazon-web-services",
"boto",
"boto3"
] | I'm trying to do a "hello world" with new [boto3](https://github.com/boto/boto3) client for AWS.
The use-case I have is fairly simple: get object from S3 and save it to the file.
In boto 2.X I would do it like this:
```
import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')
```
In boto 3 . I can't find a clean way to do the same thing, so I'm manually iterating over the "Streaming" object:
```
import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
chunk = key['Body'].read(1024*8)
while chunk:
f.write(chunk)
chunk = key['Body'].read(1024*8)
```
or
```
import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
for chunk in iter(lambda: key['Body'].read(4096), b''):
f.write(chunk)
```
And it works fine. I was wondering, is there any "native" boto3 function that will do the same task? | boto3 now has a nicer interface than the client:
```
resource = boto3.resource('s3')
my_bucket = resource.Bucket('MyBucket')
my_bucket.download_file(key, local_filename)
```
This by itself isn't tremendously better than the `client` in the accepted answer (although the docs say that it does a better job retrying uploads and downloads on failure) but considering that resources are generally more ergonomic (for example, the s3 [bucket](http://boto3.readthedocs.org/en/latest/reference/services/s3.html#bucket) and [object](http://boto3.readthedocs.org/en/latest/reference/services/s3.html#object) resources are nicer than the client methods) this does allow you to stay at the resource layer without having to drop down.
[`Resources`](http://boto3.readthedocs.org/en/latest/reference/core/resources.html) generally can be created in the same way as clients, and they take all or most of the same arguments and just forward them to their internal clients. |
aggregate a field in elasticsearch-dsl using python | 29,380,198 | 3 | 2015-03-31T23:19:22Z | 31,039,444 | 7 | 2015-06-25T00:45:21Z | [
"python",
"elasticsearch",
"elasticsearch-dsl"
] | Can someone tell me how to write Python statements that will aggregate (sum and count) stuff about my documents?
---
SCRIPT
```
from datetime import datetime
from elasticsearch_dsl import DocType, String, Date, Integer
from elasticsearch_dsl.connections import connections
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="attendance")
s = s.execute()
for tag in s.aggregations.per_tag.buckets:
print (tag.key)
```
---
OUTPUT
```
File "/Library/Python/2.7/site-packages/elasticsearch_dsl/utils.py", line 106, in __getattr__
'%r object has no attribute %r' % (self.__class__.__name__, attr_name))
AttributeError: 'Response' object has no attribute 'aggregations'
```
What is causing this? Is the "aggregations" keyword wrong? Is there some other package I need to import? If a document in the "attendance" index has a field called emailAddress, how would I count which documents have a value for that field? | First of all, I notice now that what I wrote here actually has no aggregations defined. The documentation on how to use this is not very readable for me. Using what I wrote above, I'll expand. I'm changing the index name to make for a nicer example.
```
from datetime import datetime
from elasticsearch_dsl import DocType, String, Date, Integer
from elasticsearch_dsl.connections import connections
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="airbnb", doc_type="sleep_overs")
s = s.execute()
# invalid! You haven't defined an aggregation.
#for tag in s.aggregations.per_tag.buckets:
# print (tag.key)
# Lets make an aggregation
# 'by_house' is a name you choose, 'terms' is a keyword for the type of aggregator
# 'field' is also a keyword, and 'house_number' is a field in our ES index
s.aggs.bucket('by_house', 'terms', field='house_number', size=0)
```
Above we're creating 1 bucket per house number. Therefore, the name of the bucket will be the house number. ElasticSearch (ES) will always give a document count of documents fitting into that bucket. Size=0 means to give us all results, since ES has a default setting to return only 10 results (or whatever your dev set it up to do).
```
# This runs the query.
s = s.execute()
# let's see what's in our results
print s.aggregations.by_house.doc_count
print s.hits.total
print s.aggregations.by_house.buckets
for item in s.aggregations.by_house.buckets:
print item.doc_count
```
My mistake before was thinking an Elasticsearch query had aggregations by default. You define them yourself, then execute them. Then your response can be split by the aggregators you mentioned.
The CURL for the above should look like:
NOTE: I use SENSE, an Elasticsearch plugin/extension/add-on for Google Chrome. In SENSE you can use // to comment things out.
```
POST /airbnb/sleep_overs/_search
{
// the size 0 here actually means to not return any hits, just the aggregation part of the result
"size": 0,
"aggs": {
"by_house": {
"terms": {
        // the size 0 here means to return all results, not just the default 10 results
"field": "house_number",
"size": 0
}
}
}
}
```
Work-around: someone on the GitHub repo of DSL told me to forget translating and just use this method. It's simpler, and you can just write the tough stuff in CURL. That's why I call it a work-around.
```
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="airbnb", doc_type="sleep_overs")
# note how simple: we just paste the CURL body here
body = {
"size": 0,
"aggs": {
"by_house": {
"terms": {
"field": "house_number",
"size": 0
}
}
}
}
s = Search.from_dict(body)
s = s.index("airbnb")
s = s.doc_type("sleep_overs")
body = s.to_dict()
t = s.execute()
for item in t.aggregations.by_house.buckets:
# item.key will the house number
print item.key, item.doc_count
```
Hope this helps. I now design everything in CURL, then use Python statement to peel away at the results to get what I want. This helps for aggregations with multiple levels (sub-aggregations). |
How to simulate HTML5 Drag and Drop in Selenium Webdriver? | 29,381,233 | 5 | 2015-04-01T01:11:29Z | 29,381,532 | 14 | 2015-04-01T01:48:11Z | [
"python",
"html5",
"selenium",
"selenium-webdriver",
"drag-and-drop"
] | I am using Python 2.7 and Selenium 2.44.
I want to *automate drag and drop* action in Selenium WD but according to other related posts **Actions in HTML5 are not supported by Selenium** yet. Is there any way to simulate drag and drop in Python?
Here is the code I tried:
```
driver = webdriver.Firefox()
driver.get("http://html5demos.com/drag")
target = driver.find_element_by_id("one")
source = driver.find_element_by_id("bin")
actionChains = ActionChains(driver)
actionChains.drag_and_drop(target, source).perform()
```
and it did not work. | Yes, HTML5 "drag&drop" **is not currently supported** by Selenium:
* [Issue 3604: HTML5 Drag and Drop with Selenium Webdriver](https://github.com/seleniumhq/selenium-google-code-issue-archive/issues/3604)
One of the [suggested workarounds](http://elementalselenium.com/tips/39-drag-and-drop) is to *simulate HTML5 drag and drop* via JavaScript:
* download [`drag_and_drop_helper.js`](https://gist.github.com/rcorreia/2362544)
* execute the script via `execute_script()` calling `simulateDragDrop()` function on a `source` element passing the `target` element as a `dropTarget`
Sample code:
```
with open("drag_and_drop_helper.js") as f:
js = f.read()
driver.execute_script(js + "$('#one').simulateDragDrop({ dropTarget: '#bin'});")
```
The problem is that it won't work in your case "as is" since it **requires `jQuery`**.
---
Now we need to figure out **how to dynamically load jQuery**. Thankfully, [there is a solution](http://sqa.stackexchange.com/a/3453/5574).
Complete working example in Python:
```
from selenium import webdriver
jquery_url = "http://code.jquery.com/jquery-1.11.2.min.js"
driver = webdriver.Firefox()
driver.get("http://html5demos.com/drag")
driver.set_script_timeout(30)
# load jQuery helper
with open("jquery_load_helper.js") as f:
load_jquery_js = f.read()
# load drag and drop helper
with open("drag_and_drop_helper.js") as f:
drag_and_drop_js = f.read()
# load jQuery
driver.execute_async_script(load_jquery_js, jquery_url)
# perform drag&drop
driver.execute_script(drag_and_drop_js + "$('#one').simulateDragDrop({ dropTarget: '#bin'});")
```
where `jquery_load_helper.js` contains:
```
/** dynamically load jQuery */
(function(jqueryUrl, callback) {
if (typeof jqueryUrl != 'string') {
jqueryUrl = 'https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js';
}
if (typeof jQuery == 'undefined') {
var script = document.createElement('script');
var head = document.getElementsByTagName('head')[0];
var done = false;
script.onload = script.onreadystatechange = (function() {
if (!done && (!this.readyState || this.readyState == 'loaded'
|| this.readyState == 'complete')) {
done = true;
script.onload = script.onreadystatechange = null;
head.removeChild(script);
callback();
}
});
script.src = jqueryUrl;
head.appendChild(script);
}
else {
callback();
}
})(arguments[0], arguments[arguments.length - 1]);
```
Before/after result:

 |
How to apply piecewise linear fit in Python? | 29,382,903 | 9 | 2015-04-01T04:32:45Z | 29,384,899 | 11 | 2015-04-01T07:11:02Z | [
"python",
"numpy",
"scipy",
"curve-fitting",
"piecewise"
] | I am trying to fit piecewise linear fit as shown in fig.1 for a data set

This figure was obtained by setting on the lines. I attempted to apply a piecewise linear fit using the code:
```
from scipy import optimize
import matplotlib.pyplot as plt
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15])
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03])
def linear_fit(x, a, b):
return a * x + b
fit_a, fit_b = optimize.curve_fit(linear_fit, x[0:5], y[0:5])[0]
y_fit = fit_a * x[0:7] + fit_b
fit_a, fit_b = optimize.curve_fit(linear_fit, x[6:14], y[6:14])[0]
y_fit = np.append(y_fit, fit_a * x[6:14] + fit_b)
figure = plt.figure(figsize=(5.15, 5.15))
figure.clf()
plot = plt.subplot(111)
ax1 = plt.gca()
plot.plot(x, y, linestyle = '', linewidth = 0.25, markeredgecolor='none', marker = 'o', label = r'\textit{y_a}')
plot.plot(x, y_fit, linestyle = ':', linewidth = 0.25, markeredgecolor='none', marker = '', label = r'\textit{y_b}')
plot.set_ylabel('Y', labelpad = 6)
plot.set_xlabel('X', labelpad = 6)
figure.savefig('test.pdf', box_inches='tight')
plt.close()
```
But this gave me a fit of the form in fig. 2. I tried playing with the values, but nothing changed; I can't get the fit of the upper line right. The most important requirement for me is how I can get Python to find the gradient change point. In essence, ***I want Python to recognize and fit two linear fits in the appropriate ranges. How can this be done in Python?***
 | You can use `numpy.piecewise()` to create the piecewise function and then use `curve_fit()`, Here is the code
```
from scipy import optimize
import matplotlib.pyplot as plt
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15], dtype=float)
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03])
def piecewise_linear(x, x0, y0, k1, k2):
return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0])
p, e = optimize.curve_fit(piecewise_linear, x, y)
xd = np.linspace(0, 15, 100)
plt.plot(x, y, "o")
plt.plot(xd, piecewise_linear(xd, *p))
```
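The question also asks how to recover the gradient-change point: it is the fitted `x0`, i.e. the first entry of `p`. Below is a self-contained sketch along the same lines, with the plotting removed; the initial guess `p0` is an added assumption here (not in the original answer) to keep the optimizer well-behaved:

```python
import numpy as np
from scipy import optimize

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], dtype=float)
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59,
              84.47, 98.36, 112.25, 126.14, 140.03])

def piecewise_linear(x, x0, y0, k1, k2):
    # two lines with slopes k1 and k2 that meet at (x0, y0)
    return np.piecewise(x, [x < x0],
                        [lambda x: k1 * x + y0 - k1 * x0,
                         lambda x: k2 * x + y0 - k2 * x0])

p, e = optimize.curve_fit(piecewise_linear, x, y, p0=[5.0, 15.0, 2.0, 14.0])
x0, y0, k1, k2 = p
print(round(x0, 2))  # the breakpoint, near x = 6 for this data
```

The fitted slopes `k1` and `k2` come out close to 2 and 13.9 respectively, matching the two visible line segments.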
the output:
 |
How to get http headers in flask? | 29,386,995 | 28 | 2015-04-01T09:16:56Z | 29,387,151 | 42 | 2015-04-01T09:25:48Z | [
"python",
"http",
"flask",
"http-headers",
"authorization"
] | I am a newbie to Python, and I am using Flask to generate a REST API service.
I want to check the authorization header that is sent by AngularJS.
But I can't find a way to get the HTTP headers in Flask.
Any help for getting HTTP header authorization is appreciated. | ```
from flask import request
request.headers.get('your-header-name')
```
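In the authorization-checking scenario from the question, a hedged sketch of a view might look like this (the route name and error payload are illustrative, not from the original post):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)  # hypothetical app, for illustration only

@app.route('/api/resource')
def resource():
    auth = request.headers.get('Authorization')
    if auth is None:
        # reject requests that did not send the header
        return jsonify(error='missing Authorization header'), 401
    return jsonify(data='ok')
```

With Flask's test client, a request without the header gets a 401, and one carrying an `Authorization` header gets a 200.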
`request.headers` is a dictionary, so you can also get your header like you would any dictionary:
```
request.headers['your-header-name']
``` |
Does python iterate at a constant speed? | 29,389,858 | 3 | 2015-04-01T11:42:06Z | 29,389,945 | 10 | 2015-04-01T11:46:05Z | [
"python",
"iteration",
"gpio"
] | I am writing some code to get sensor readings from GPIO against time.
To make sure the measurements correspond to a specific time, I want to know whether Python iterates at a constant speed (so that the gap between iterations is constant), and what its minimum time gap between iterations is.
If it doesn't, can someone let me know how to make the time gap constant?
Thank you! | No, Python does not and can not iterate at constant speed.
Python is just another process on your Raspberry PI, and your OS is responsible for allocating it time to run on the CPU (called [multi-tasking](http://en.wikipedia.org/wiki/Computer_multitasking)). Other processes also get allotted time. This means Python is never going to be running all the time, and any processing times are going to depend on what the other processes are doing.
Iteration itself is also *delegated* to specific types; how the next item is produced then varies widely, and even if Python was given constant access to the CPU iteration would still vary. Whatever you do in your *loop body* also takes time, and unless the inputs and outputs are always exactly the same, will almost certainly take a variable amount of time to do the work.
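A small sketch illustrating this jitter: even a loop that only sleeps for a fixed interval never sees exactly constant gaps between iterations:

```python
import time

gaps = []
last = time.perf_counter()
for _ in range(5):
    time.sleep(0.01)              # stand-in for reading a sensor
    now = time.perf_counter()
    gaps.append(now - last)       # measured gap between iterations
    last = now

# each gap is close to 0.01 s, but the OS scheduler adds variable overhead
print(gaps)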
Instead of trying to time your loops, measure time with [`time.time()`](https://docs.python.org/2/library/time.html#time.time) or [`timeit.default_timer`](https://docs.python.org/2/library/timeit.html#timeit.default_timer) (depending on how precise you need to be, on your Raspberry it'll be the same function) in a loop and adjust your actions based on that. |
Manually trigger Django email error report | 29,392,281 | 20 | 2015-04-01T13:37:28Z | 29,878,519 | 15 | 2015-04-26T13:54:50Z | [
"python",
"django",
"exception-handling",
"django-email",
"django-errors"
] | [Django error reporting](https://docs.djangoproject.com/en/1.7/howto/error-reporting/) handles uncaught exceptions by sending an email, and (optionally) shows user a nice 500 error page.
This works very well, but in a few instances I'd like to allow users to continue with their business uninterrupted, but still have Django send me the email error report about the exception.
So basically: can I manually send an email error report even if I catch the exception?
Of course, I'd like to avoid manually generating the error report email. | You can use the following code to send manually an email about a `request` and an exception `e`:
```
import sys
import traceback
from django.core import mail
from django.views.debug import ExceptionReporter
def send_manually_exception_email(request, e):
exc_info = sys.exc_info()
reporter = ExceptionReporter(request, is_email=True, *exc_info)
subject = e.message.replace('\n', '\\n').replace('\r', '\\r')[:989]
message = "%s\n\n%s" % (
'\n'.join(traceback.format_exception(*exc_info)),
reporter.filter.get_request_repr(request)
)
mail.mail_admins(
subject, message, fail_silently=True,
html_message=reporter.get_traceback_html()
)
```
You can test it in a view like this:
```
def test_view(request):
try:
raise Exception
except Exception as e:
send_manually_exception_email(request, e)
``` |
SpooledTemporaryFile: units of maximum (in-memory) size? | 29,393,847 | 5 | 2015-04-01T14:44:45Z | 29,393,919 | 7 | 2015-04-01T14:47:58Z | [
"python",
"temporary-files"
] | The parameter `max_size` of `tempfile.SpooledTemporaryFile()` is the maximum size of the temporary file that can fit in memory (before it is spilled to disk). What are the units of this parameter (bytes? kilobytes?)? The documentation (both for [Python 2.7](https://docs.python.org/2/library/tempfile.html#tempfile.SpooledTemporaryFile) and [Python 3.4](https://docs.python.org/3.4/library/tempfile.html#tempfile.SpooledTemporaryFile)) does not indicate this. | The size is in bytes. From the [`SpooledTemporaryFile()` source code](https://hg.python.org/cpython/file/d444496e714a/Lib/tempfile.py#l505):
```
def _check(self, file):
if self._rolled: return
max_size = self._max_size
if max_size and file.tell() > max_size:
self.rollover()
```
and `file.tell()` gives a position in bytes.
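A quick sketch confirming the byte interpretation (`_rolled` is a private attribute, used here only for illustration):

```python
import tempfile

with tempfile.SpooledTemporaryFile(max_size=10) as f:
    f.write(b"12345")            # 5 bytes written, under the 10-byte limit
    print(f._rolled)             # False: still held in memory
    f.write(b"1234567890")       # position is now 15 bytes, over the limit
    print(f._rolled)             # True: spilled to an actual temp file
```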
I'd say that any use of the term `size` in connection with Python file objects that is *not* expressed in bytes warrants an explicit mention. All other file methods that deal in terms of `size` always work in bytes as well. |
Minimum of Numpy Array Ignoring Diagonal | 29,394,377 | 4 | 2015-04-01T15:07:19Z | 29,394,823 | 9 | 2015-04-01T15:26:35Z | [
"python",
"numpy"
] | I have to find the maximum value of a numpy array ignoring the diagonal elements.
np.amax() provides ways to find it ignoring specific axes. How can I achieve the same ignoring all the diagonal elements? | You could use a mask
```
mask = np.ones(a.shape, dtype=bool)
np.fill_diagonal(mask, 0)
max_value = a[mask].max()
```
where `a` is the matrix you want to find the max of. The mask selects the off-diagonal elements, so `a[mask]` will be a long vector of all the off-diagonal elements. Then you just take the max.
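For instance, a runnable sketch on a small matrix whose largest entries all sit on the diagonal:

```python
import numpy as np

a = np.array([[99.0, 2.0, 3.0],
              [4.0, 99.0, 6.0],
              [7.0, 8.0, 99.0]])

mask = np.ones(a.shape, dtype=bool)
np.fill_diagonal(mask, 0)
print(a[mask].max())  # 8.0, the largest off-diagonal value
```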
Or, if you don't mind modifying the original array
```
np.fill_diagonal(a, -np.inf)
max_value = a.max()
```
Of course, you can always make a copy and then do the above without modifying the original. Also, this is assuming that `a` is some floating point format. |
Checking for duplicate lists at certain indices in a lists of lists | 29,403,322 | 2 | 2015-04-02T00:51:52Z | 29,403,380 | 7 | 2015-04-02T01:00:21Z | [
"python",
"python-3.x"
] | Given a list of indices, how do I check if the lists at those indices in a list of lists are the same or not?
```
# Given:
# indices = [0, 2, 3]
# lsts = [['A', 'B'], ['1', '2', '3'], ['A', 'B'], ['B', 'C']]
# would test if ['A', 'B'] == ['A', 'B'] == ['B', 'C']
# would return False
# Given:
# indices = [0, 2]
# lsts = [['A', 'B'], ['1', '2', '3'], ['A', 'B'], ['B', 'C']]
# would test ['A', 'B'] == ['A', 'B']
# would return True
```
I currently have:
```
for i in range(len(lsts)):
for i in range(len(indices) - 1):
if lsts[indices[i]] != lsts[indices[i + 1]]:
return False
else:
return True
``` | This should do it:
```
>>> indices = [0, 2, 3]
>>> lsts = [['A', 'B'], ['1', '2', '3'], ['A', 'B'], ['B', 'C']]
>>> all(lsts[indices[0]] == lsts[i] for i in indices)
False
>>> indices = [0, 2]
>>> lsts = [['A', 'B'], ['1', '2', '3'], ['A', 'B'], ['B', 'C']]
>>> all(lsts[indices[0]] == lsts[i] for i in indices)
True
```
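Wrapped up as the function the question implies, a minimal sketch (the function name is made up here):

```python
def all_equal_at(indices, lsts):
    # True when every list at the given indices equals the first one
    return all(lsts[indices[0]] == lsts[i] for i in indices)

lsts = [['A', 'B'], ['1', '2', '3'], ['A', 'B'], ['B', 'C']]
print(all_equal_at([0, 2, 3], lsts))  # False
print(all_equal_at([0, 2], lsts))     # True
```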
By the way, thank you for providing clear examples of input and expected output. |
Python for loop and iterator behavior | 29,403,401 | 35 | 2015-04-02T01:03:16Z | 29,403,418 | 41 | 2015-04-02T01:06:06Z | [
"python",
"iterator"
] | I wanted to understand a bit more about `iterators`, so please correct me if I'm wrong.
An iterator is an object which has a pointer to the next object and is read as a buffer or stream (i.e. a linked list). They're particularly efficient because all they do is tell you what is next by reference instead of using indexing.
However I still don't understand why the following behavior happens:
```
In [1]: iter = (i for i in range(5))
In [2]: for _ in iter:
....: print _
....:
0
1
2
3
4
In [3]: for _ in iter:
....: print _
....:
In [4]:
```
After a first loop through the iterator (`In [2]`) it's as if it was consumed and left empty, so the second loop (`In [3]`) prints nothing.
However I never assigned a new value to the `iter` variable.
What is really happening under the hood of the `for` loop? | Your suspicion is correct: the iterator has been consumed.
In actuality, your iterator is a [generator](https://wiki.python.org/moin/Generators), which is an object which has the ability to be iterated through *only once.*
```
type((i for i in range(5))) # says it's type generator
def another_generator():
yield 1 # the yield expression makes it a generator, not a function
type(another_generator()) # also a generator
```
The reason they are efficient has nothing to do with telling you what is next "by reference." They are efficient because they only generate the next item upon request; all of the items are not generated at once. In fact, you can have an infinite generator:
```
def my_gen():
while True:
yield 1 # again: yield means it is a generator, not a function
for _ in my_gen(): print(_) # hit ctl+c to stop this infinite loop!
```
Some other corrections to help improve your understanding:
* The generator is not a pointer, and does not behave like a pointer as you might be familiar with in other languages.
* One of the differences from other languages: as said above, each result of the generator is generated on the fly. The next result is not produced until it is requested.
* The keyword combination `for` `in` accepts an iterable object as its second argument.
* The iterable object can be a generator, as in your example case, but it can also be any other iterable object, such as a `list`, or `dict`, or a `str` object (string), or a user-defined type that provides the required functionality.
* The [`iter` function](https://docs.python.org/3/library/functions.html#iter) is applied to the object to get an iterator (by the way: don't use `iter` as a variable name in Python, as you have done - it is one of the keywords). Actually, to be more precise, the object's [`__iter__` method](https://docs.python.org/3/reference/datamodel.html#object.__iter__) is called (which is, for the most part, all the `iter` function does anyway; `__iter__` is one of Python's so-called "magic methods").
* If the call to `__iter__` is successful, the function [`next()`](https://docs.python.org/3/library/functions.html#next) is applied to the iterable object over and over again, in a loop, and the first variable supplied to `for` `in` is assigned to the result of the `next()` function. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's [`__next__`](https://docs.python.org/3/library/stdtypes.html#iterator.__next__) method, which is another "magic method".
* The `for` loop ends when `next()` raises the [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration) exception (which usually happens when the iterable does not have another object to yield when `next()` is called).
You can "manually" implement a `for` loop in python this way (probably not perfect, but close enough):
```
try:
temp = iterable.__iter__()
except AttributeError:
raise TypeError("'{}' object is not iterable".format(type(iterable).__name__))
else:
while True:
try:
_ = temp.__next__()
except StopIteration:
break
except AttributeError:
raise TypeError("iter() returned non-iterator of type '{}'".format(type(temp).__name__))
# this is the "body" of the for loop
continue
```
There is pretty much no difference between the above and your example code.
Actually, the more interesting part of a `for` loop is not the `for`, but the `in`. Using `in` by itself produces a different effect than `for` `in`, but it is very useful to understand what `in` does with its arguments, since `for` `in` implements very similar behavior.
* When used by itself, the `in` keyword first calls the object's [`__contains__` method](https://docs.python.org/3/reference/datamodel.html#object.__contains__), which is yet another "magic method" (note that this step is skipped when using `for` `in`). Using `in` by itself on a container, you can do things like this:
```
1 in [1, 2, 3] # True
'He' in 'Hello' # True
3 in range(10) # True
'eH' in 'Hello'[::-1] # True
```
* If the iterable object is NOT a container (i.e. it doesn't have a `__contains__` method), `in` next tries to call the object's `__iter__` method. As was said previously: the `__iter__` method returns what is known in Python as an [iterator](https://docs.python.org/3/library/stdtypes.html#iterator-types). Basically, an iterator is an object that you can use the built-in generic function [`next()`](https://docs.python.org/3/library/functions.html#next) on¹. A generator is just one type of iterator.
* If the call to `__iter__` is successful, the `in` keyword applies the function [`next()`](https://docs.python.org/3/library/functions.html#next) to the iterable object over and over again. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's [`__next__`](https://docs.python.org/3/library/stdtypes.html#iterator.__next__) method).
* If the object doesn't have a `__iter__` method to return an iterator, `in` then falls back on the old-style iteration protocol using the object's `__getitem__` method².
* If all of the above attempts fail, you'll get a [`TypeError` exception](https://docs.python.org/3/library/exceptions.html#TypeError).
If you wish to create your own object type to iterate over (i.e, you can use `for` `in`, or just `in`, on it), it's useful to know about the `yield` keyword, which is used in [generators](https://docs.python.org/3/glossary.html#term-generator) (as mentioned above).
```
class MyIterable():
def __iter__(self):
yield 1
m = MyIterable()
for _ in m: print(_) # 1
1 in m # True
```
The presence of `yield` turns a function or method into a generator instead of a regular function/method. You don't need the `__next__` method if you use a generator (it brings `__next__` along with it automatically).
If you wish to create your own container object type (i.e, you can use `in` on it by itself, but NOT `for` `in`), you just need the `__contains__` method.
```
class MyUselessContainer():
def __contains__(self, obj):
return True
m = MyUselessContainer()
1 in m # True
'Foo' in m # True
TypeError in m # True
None in m # True
```
---
¹ Note that, to be an iterator, an object must implement [the iterator protocol](https://docs.python.org/3/library/stdtypes.html#iterator-types). This only means that both the `__next__` and `__iter__` methods must be *correctly* implemented (generators come with this functionality "for free", so you don't need to worry about it when using them). Also note that the `__next__` method [is actually `next` (no underscores) in Python 2](https://docs.python.org/2/library/stdtypes.html#iterator.next).
² See [this answer](http://stackoverflow.com/a/7542261/208880) for the different ways to create iterable classes. |
Python for loop and iterator behavior | 29,403,401 | 35 | 2015-04-02T01:03:16Z | 29,403,437 | 17 | 2015-04-02T01:09:51Z | [
"python",
"iterator"
] | I wanted to understand a bit more about `iterators`, so please correct me if I'm wrong.
An iterator is an object which has a pointer to the next object and is read as a buffer or stream (i.e. a linked list). They're particularly efficient because all they do is tell you what is next by reference instead of using indexing.
However I still don't understand why the following behavior happens:
```
In [1]: iter = (i for i in range(5))
In [2]: for _ in iter:
....: print _
....:
0
1
2
3
4
In [3]: for _ in iter:
....: print _
....:
In [4]:
```
After a first loop through the iterator (`In [2]`) it's as if it was consumed and left empty, so the second loop (`In [3]`) prints nothing.
However I never assigned a new value to the `iter` variable.
What is really happening under the hood of the `for` loop? | A `for` loop basically calls the `next` method of the object it is applied to (`__next__` in Python 3).
You can simulate this simply by doing:
```
iter = (i for i in range(5))
print(next(iter))
print(next(iter))
print(next(iter))
print(next(iter))
print(next(iter))
# this prints 0 1 2 3 4
```
At this point there is no next element in the input object. So doing this:
```
print(next(iter))
```
Will result in a `StopIteration` exception being thrown. At this point `for` will stop. An iterator can be [any object](http://stackoverflow.com/a/5262255/248823) that responds to the `next()` function and throws that exception when there are no more elements. It does not have to be a pointer or reference (there are no such things in Python in the C/C++ sense anyway), a linked list, etc. |
Can't convert len (x) into a usable int for string slicing? | 29,403,827 | 2 | 2015-04-02T02:01:40Z | 29,403,870 | 12 | 2015-04-02T02:07:04Z | [
"python"
] | I'm trying to write a function that takes a string and prints it normally, and then in reverse, like so:
```
string = "hello"
mirror(string)
'helloolleh'
```
This is the code i have so far:
```
def mirror(x) :
sentence = " "
length = len(x)
lengthstring = str(len(x))
lengthint = int(lengthstring)
sentence = x[lengthint, 0]
print x + sentence
```
but it keeps saying that `len(x)` is a tuple and not an int, and that it can't be part of the string slice? | The error I get with your code is:
```
File "<stdin>", line 6, in mirror
TypeError: string indices must be integers, not tuple
```
which says nothing about `len(x)`. In fact, it is referring to the line
```
sentence = x[lengthint, 0]
```
in which you are trying to index `x` using `lengthint, 0`. Python assumes that your use of the comma indicates you want to index `x` using the tuple `(lengthint, 0)`, which is not valid.
It seems that you are trying to use string slice syntax to reverse the string. String slice syntax uses `:`, not `,`, like this:
```
sentence = x[lengthint:0]
```
But, since `lengthint` is greater than or equal to zero, that will never produce anything other than an empty result. You need to *also* tell Python to index backwards, like this:
```
sentence = x[lengthint:0:-1]
```
This almost works. It omits the first character of your original string in the reversed copy, because the stop index `0` is exclusive (the slice stops *before* that position). So you have to use the special value `None`:
```
sentence = x[lengthint:None:-1]
```
After you do that and verify that it works, you can remove the use of `lengthstring` and `lengthint` (which I presume were added in an attempt to avoid the original error):
```
sentence = x[len(x):None:-1]
```
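Folding this back into the original function gives a working sketch:

```python
def mirror(x):
    # the slice reverses the string; append it to the original
    sentence = x[len(x):None:-1]
    print(x + sentence)

mirror("hello")  # prints helloolleh
```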
Finally, Python lets you omit the first two slice parameters in this case, leaving just:
```
sentence = x[::-1]
``` |
Python exponentiation order of operations and grouping | 29,404,604 | 3 | 2015-04-02T03:41:04Z | 29,404,645 | 10 | 2015-04-02T03:46:05Z | [
"python",
"time",
"operators",
"exponentiation"
] | Simple question:
Why does `(7**3) ** 24 % 25` take almost no time to run, but `7 ** 3 ** 24 % 25` not terminate? | Exponentiation groups [from right to left](https://docs.python.org/2/reference/expressions.html#operator-precedence).
So, `7 ** 3 ** 24` is evaluated as `7 ** 282429536481` (hard), whereas `(7**3) ** 24` is just `343 ** 24` (easy).
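As an aside, if the slow expression is really what is wanted modulo 25, the three-argument form of `pow()` performs modular exponentiation and returns instantly even for the huge exponent:

```python
# modular exponentiation never materializes the astronomically large power
print(pow(7, 3 ** 24, 25))  # 7
```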
---
As an interesting sidenote: CPython, which has a peephole optimiser, is able to optimise away the "easy" case with constant folding. But the "hard" case only gets as far as folding the `3 ** 24`.
```
>>> def foo():
return 7 ** 3 ** 24 % 25
...
>>> def bar():
return (7**3) ** 24 % 25
...
>>> dis.dis(foo)
2 0 LOAD_CONST 1 (7)
3 LOAD_CONST 5 (282429536481)
6 BINARY_POWER
7 LOAD_CONST 4 (25)
10 BINARY_MODULO
11 RETURN_VALUE
>>> dis.dis(bar)
2 0 LOAD_CONST 7 (1L)
3 RETURN_VALUE
``` |
How can I verify when a copy is made in Python? | 29,411,707 | 3 | 2015-04-02T11:22:46Z | 29,411,798 | 8 | 2015-04-02T11:27:15Z | [
"python",
"arrays",
"python-3.x",
"numpy"
] | In Python 3.x, I'm working with large numpy arrays. I would like to have confirmation (without having to actually do some kind of experiment) that methods I have written are either working with a copy of the array OR are working with a direct reference to the array.
I would like confirmation also that the array in question has or has not been modified.
In C++, if I wanted to ensure that an object was NOT modified, I could pass it as a const... in Python, I seem to have no such assurances and have to be extra careful.
So, to sum up: I need a way to tell whether a copy has or has not been produced of a numpy array. I need a way to tell if an array (or any object for that matter) has been modified. I would prefer a fast, automatic way rather than having to do an experiment. | You can use [`np.ndarray.flags`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flags.html):
```
>>> a = np.arange(5)
>>> a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
```
For example, you can make an array non-writeable by using [`np.setflags`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.setflags.html#numpy.ndarray.setflags); in that case an attempt to modify the array will fail:
```
>>> a.setflags(write=False) # sets the WRITEABLE flag to False
>>> a[2] = 10 # the modification will fail
ValueError: assignment destination is read-only
```
Another useful flag is the `OWNDATA`, which for example can indicate that the array is in fact a view on another array, so does not own its data:
```
>>> a = np.arange(5)
>>> b = a[::2]
>>> a.flags['OWNDATA']
True
>>> b.flags['OWNDATA']
False
``` |
Pandas cumulative sum on column with condition | 29,421,356 | 4 | 2015-04-02T19:54:18Z | 29,421,580 | 8 | 2015-04-02T20:08:17Z | [
"python",
"pandas",
"dataframe"
] | I didn't find an answer elsewhere, so I need to ask; probably because I don't know how to name it correctly. (English is not my native language.)
I have a large datetime data frame. Time is important here. One column in the df has the values [NaN, 1, -1]. I need to perform a quick calculation to get a cumulative sum that resets when the value changes.
Example.
```
Time sign desire_value
2014-01-24 05:00:00 Nan Nan
2014-01-24 06:00:00 Nan Nan
2014-01-24 07:00:00 Nan Nan
2014-01-24 08:00:00 1 1
2014-01-24 09:00:00 1 2
2014-01-24 10:00:00 1 3
2014-01-24 11:00:00 -1 1
2014-01-24 12:00:00 -1 2
2014-01-24 13:00:00 -1 3
2014-01-24 14:00:00 -1 4
2014-01-24 15:00:00 -1 5
2014-01-24 16:00:00 1 1
2014-01-24 17:00:00 1 2
2014-01-24 18:00:00 1 3
2014-01-24 19:00:00 -1 1
2014-01-24 20:00:00 -1 2
2014-01-24 21:00:00 1 1
2014-01-24 22:00:00 1 2
```
I have working solution using function, but it is not very efficient.
```
df['sign_1'] = df['sign'].shift(1)
for index, row in df.iterrows():
if row.sign is None:
df.loc[index, 'desire_value'] = None
elif row.sign == row.sign_1:
acc += 1
df.loc[index, 'desire_value'] = acc
else:
acc = 1
df.loc[index, 'desire_value'] = acc
```
I cannot find any array-based approach. I found that the best way to iterate efficiently in Python is using Cython, but is there a more "Pythonic" way to solve this? | See the last section [here](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#grouping).
This is an `itertools`-style groupby (note: the question's `sign` column is called `value` in the example below):
```
In [86]: v = df['value'].dropna()
```
The grouper changes value at each group breakpoint; taking the `cumsum` of the comparison gives each run of equal values its own group label
```
In [87]: grouper = (v!=v.shift()).cumsum()
In [88]: grouper
Out[88]:
3 1
4 1
5 1
6 2
7 2
8 2
9 2
10 2
11 3
12 3
13 3
14 4
15 4
16 5
17 5
Name: value, dtype: int64
```
Then just a simple cumsum
```
In [89]: df.groupby(grouper)['value'].cumsum()
Out[89]:
0 NaN
1 NaN
2 NaN
3 1
4 2
5 3
6 -1
7 -2
8 -3
9 -4
10 -5
11 1
12 2
13 3
14 -1
15 -2
16 1
17 2
dtype: float64
```
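Condensed into a self-contained sketch (NaN rows simply stay NaN through the grouped cumsum, so dropping them first is optional here):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan, 1, 1, 1, -1, -1, 1])
grouper = (s != s.shift()).cumsum()   # new group each time the value changes
out = s.groupby(grouper).cumsum()
print(out.tolist())  # [nan, nan, 1.0, 2.0, 3.0, -1.0, -2.0, 1.0]
```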
You can certainly `.abs()` the above if you do in fact want the absolute values. |
How to count the number of occurences of `None` in a list? | 29,422,691 | 4 | 2015-04-02T21:21:18Z | 29,422,718 | 9 | 2015-04-02T21:23:22Z | [
"python",
"boolean",
"nonetype"
] | I'm trying to count the things that are not `None`, but I want `False` and numeric zeros to be counted too. Reversed logic: I want to count everything except what has been explicitly set to `None`.
# Example
Just the 5th element it's not included in the count:
```
>>> list = ['hey', 'what', 0, False, None, 14]
>>> print(magic_count(list))
5
```
I know this isn't Python's normal behavior, but how can I override it?
## What I've tried
So far I have found people suggesting `a if a is not None else "too bad"`, but it does not work.
I've also tried `isinstance`, but with no luck. | Just use `sum` checking if each object `is not None` which will be `True` or `False` so 1 or 0.
```
lst = ['hey','what',0,False,None,14]
print(sum(x is not None for x in lst))
```
Or using `filter` (under Python 3, `filter` returns a lazy iterator, so materialize it with `list` before taking the `len`):
```
print(len(list(filter(lambda x: x is not None, lst))))
```
The advantage of `sum` is it lazily evaluates an element at a time instead of creating a full list of values.
On a side note, avoid using `list` as a variable name, as it shadows the built-in `list` type. |
Django setUpTestData() vs. setUp() | 29,428,894 | 10 | 2015-04-03T08:37:18Z | 29,442,551 | 9 | 2015-04-04T03:56:28Z | [
"python",
"django",
"unit-testing"
] | Django 1.8 shipped with [a refactored TestCase](https://docs.djangoproject.com/en/1.8/releases/1.8/#testcase-data-setup) which allows for data initialization at the class level using transactions and savepoints via the [setUpTestData()](https://docs.djangoproject.com/en/1.8/topics/testing/tools/#django.test.TestCase.setUpTestData) method. This is in contrast to unittest's [setUp()](https://docs.python.org/2/library/unittest.html#unittest.TestCase.setUp) which runs before every single test method.
Question: **What is the use case for `setUp()` in Django now that `setUpTestData()` exists?**
I'm looking for objective, high-level answers only, as otherwise this question would be too broad for Stack Overflow. | It's not uncommon for there to be set-up code that can't run as a class method. One notable example is the Django [test client](https://docs.djangoproject.com/en/1.8/topics/testing/tools/#the-test-client): you might not want to reuse the same client instance across tests that otherwise share much of the same data, and indeed, the client instances automatically included in subclasses of Django's `SimpleTestCase` are [created per test method](https://github.com/django/django/blob/1.8/django/test/testcases.py#L201) rather than for the entire class. Suppose you had a test from the pre-Django 1.8 world with a `setUp` method like this:
```
def setUp(self):
self.the_user = f.UserFactory.create()
self.the_post = f.PostFactory.create(author=self.the_user)
self.client.login(
username=self.the_user.username, password=TEST_PASSWORD
)
# ... &c.
```
You might be tempted to modernize the test case by changing `setUp` to `setUpTestData`, slapping a `@classmethod` decorator on top, and changing all the `self`s to `cls`. But that will fail with an `AttributeError: type object 'MyTestCase' has no attribute 'client'`! Instead, you should use `setUpTestData` for the shared data and `setUp` for the per-test-method client:
```
@classmethod
def setUpTestData(cls):
cls.the_user = f.UserFactory.create()
cls.the_post = f.PostFactory.create(author=cls.the_user)
# ... &c.
def setUp(self):
self.client.login(
username=self.the_user.username, password=TEST_PASSWORD
)
``` |
Can't find the right energy using scipy.signal.welch | 29,429,733 | 3 | 2015-04-03T09:41:15Z | 33,251,324 | 8 | 2015-10-21T05:04:04Z | [
"python",
"numpy",
"signal-processing",
"fft",
"discrete-mathematics"
] | For a given discret time signal `x(t)` with spacing `dt` (which is equal to `1/fs`, `fs` being the sample rate), the energy is:
```
E[x(t)] = sum(abs(x)**2.0)/fs
```
Then I do a DFT of `x(t)`:
```
x_tf = np.fft.fftshift( np.fft.fft( x ) ) / ( fs * ( 2.0 * np.pi ) ** 0.5 )
```
and compute the energy again:
```
E[x_tf] = sum( abs( x_tf ) ** 2.0 ) * fs * 2 * np.pi / N
```
(here the factor `fs*2*np.pi/N` = pulsation spacing `dk`, the documentation of `fftfreq` gives more details about spacing in frequency domain), I have the same energy:
```
E[x(t)] = E[x_tf]
```
BUT... when I compute the power spectral density of `x(t)` using `scipy.signal.welch`, I can't find the right energy. `scipy.signal.welch` returns the vector of frequencies `f` and the energy `Pxx` (or the energy per frequency, depending on which `scaling` we pass in the arguments of `scipy.signal.welch`).
How can I find the same energy as `E[x(t)]` or `E[x_tf]` using `Pxx`? I tried to compute:
```
E_psd = sum(Pxx_den) / nperseg
```
where `nperseg` is the length of each segment in Welch's algorithm, with factors like `fs` and `np.sqrt(2*np.pi)` cancelling out, and I tried rescaling `E[x(t)]` with `nperseg`, but without any success (the result is orders of magnitude smaller than `E[x(t)]`).
I used the following code to generate my signal:
```
#Generate a test signal, a 2 Vrms sine wave at 1234 Hz, corrupted by 0.001 V**2/Hz of white noise sampled at 10 kHz.
fs = 10e3 #sampling rate, dt = 1/fs
N = 1e5
amp = 2*np.sqrt(2)
freq = 1234.0
noise_power = 0.001 * fs / 2
time = np.arange(N) / fs
x = amp*np.sin(2*np.pi*freq*time)
x += np.random.normal(scale=np.sqrt(noise_power), size=time.shape)
```
and I did the following to get the power spectral density:
```
f, Pxx_den = signal.welch(x, fs )
``` | The resolution to this apparent discrepancy lies in a careful understanding and application of
* continuous vs. discrete Fourier transforms, and
* energy, power, and power spectral density of a given signal
I too have struggled with this exact question, so I will try to be as explicit as possible in the discussion below.
## Discrete Fourier transform (DFT)
A *continuous* signal x(t) satisfying certain integrability conditions has a Fourier transform X(f). When working with a *discrete* signal x[n], however, it is often conventional to work with the discrete-time Fourier transform (DTFT). I will denote the DTFT as X\_{dt}(f), where `dt` equals the time interval between adjacent samples. The key to answering your question requires that you recognize that the DTFT is *not* equal to the corresponding Fourier transform! In fact, the two are related as
X\_{dt}(f) = (1 / dt) \* X(f)
Further, the discrete Fourier transform (DFT) is simply a *discrete* sample of the DTFT. The DFT, of course, is what Python returns when using `np.fft.fft(...)`. Thus, your computed DFT is *not* equal to the Fourier transform!
## Power spectral density
`scipy.signal.welch(..., scaling='density', ...)` returns an estimate of the [power spectral density (PSD)](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density) of discrete signal x[n]. A full discussion of the PSD is a bit beyond the scope of this post, but for a simple periodic signal (such as that in your example), the PSD S\_{xx}(f) is given as
S\_{xx} = |X(f)|^2 / T
where |X(f)| is the Fourier transform of the signal and T is the total duration (in time) of the signal (if your signal x(t) were instead a random process, we'd have to take an ensemble average over many realizations of the system...). The total power in the signal is simply the integral of S\_{xx} over the system's frequency bandwidth. Using your code above, we can write
```
import scipy.signal
# Estimate PSD `S_xx_welch` at discrete frequencies `f_welch`
f_welch, S_xx_welch = scipy.signal.welch(x, fs=fs)
# Integrate PSD over spectral bandwidth
# to obtain signal power `P_welch`
df_welch = f_welch[1] - f_welch[0]
P_welch = np.sum(S_xx_welch) * df_welch
```
To make contact with your `np.fft.fft(...)` computations (which return the DFT), we must use the information from the previous section, namely that
X[k] = X\_{dt}(f\_k) = (1 / dt) \* X(f\_k)
Thus, to compute the power spectral density (or total power) from the FFT computations, we need to recognize that
S\_{xx} = |X[k]|^2 \* (dt ^ 2) / T
```
# Compute DFT
Xk = np.fft.fft(x)
# Compute corresponding frequencies
dt = time[1] - time[0]
f_fft = np.fft.fftfreq(len(x), d=dt)
# Estimate PSD `S_xx_fft` at discrete frequencies `f_fft`
T = time[-1] - time[0]
S_xx_fft = ((np.abs(Xk) * dt) ** 2) / T
# Integrate PSD over spectral bandwidth to obtain signal power `P_fft`
df_fft = f_fft[1] - f_fft[0]
P_fft = np.sum(S_xx_fft) * df_fft
```
Your values for `P_welch` and `P_fft` should be very close to each other, as well as close to the *expected* power in the signal, which can be computed as
```
# Power in sinusoidal signal is simply squared RMS, and
# the RMS of a sinusoid is the amplitude divided by sqrt(2).
# Thus, the sinusoidal contribution to expected power is
P_exp = (amp / np.sqrt(2)) ** 2
# For white noise, as is considered in this example,
# the noise is simply the noise PSD (a constant)
# times the system bandwidth. This was already
# computed in the problem statement and is given
# as `noise_power`. Simply add to `P_exp` to get
# total expected signal power.
P_exp += noise_power
```
**Note:** `P_welch` and `P_fft` will not be *exactly* equal, and likely not even equal within numerical accuracy. This is attributable to the fact that there are random errors associated with the estimation of the power spectral density. In an effort to reduce such errors, Welch's method splits your signal into several segments (the size of which is controlled by the `nperseg` keyword), computes the PSD of each segment, and averages the PSDs to obtain a better estimate of the signal's PSD (the more segments averaged over, the less the resulting random error). The FFT method, in effect, is equivalent to only computing and averaging over one, large segment. Thus, we expect some differences between `P_welch` and `P_fft`, but we should expect that `P_welch` is more accurate.
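The variance-reduction effect of segment averaging can be seen with plain NumPy. The toy sketch below is not `scipy.signal.welch` itself (no windowing or overlap), just an average of independent white-noise periodograms:

```python
import numpy as np

# K independent white-noise segments; compare one periodogram against
# the average of all K (the core idea behind Welch's method).
rng = np.random.default_rng(0)
N, K = 256, 64
segs = rng.standard_normal((K, N))
psd = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / N   # per-segment periodograms
single = psd[0, 1:-1]                 # drop DC/Nyquist bins (different stats)
averaged = psd[:, 1:-1].mean(axis=0)

# Relative scatter across frequency bins shrinks roughly like 1 / sqrt(K).
print(single.std() / single.mean())      # roughly 1
print(averaged.std() / averaged.mean())  # roughly 1/8
```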
## Signal Energy
As you stated, the signal energy can be obtained from the discrete version of Parseval's theorem
```
# Energy obtained via "integrating" over time
E = np.sum(x ** 2)
# Energy obtained via "integrating" DFT components over frequency.
# The fact that `E` = `E_fft` is the statement of
# the discrete version of Parseval's theorem.
N = len(x)
E_fft = np.sum(np.abs(Xk) ** 2) / N
```
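If you want to convince yourself numerically, a tiny self-contained check of this discrete Parseval relation (on an arbitrary random signal) is:

```python
import numpy as np

# Parseval check on an arbitrary random signal
rng = np.random.default_rng(1)
x = rng.standard_normal(128)
Xk = np.fft.fft(x)
E = np.sum(x ** 2)
E_fft = np.sum(np.abs(Xk) ** 2) / len(x)
print(E, E_fft)   # equal up to floating-point round-off
```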
We'd now like to understand how `S_xx_welch`, computed above via `scipy.signal.welch(...)`, relates to the total energy `E` in the signal.
From above, we know that `np.sum(S_xx_fft) = P_fft / df_fft` and that `P_fft` and `P_welch` are approximately equal. Further, `P_welch = np.sum(S_xx_welch) * df_welch`, so that we obtain
`np.sum(S_xx_fft) = (df_welch / df_fft) * np.sum(S_xx_welch)`
Further, `S_xx_fft = ((np.abs(Xk) * dt) ** 2) / T`. Substituting `S_xx_fft` into the equation above and rearranging terms, we arrive at
`np.sum(np.abs(Xk) ** 2) = (T / (dt ** 2)) * (df_welch / df_fft) * np.sum(S_xx_welch)`
The left-hand side (LHS) in the above equation should now look very close to the expression for the total energy in the signal as computed from the DFT components. Now, note that `T / dt = N`, where `N` is the number of sample points in your signal. Dividing through by `N`, we now have a LHS that is, by definition, equal to the `E_fft` computed above. Thus, we can obtain the total energy in the signal from Welch's PSD via
```
# Signal energy from Welch's PSD
E_welch = (1. / dt) * (df_welch / df_fft) * np.sum(S_xx_welch)
```
`E`, `E_fft`, and `E_welch` should all be very close in value :) As discussed at the end of the preceding section, we do expect some slight differences between `E_welch` and both `E` and `E_fft`, but this is attributable to the fact that values derived from Welch's method have reduced random error (i.e. are more accurate). |
Correlation matrix using pandas | 29,432,629 | 13 | 2015-04-03T12:57:22Z | 29,432,741 | 27 | 2015-04-03T13:04:18Z | [
"python",
"pandas",
"matplotlib",
"data-visualization",
"information-visualization"
] | I have a data set with huge number of features, so analysing the correlation matrix has become very difficult. I want to plot a correlation matrix which we get using dataframe.corr() function from pandas library. Is there any inbuilt function provided by pandas library to plot this matrix? | You can use [`matshow()`](http://matplotlib.org/examples/pylab_examples/matshow.html) from matplotlib:
`plt.matshow(dataframe.corr())` |
Correlation matrix using pandas | 29,432,629 | 13 | 2015-04-03T12:57:22Z | 31,384,328 | 19 | 2015-07-13T13:10:12Z | [
"python",
"pandas",
"matplotlib",
"data-visualization",
"information-visualization"
] | I have a data set with huge number of features, so analysing the correlation matrix has become very difficult. I want to plot a correlation matrix which we get using dataframe.corr() function from pandas library. Is there any inbuilt function provided by pandas library to plot this matrix? | Try this function, which also displays variable names for the correlation matrix:
```
import matplotlib.pyplot as plt

def plot_corr(df, size=10):
'''Function plots a graphical correlation matrix for each pair of columns in the dataframe.
Input:
df: pandas DataFrame
size: vertical and horizontal size of the plot'''
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr)
plt.xticks(range(len(corr.columns)), corr.columns);
plt.yticks(range(len(corr.columns)), corr.columns);
``` |
Convert a csv.DictReader object to a list of dictionaries? | 29,432,912 | 4 | 2015-04-03T13:13:50Z | 29,432,995 | 7 | 2015-04-03T13:19:19Z | [
"python",
"csv",
"dictionary"
] | A csv file `names.csv` has content:
```
first_name last_name
Baked Beans
Lovely Spam
Wonderful Spam
```
I would like to read it into a list of dictionaries, with the first row containing the keys:
```
>>> import csv
>>> with open('names.csv') as csvfile:
... reader = csv.DictReader(csvfile)
... for row in reader:
... print(row['first_name'], row['last_name'])
...
Baked Beans
Lovely Spam
Wonderful Spam
```
But is the type of `reader` `csv.DictReader`?
How can I convert `reader` into a list of dictionaries?
Thanks. | ```
import csv
with open("in.csv") as csvfile:
reader = csv.DictReader(csvfile,delimiter=" ")
print(list(reader))
[{'first_name': 'Baked', 'last_name': 'Beans'}, {'first_name': 'Lovely', 'last_name': 'Spam'}, {'first_name': 'Wonderful', 'last_name': 'Spam'}]
```
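For a fully self-contained variant (using an in-memory file instead of `in.csv`), the same call can be exercised like this:

```python
import csv
import io

# In-memory, space-delimited stand-in for names.csv
data = "first_name last_name\nBaked Beans\nLovely Spam\nWonderful Spam\n"
reader = csv.DictReader(io.StringIO(data), delimiter=" ")
rows = list(reader)   # materializes the reader into a list of dicts
print(rows[0]["first_name"], rows[0]["last_name"])
```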
If the delimiter is not actually a comma, you need to specify it explicitly, e.g. `delimiter=" "` for space-separated data like this. |
Token in query string with Django REST Framework's TokenAuthentication | 29,433,416 | 2 | 2015-04-03T13:46:23Z | 29,435,607 | 7 | 2015-04-03T16:00:58Z | [
"python",
"django",
"authentication",
"django-rest-framework"
] | In an API built with [Django REST Framework](http://www.django-rest-framework.org/) authentication can be done using the TokenAuthentication method. Its [documentation](http://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication) says the authentication token should be sent via an `Authorization` header.
Often one can send API-keys or tokens via a query string in order to authenticate, like `https://domain.com/v1/resource?api-key=lala`.
Is there a way to do the same with Django REST Framework's TokenAuthentication? | By default, DRF doesn't support query string authentication, but you can easily override the `authenticate` method of the `TokenAuthentication` class to support it.
An example would be:
```
class TokenAuthSupportQueryString(TokenAuthentication):
"""
Extend the TokenAuthentication class to support querystring authentication
in the form of "http://www.example.com/?auth_token=<token_key>"
"""
def authenticate(self, request):
# Check if 'token_auth' is in the request query params.
# Give precedence to 'Authorization' header.
if 'auth_token' in request.QUERY_PARAMS and \
'HTTP_AUTHORIZATION' not in request.META:
return self.authenticate_credentials(request.QUERY_PARAMS.get('auth_token'))
else:
return super(TokenAuthSupportQueryString, self).authenticate(request)
``` |
How to map a series of conditions as keys in a dictionary? | 29,433,635 | 12 | 2015-04-03T14:01:01Z | 29,433,705 | 15 | 2015-04-03T14:05:31Z | [
"python",
"python-2.7",
"dictionary",
"lambda"
] | I know you can use a dictionary as an alternative to a switch statement such as the following:
```
def printMessage(mystring):
# Switch statement without a dictionary
if mystring == "helloworld":
print "say hello"
elif mystring == "byeworld":
print "say bye"
elif mystring == "goodafternoonworld":
print "good afternoon"
def printMessage(mystring):
# Dictionary equivalent of a switch statement
myDictionary = {"helloworld": "say hello",
"byeworld": "say bye",
"goodafternoonworld": "good afternoon"}
print myDictionary[mystring]
```
However if conditions are used, other than equality (==) which return true of false these cant be mapped as easily i.e.:
```
if i > 0.5:
print "greater than 0.5"
elif i == 5:
print "it is equal to 5"
elif i > 5 and i < 6:
print "somewhere between 5 and 6"
```
The above cannot be directly converted to a dictionary key-value pair as is:
```
# this does not work
mydictionary = { i > 0.5: "greater than 0.5" }
```
A lambda can be used since it is hash-able but the only way to get the resulting string out of the map is by passing the same lambda object into the dictionary and not when the evaluation of the lambda is true:
```
x = lambda i: i > 0.5
mydictionary[x] = "greater than 0.5"
# you can get the string by doing this:
mydictionary[x]
# which doesnt result in the evaluation of x
# however a lambda is a hashable item in a dictionary
mydictionary = {lambda i: i > 0.5: "greater than 0.5"}
```
Does anyone know of a technique or method to create a mapping between a lambda evaluation and a return value?
(this maybe similar to pattern matching in functional language) | Your conditions are sequential in nature; you want to test one after the other, not map a small number of keys to a value here. Changing the order of the conditions could alter the outcome; a value of `5` results in `"greater than 0.5"` in your sample, not `"it is equal to 5"`.
Use a list of tuples:
```
myconditions = [
(lambda i: i > 0.5, "greater than 0.5"),
(lambda i: i == 5, "it is equal to 5"),
(lambda i: i > 5 and i < 6, "somewhere between 5 and 6"),
]
```
after which you can access each one in turn until one matches:
```
for test, message in myconditions:
if test(i):
return message
```
Re-ordering the tests will change the outcome.
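Putting this together, a minimal runnable sketch (with a hypothetical `describe` helper and a default for when nothing matches) could look like:

```python
myconditions = [
    (lambda i: i > 0.5, "greater than 0.5"),
    (lambda i: i == 5, "it is equal to 5"),
    (lambda i: 5 < i < 6, "somewhere between 5 and 6"),
]

def describe(i, conditions=myconditions, default="no match"):
    # The first condition that i satisfies wins, so order matters.
    for test, message in conditions:
        if test(i):
            return message
    return default

print(describe(5))    # "greater than 0.5" -- the i == 5 test is shadowed
print(describe(0.1))  # "no match"
```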
A dictionary works for your first example because there is a simple *equality test* against multiple static values that is optimised by a dictionary, but there are no such simple equalities available here. |
Unable to "import matplotlib.pyplot as plt" in virtualenv | 29,433,824 | 13 | 2015-04-03T14:11:37Z | 33,447,513 | 9 | 2015-10-31T02:39:29Z | [
"python",
"osx",
"matplotlib",
"flask",
"virtualenv"
] | I am working with flask in a virtual environment. I was able to install matplotlib with pip, and I can `import matplotlib` in a Python session. However, when I import it as
```
matplotlib.pyplot as plt
```
I get the following error:
```
>>> import matplotlib.pyplot as plt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/pyplot.py", line 109, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 24, in <module>
from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends.
```
I am confused about why it asks me to install Python as a framework. Doesn't it already exist? What does it mean to "install Python as a framework", and how do I install it? | I had a similar problem when I used pip to install matplotlib. By default, it installed the latest version, which was 1.5.0. However, I had another virtual environment with Python 3.4 and matplotlib 1.4.3, and that environment worked fine when I imported matplotlib.pyplot. Therefore, I installed the earlier version of matplotlib using the following:
```
cd path_to_virtual_environment # assume directory is called env3
env3/bin/pip install matplotlib==1.4.3
```
I know this is only a work-around, but it worked for me as a short-term fix. |
Unable to "import matplotlib.pyplot as plt" in virtualenv | 29,433,824 | 13 | 2015-04-03T14:11:37Z | 34,392,196 | 15 | 2015-12-21T09:13:15Z | [
"python",
"osx",
"matplotlib",
"flask",
"virtualenv"
] | I am working with flask in a virtual environment. I was able to install matplotlib with pip, and I can `import matplotlib` in a Python session. However, when I import it as
```
matplotlib.pyplot as plt
```
I get the following error:
```
>>> import matplotlib.pyplot as plt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/pyplot.py", line 109, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 24, in <module>
from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends.
```
I am confused about why it asks me to install Python as a framework. Doesn't it already exist? What does it mean to "install Python as a framework", and how do I install it? | I got the same error, and tried `Jonathan`'s answer:
> You can fix this issue by using the backend Agg
>
> Go to `User/yourname/.matplotlib` and open/create `matplotlibrc` and add the following line `backend : Agg` and it should work for you.
I ran the program: no error, but also no plots. Then I tried `backend: Qt4Agg`,
and it printed out that I don't have PyQt4 installed.
Next I tried another backend, `backend: TkAgg`, and it works!
So you can try different backends until one works, or install the required packages (like PyQt4) for the backend you want.
Here is a sample Python snippet you can use to test matplotlib:
```
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
plt.plot([1, 2, 3], [0, 3, 7])
plt.show()
``` |
Unable to "import matplotlib.pyplot as plt" in virtualenv | 29,433,824 | 13 | 2015-04-03T14:11:37Z | 35,107,136 | 25 | 2016-01-30T21:09:05Z | [
"python",
"osx",
"matplotlib",
"flask",
"virtualenv"
] | I am working with flask in a virtual environment. I was able to install matplotlib with pip, and I can `import matplotlib` in a Python session. However, when I import it as
```
matplotlib.pyplot as plt
```
I get the following error:
```
>>> import matplotlib.pyplot as plt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/pyplot.py", line 109, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 24, in <module>
from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends.
```
I am confused about why it asks me to install Python as a framework. Doesn't it already exist? What does it mean to "install Python as a framework", and how do I install it? | This [solution](http://stackoverflow.com/questions/21784641/installation-issue-with-matplotlib-python) worked for me. If you already installed matplotlib using pip in your virtual environment, you can just type the following:
```
$ cd ~/.matplotlib
$ nano matplotlibrc
```
And then, write `backend: TkAgg` in there.
If you need more information, just go to the solution link. |
Error in Tumblelog Application development using Flask and MongoEngine | 29,434,854 | 9 | 2015-04-03T15:12:16Z | 29,477,297 | 18 | 2015-04-06T18:32:52Z | [
"python",
"mongodb",
"python-2.7",
"flask"
] | I am trying to follow the tutorial below, but I am facing a few issues when I run manage.py. Any help would be greatly appreciated.
<http://docs.mongodb.org/ecosystem/tutorial/write-a-tumblelog-application-with-flask-mongoengine/#id1>
manage.py run output:
```
(Tumbler)afiz Tumbler $ python manage.py
Traceback (most recent call last):
File "manage.py", line 6, in <module>
from tumblelog import app
File "/home/afiz/.virtualenvs/tumblelog/__init__.py", line 18, in <module>
register_blueprints(app)
File "/home/afiz/.virtualenvs/tumblelog/__init__.py", line 13, in register_blueprints
from tumblelog.views import posts
File "/home/afiz/.virtualenvs/tumblelog/views.py", line 5, in <module>
from tumblelog.models import Post, Comment
File "/home/afiz/.virtualenvs/tumblelog/models.py", line 6, in <module>
class Post(db.DynamicDocument):
File "/home/afiz/.virtualenvs/Tumbler/local/lib/python2.7/site-packages/mongoengine/base/metaclasses.py", line 361, in __new__
meta['index_specs'] = new_class._build_index_specs(meta['indexes'])
File "/home/afiz/.virtualenvs/Tumbler/local/lib/python2.7/site-packages/mongoengine/base/document.py", line 722, in _build_index_specs
unique_indices = cls._unique_with_indexes()
File "/home/afiz/.virtualenvs/Tumbler/local/lib/python2.7/site-packages/mongoengine/base/document.py", line 861, in _unique_with_indexes
field.document_type != cls):
File "/home/afiz/.virtualenvs/Tumbler/local/lib/python2.7/site-packages/mongoengine/fields.py", line 563, in document_type
self.document_type_obj = get_document(self.document_type_obj)
File "/home/afiz/.virtualenvs/Tumbler/local/lib/python2.7/site-packages/mongoengine/base/common.py", line 25, in get_document
""".strip() % name)
mongoengine.errors.NotRegistered: `Comment` has not been registered in the document registry.
Importing the document class automatically registers it, has it
been imported?
```
manage.py file:
```
#set the path
import os, sys
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from flask.ext.script import Manager, Server
from tumblelog import app
manager = Manager(app)
# Turn on debugger by default and reloader
manager.add_command("runserver", Server(
use_debugger = True,
use_reloader = True,
host = '0.0.0.0')
)
if __name__ == "__main__":
manager.run()
``` | I had the same problem you are facing now. In the models.py file, I just wrote
> class Comment(db.EmbeddedDocument):
and its content first, then added
> class Post(db.Document):
and then its content. In other words, I defined the Comment class before the Post class, and the problem was solved ;) :) Cheers!! |
Bokeh Plotting: Enable tooltips for only some glyphs | 29,435,200 | 8 | 2015-04-03T15:35:04Z | 32,116,970 | 7 | 2015-08-20T11:25:17Z | [
"python",
"plot",
"hover",
"tooltip",
"bokeh"
] | I have a figure with some glyphs, but only want tooltips to display for certain glyphs. Is there currently a way to accomplish this in Bokeh?
Alternatively, is there a way to plot two figures on top of each other? It seems like that would let me accomplish what I want to do. | Thanks to this page in Google Groups I figured out how this can be done.
[Link here](https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&cad=rja&uact=8&ved=0CDIQFjADahUKEwiMj_m6u7fHAhUDNj4KHTfEBww&url=https%3A%2F%2Fgroups.google.com%2Fa%2Fcontinuum.io%2Fd%2Ftopic%2Fbokeh%2FDg4oNVlwuDw&ei=jK3VVYyZA4Ps-AG3iJ9g&usg=AFQjCNHBUrPfM3jZGtSRw-TJ3vHkrxIHxw)
**Edit 2015-10-20**: looks like the google group link doesn't work anymore unfortunately. It was a message from Sarah Bird @bokehplot.
Essentially you need to (bokeh version 0.9.2):
1. not add `hover` in your `tools` when you create the figure
2. create glyphs individually
3. add glyphs to your figure
4. set up the hover tool for this set of glyphs
5. add the hover tool to your figure
Example:
```
import bokeh.models as bkm
import bokeh.plotting as bkp
source = bkm.ColumnDataSource(data=your_frame)
p = bkp.figure(tools='add the tools you want here, but no hover!')
g1 = bkm.Cross(x='col1', y='col2')
g1_r = p.add_glyph(source_or_glyph=source, glyph=g1)
g1_hover = bkm.HoverTool(renderers=[g1_r],
tooltips=[('x', '@col1'), ('y', '@col2')])
p.add_tools(g1_hover)
# now repeat the above for the next sets of glyphs you want to add.
# for those you don't want tooltips to show when hovering over, just don't
# add hover tool for them!
```
Also, if you need to add a legend for each of the glyphs you are adding, try the `bokeh.plotting_helpers._update_legend()` method. [github source](https://github.com/bokeh/bokeh/blob/master/bokeh/plotting_helpers.py) E.g.:
```
_update_legend(plot=p, legend_name='data1', glyph_renderer=g1_r)
``` |
Stratified Train/Test-split in scikit-learn | 29,438,265 | 6 | 2015-04-03T19:11:22Z | 29,477,256 | 20 | 2015-04-06T18:30:36Z | [
"python",
"scikit-learn"
] | I need to split my data into a training set (75%) and test set (25%). I currently do that with the code below:
```
X, Xt, userInfo, userInfo_train = sklearn.cross_validation.train_test_split(X, userInfo)
```
However, I'd like to stratify my training dataset. How do I do that? I've been looking into the `StratifiedKFold` method, but it doesn't let me specify the 75%/25% split and would only stratify the training dataset. | [update for 0.17]
```
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
```
[/update for 0.17]
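To see what `stratify=y` guarantees, here is a hand-rolled NumPy sketch of a stratified 75/25 split on toy 80/20 labels; this is an illustration of the idea, not scikit-learn's actual implementation:

```python
import numpy as np

# Toy labels: 80 zeros, 20 ones (an 80/20 class imbalance).
rng = np.random.default_rng(0)
y = np.array([0] * 80 + [1] * 20)

train_idx, test_idx = [], []
for cls in np.unique(y):
    idx = rng.permutation(np.flatnonzero(y == cls))
    n_train = int(round(0.75 * idx.size))   # 75% of *each class* to train
    train_idx.extend(idx[:n_train])
    test_idx.extend(idx[n_train:])

# Both splits keep the original 20% positive rate.
print(y[train_idx].mean(), y[test_idx].mean())
```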
There is a pull request [here](https://github.com/scikit-learn/scikit-learn/pull/4438).
But you can simply do `train, test = next(iter(StratifiedKFold(...)))`
and use the train and test indices if you want. |
How to create a pivot table on extremely large dataframes in Pandas | 29,439,589 | 7 | 2015-04-03T20:58:52Z | 29,439,946 | 8 | 2015-04-03T21:27:53Z | [
"python",
"python-3.x",
"pandas",
"pivot-table"
] | I need to create a pivot table of 2000 columns by around 30-50 million rows from a dataset of around 60 million rows. I've tried pivoting in chunks of 100,000 rows, and that works, but when I try to recombine the DataFrames by doing a .append() followed by .groupby('someKey').sum(), all my memory is taken up and python eventually crashes.
How can I do a pivot on data this large with a limited amount of RAM?
EDIT: adding sample code
The following code includes various test outputs along the way, but the last print is what we're really interested in. Note that if we change segMax to 3, instead of 4, the code will produce a false positive for correct output. The main issue is that if a shipmentid entry is not in each and every chunk that sum(wawa) looks at, it doesn't show up in the output.
```
import pandas as pd
import numpy as np
import random
from pandas.io.pytables import *
import os
pd.set_option('io.hdf.default_format','table')
# create a small dataframe to simulate the real data.
def loadFrame():
frame = pd.DataFrame()
frame['shipmentid']=[1,2,3,1,2,3,1,2,3] #evenly distributing shipmentid values for testing purposes
frame['qty']= np.random.randint(1,5,9) #random quantity is ok for this test
frame['catid'] = np.random.randint(1,5,9) #random category is ok for this test
return frame
def pivotSegment(segmentNumber,passedFrame):
segmentSize = 3 #take 3 rows at a time
frame = passedFrame[(segmentNumber*segmentSize):(segmentNumber*segmentSize + segmentSize)] #slice the input DF
# ensure that all chunks are identically formatted after the pivot by appending a dummy DF with all possible category values
span = pd.DataFrame()
span['catid'] = range(1,5+1)
span['shipmentid']=1
span['qty']=0
frame = frame.append(span)
return frame.pivot_table(['qty'],index=['shipmentid'],columns='catid', \
aggfunc='sum',fill_value=0).reset_index()
def createStore():
store = pd.HDFStore('testdata.h5')
return store
segMin = 0
segMax = 4
store = createStore()
frame = loadFrame()
print('Printing Frame')
print(frame)
print(frame.info())
for i in range(segMin,segMax):
segment = pivotSegment(i,frame)
store.append('data',frame[(i*3):(i*3 + 3)])
store.append('pivotedData',segment)
print('\nPrinting Store')
print(store)
print('\nPrinting Store: data')
print(store['data'])
print('\nPrinting Store: pivotedData')
print(store['pivotedData'])
print('**************')
print(store['pivotedData'].set_index('shipmentid').groupby('shipmentid',level=0).sum())
print('**************')
print('$$$')
for df in store.select('pivotedData',chunksize=3):
print(df.set_index('shipmentid').groupby('shipmentid',level=0).sum())
print('$$$')
store['pivotedAndSummed'] = sum((df.set_index('shipmentid').groupby('shipmentid',level=0).sum() for df in store.select('pivotedData',chunksize=3)))
print('\nPrinting Store: pivotedAndSummed')
print(store['pivotedAndSummed'])
store.close()
os.remove('testdata.h5')
print('closed')
``` | You could do the appending with HDF5/pytables. This keeps it out of RAM.
Use the [table format](http://pandas.pydata.org/pandas-docs/dev/io.html#table-format):
```
store = pd.HDFStore('store.h5')
for ...:
...
chunk # the chunk of the DataFrame (which you want to append)
store.append('df', chunk)
```
Now you can read it in as a DataFrame in one go (assuming this DataFrame can fit in memory!):
```
df = store['df']
```
You can also query, to get only subsections of the DataFrame.
Aside: You should also buy more RAM, it's cheap.
---
Edit: you can groupby/sum from the store [iteratively](http://pandas.pydata.org/pandas-docs/stable/io.html#iterator) since this "map-reduces" over the chunks:
```
# note: this doesn't work, see below
sum(df.groupby().sum() for df in store.select('df', chunksize=50000))
# equivalent to (but doesn't read in the entire frame)
store['df'].groupby().sum()
```
Edit2: Using sum as above doesn't actually work in pandas 0.16 (I thought it did in 0.15.2), instead you can use [`reduce`](https://docs.python.org/2/library/functions.html#reduce) with [`add`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add.html):
```
reduce(lambda x, y: x.add(y, fill_value=0),
(df.groupby().sum() for df in store.select('df', chunksize=50000)))
```
*In Python 3 you must [import reduce from functools](https://docs.python.org/3.0/library/functools.html#functools.reduce).*
Perhaps it's more pythonic/readable to write this as:
```
chunks = (df.groupby().sum() for df in store.select('df', chunksize=50000))
res = next(chunks) # will raise if there are no chunks!
for c in chunks:
res = res.add(c, fill_value=0)
```
*If performance is poor / if there are a large number of new groups then it may be preferable to start the res as zero of the correct size (by getting the unique group keys e.g. by looping through the chunks), and then add in place.* |
How to install lxml on Windows | 29,440,482 | 5 | 2015-04-03T22:24:35Z | 29,441,115 | 8 | 2015-04-03T23:35:23Z | [
"python",
"windows",
"python-3.x",
"pip",
"lxml"
] | I'm trying to install `lxml` on my Windows 8.1 laptop with Python 3.4 and failing miserably.
First off, I tried the simple and obvious solution: `pip install lxml`. However, this didn't work. Here's what it said:
```
Downloading/unpacking lxml
Running setup.py (path:C:\Users\CARTE_~1\AppData\Local\Temp\pip_build_carte_000\lxml\setup.py) egg_info for package lxml
Building lxml version 3.4.2.
Building without Cython.
ERROR: b"'xslt-config' is not recognized as an internal or external command,\r\noperable program or batch file.\r\n"
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
C:\Python34\lib\distutils\dist.py:260: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
warning: no previously-included files found matching '*.py'
Installing collected packages: lxml
Running setup.py install for lxml
Building lxml version 3.4.2.
Building without Cython.
ERROR: b"'xslt-config' is not recognized as an internal or external command,\r\noperable program or batch file.\r\n"
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
building 'lxml.etree' extension
C:\Python34\lib\distutils\dist.py:260: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
Complete output from command C:\Python34\python.exe -c "import setuptools, tokenize;__file__='C:\\Users\\CARTE_~1\\AppData\\Local\\Temp\\pip_build_carte_000\\lxml\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\CARTE_~1\AppData\Local\Temp\pip-l8vvrv9g-record\install-record.txt --single-version-externally-managed --compile:
Building lxml version 3.4.2.
Building without Cython.
ERROR: b"'xslt-config' is not recognized as an internal or external command,\r\noperable program or batch file.\r\n"
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running install
running build
running build_py
creating build
creating build\lib.win32-3.4
creating build\lib.win32-3.4\lxml
copying src\lxml\builder.py -> build\lib.win32-3.4\lxml
copying src\lxml\cssselect.py -> build\lib.win32-3.4\lxml
copying src\lxml\doctestcompare.py -> build\lib.win32-3.4\lxml
copying src\lxml\ElementInclude.py -> build\lib.win32-3.4\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win32-3.4\lxml
copying src\lxml\sax.py -> build\lib.win32-3.4\lxml
copying src\lxml\usedoctest.py -> build\lib.win32-3.4\lxml
copying src\lxml\_elementpath.py -> build\lib.win32-3.4\lxml
copying src\lxml\__init__.py -> build\lib.win32-3.4\lxml
creating build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\__init__.py -> build\lib.win32-3.4\lxml\includes
creating build\lib.win32-3.4\lxml\html
copying src\lxml\html\builder.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\clean.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\defs.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\diff.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win32-3.4\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win32-3.4\lxml\html
creating build\lib.win32-3.4\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win32-3.4\lxml\isoschematron
copying src\lxml\lxml.etree.h -> build\lib.win32-3.4\lxml
copying src\lxml\lxml.etree_api.h -> build\lib.win32-3.4\lxml
copying src\lxml\includes\c14n.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\config.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\dtdvalid.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\etreepublic.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\htmlparser.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\relaxng.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\schematron.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\tree.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\uri.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\xinclude.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\xmlerror.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\xmlparser.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\xmlschema.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\xpath.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\xslt.pxd -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\etree_defs.h -> build\lib.win32-3.4\lxml\includes
copying src\lxml\includes\lxml-version.h -> build\lib.win32-3.4\lxml\includes
creating build\lib.win32-3.4\lxml\isoschematron\resources
creating build\lib.win32-3.4\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win32-3.4\lxml\isoschematron\resources\rng
creating build\lib.win32-3.4\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl
creating build\lib.win32-3.4\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win32-3.4\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
C:\Python34\lib\distutils\dist.py:260: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
error: Unable to find vcvarsall.bat
----------------------------------------
Cleaning up...
Command C:\Python34\python.exe -c "import setuptools, tokenize;__file__='C:\\Users\\CARTE_~1\\AppData\\Local\\Temp\\pip_build_carte_000\\lxml\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\CARTE_~1\AppData\Local\Temp\pip-l8vvrv9g-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in C:\Users\CARTE_~1\AppData\Local\Temp\pip_build_carte_000\lxml
Storing debug log for failure in C:\Users\carte_000\pip\pip.log
```
So then I looked on this great and helpful thing called *The Internet*, and a lot of people hit the same error about needing `libxml2` and `libxslt`. They recommend a page by Christoph Gohlke where he provides prebuilt binary installers for a bunch of packages. You can find it [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/) ([quicklink to the lxml part](http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml)).
So after I gave up on trying to find libxml2 and libxslt for pip, I decided to go there, and found *an absolute ton* of downloads. I know I need a 64-bit one, but I have no idea which "`cp`" I need.
So an answer either giving me a solution on the `pip` method or the Gohlke index method would be great. | First, following the comments, I downloaded the `lxml-3.4.2-cp34-none-win_amd64.whl` file and tried to install it with `pip install`, but it just told me it wasn't a valid wheel file on my system or something.
Then, I downloaded the `win32` file and it worked! (Note: AMD64 is just another name for the x86-64 architecture and runs fine on Intel processors; the 64-bit wheel most likely failed because the installed Python was 32-bit, not because of the CPU brand.)
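As for picking the right wheel: `cp34` means CPython 3.4, and the `win32`/`win_amd64` part must match the bitness of the Python *interpreter*, not the CPU brand. A quick way to check which one you need — this is a generic sketch, not part of the original answer:

```python
import struct

# pointer size in bits: 64 -> use a win_amd64 wheel, 32 -> use a win32 wheel
bits = struct.calcsize("P") * 8
print(bits)
```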
What is the theoretical foundation for scikit-learn dummy classifier? | 29,441,943 | 6 | 2015-04-04T02:00:27Z | 29,442,397 | 9 | 2015-04-04T03:25:41Z | [
"python",
"machine-learning",
"artificial-intelligence",
"scikit-learn",
"svm"
] | From the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html) I read that a dummy classifier can be used as a baseline to test a classification algorithm against.
> This classifier is useful as a simple baseline to compare with other
> (real) classifiers. Do not use it for real problems.
What does the dummy classifier do when it uses the stratified approach? I know that the documentation says:
> generates predictions by respecting the training set's class
> distribution.
Could anybody give me a more theoretical explanation of why this serves as a benchmark for the classifier's performance? | The dummy classifier gives you a measure of "baseline" performance -- i.e. the success rate one should expect to achieve even by simply guessing.
Suppose you wish to determine whether a given object possesses or does not possess a certain property. If you have analyzed a large number of those objects and have found that 90% contain the target property, then guessing that every future instance of the object possesses the target property gives you a 90% likelihood of guessing correctly. Structuring your guesses this way is equivalent to using the `most_frequent` method in the documentation you cite.
Because many machine learning tasks attempt to increase the success rate of (e.g.) classification tasks, evaluating the baseline success rate establishes a floor that one's classifier should outperform. In the hypothetical discussed above, you would want your classifier to get more than 90% accuracy, because 90% is the success rate available even to "dummy" classifiers.
If one trains a dummy classifier with the `stratified` parameter using the data discussed above, that classifier will predict that there is a 90% probability that each object it encounters possesses the target property. This is different from training a dummy classifier with the `most_frequent` parameter, as the latter would guess that *all* future objects possess the target property. Here's some code to illustrate:
```
from sklearn.dummy import DummyClassifier
import numpy as np
two_dimensional_values = []
class_labels = []
for i in xrange(90):
two_dimensional_values.append( [1,1] )
class_labels.append(1)
for i in xrange(10):
two_dimensional_values.append( [0,0] )
class_labels.append(0)
#now 90% of the training data contains the target property
X = np.array( two_dimensional_values )
y = np.array( class_labels )
#train a dummy classifier to make predictions based on the most_frequent class value
dummy_classifier = DummyClassifier(strategy="most_frequent")
dummy_classifier.fit( X,y )
#this produces 100 predictions that say "1"
for i in two_dimensional_values:
print( dummy_classifier.predict( [i]) )
#train a dummy classifier to make predictions based on the class values
new_dummy_classifier = DummyClassifier(strategy="stratified")
new_dummy_classifier.fit( X,y )
#this produces roughly 90 guesses that say "1" and roughly 10 guesses that say "0"
for i in two_dimensional_values:
print( new_dummy_classifier.predict( [i]) )
``` |
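For intuition, the expected accuracies of both strategies can be computed directly from the class distribution, without scikit-learn — a sketch assuming the same 90/10 split as above:

```python
from collections import Counter

labels = [1] * 90 + [0] * 10
n = float(len(labels))
counts = Counter(labels)

# most_frequent: always guess the majority class, so expected
# accuracy equals the majority-class frequency
most_frequent_acc = max(counts.values()) / n

# stratified: guess class c with probability p_c, so expected
# accuracy is the sum of squared class frequencies
stratified_acc = sum((c / n) ** 2 for c in counts.values())

print(most_frequent_acc, stratified_acc)  # 0.9 and 0.82
```

This is why `stratified` gives a (slightly lower) baseline of 0.9² + 0.1² = 0.82 on this data, while `most_frequent` gives 0.9.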
Autocommit Migration from Django 1.7 to 1.8 | 29,443,216 | 9 | 2015-04-04T05:46:19Z | 29,444,039 | 11 | 2015-04-04T07:53:19Z | [
"python",
"django"
] | I was migrating from Django 1.7 to 1.8 via the following steps:
1. Active virtualenv
2. Uninstall Django 1.7
3. Install Django 1.8
4. python manage.py runserver
On execution of step 4 I got the following error.
```
Unhandled exception in thread started by <function wrapper at 0x7f4e473a8230>
Traceback (most recent call last):
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 223, in wrapper
fn(*args, **kwargs)
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 112, in inner_run
self.check_migrations()
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 164, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 19, in __init__
self.loader = MigrationLoader(self.connection)
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 47, in __init__
self.build_graph()
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 180, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 59, in applied_migrations
self.ensure_schema()
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 49, in ensure_schema
if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 162, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 135, in _cursor
self.ensure_connection()
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
self.connect()
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
self.connect()
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 119, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 172, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/lenovo/Envs/boilerplate/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: invalid connection option "autocommit"
```
The error seems to come from the psycopg2 module and to be related to autocommit. In the **features removed** section of the 1.8 documentation I found the following line.
```
the decorators and context managers autocommit, commit_on_success, and commit_manually, defined in django.db.transaction
```
I couldn't relate this to the error I got. Can somebody shed light on this?
## Update:
I found out why. Below is my DB connection config, which contains `autocommit=True`. After commenting out that line, the issue was resolved. But I still want to know why the `autocommit: True` option is no longer accepted.
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'HOST': 'localhost',
'PORT': '5432',
'NAME': 'bp_django_auth',
'USER': 'postgres',
'PASSWORD': 'abcd1234',
'OPTIONS': {
"autocommit": True,
},
}
}
``` | The following was outlined in the [Django 1.7 Databases docs](https://docs.djangoproject.com/en/1.7/ref/databases/#autocommit-mode):
> In previous versions of Django, database-level autocommit could be enabled by setting the autocommit key in the OPTIONS part of your database configuration in DATABASES.
>
> Since Django 1.6, **autocommit is turned on by default**. This configuration is ignored and can be safely removed.
And as per the [1.8 Release Notes](https://docs.djangoproject.com/en/1.8/releases/1.8/#features-removed-in-1-8), this feature was removed.
If you still want to keep the setting for some reason, simply move it out of `OPTIONS`:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'HOST': 'localhost',
'PORT': '5432',
'NAME': 'bp_django_auth',
'USER': 'postgres',
'PASSWORD': 'abcd1234',
'AUTOCOMMIT': True,
}
}
``` |
How to create equal spaced values in between unequal spaced values in Python? | 29,444,666 | 4 | 2015-04-04T09:14:08Z | 29,444,822 | 7 | 2015-04-04T09:33:00Z | [
"python",
"arrays",
"numpy",
"scipy"
] | I have an array A (*variable*) of the form:
```
A = [1, 3, 7, 9, 15, 20, 24]
```
Now I want to create 10 (*variable*) equally spaced values in between values of array A so that I get array B of the form:
```
B = [1, 1.2, 1.4, ... 2.8, 3, 3.4, 3.8, ... , 6.6, 7, 7.2, ..., 23.6, 24]
```
In essence, B should always contain the values of A plus equally spaced values in between consecutive values of A.
I did solve this by using the code:
```
import numpy as np
A = np.array([1, 3, 7, 9, 15, 20, 24])
B = []
for i in range(len(A) - 1):
B = np.append(B, np.linspace(A[i], A[i + 1], 11))
print (B)
```
But does NumPy already have a function for this, or are there better methods to create such an array? | An alternative method using [interpolation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html) instead of concatenation:
```
n = 10
x = np.arange(0, n * len(A), n) # 0, 10, .., 50, 60
xx = np.arange((len(A) - 1) * n + 1) # 0, 1, .., 59, 60
B = np.interp(xx, x, A)
```
Result:
```
In [31]: B
Out[31]:
array([ 1. , 1.2, 1.4, 1.6, 1.8, 2. , 2.2, 2.4, 2.6,
2.8, 3. , 3.4, 3.8, 4.2, 4.6, 5. , 5.4, 5.8,
6.2, 6.6, 7. , 7.2, 7.4, 7.6, 7.8, 8. , 8.2,
8.4, 8.6, 8.8, 9. , 9.6, 10.2, 10.8, 11.4, 12. ,
12.6, 13.2, 13.8, 14.4, 15. , 15.5, 16. , 16.5, 17. ,
17.5, 18. , 18.5, 19. , 19.5, 20. , 20.4, 20.8, 21.2,
21.6, 22. , 22.4, 22.8, 23.2, 23.6, 24. ])
```
This should be faster than the other solutions, since it does not use a Python for-loop and avoids the many calls to `linspace`. Quick timing comparison:
```
In [58]: timeit np.interp(np.arange((len(A) - 1) * 10 + 1), np.arange(0, 10*len(A), 10), A)
100000 loops, best of 3: 10.3 µs per loop
In [59]: timeit np.append(np.concatenate([np.linspace(i, j, 10, False) for i, j in zip(A, A[1:])]), A[-1])
10000 loops, best of 3: 94.2 µs per loop
In [60]: timeit np.unique(np.hstack(np.linspace(a, b, 10 + 1) for a, b in zip(A[:-1], A[1:])))
10000 loops, best of 3: 140 µs per loop
``` |
Easier way to write conditional statement | 29,449,985 | 3 | 2015-04-04T18:30:21Z | 29,450,002 | 11 | 2015-04-04T18:31:50Z | [
"python",
"if-statement",
"condition"
] | Is there any prettier way to write this if-statement:
```
if not (self.is_legal(l) or self.is_legal(u) or self.is_legal(r) or self.is_legal(d)):
```
I've tried this, but it didn't work.
```
if not self.is_legal(l or r or d or u):
```
Or maybe the first one is the prettiest? | You can use [`any`](https://docs.python.org/3/library/functions.html#any) and a [generator expression](https://docs.python.org/3/reference/expressions.html#grammar-token-generator_expression):
```
if not any(self.is_legal(x) for x in (l, u, r, d)):
```
Or, if you prefer [`all`](https://docs.python.org/3/library/functions.html#all) instead of `any`:
```
if all(not self.is_legal(x) for x in (l, u, r, d)):
```
The first solution seems to read a little better though.
---
As for why your attempted solution did not work, the `or` operator in Python does not behave as you think it does. From the [docs](https://docs.python.org/3/reference/expressions.html#grammar-token-or_test):
> The expression `x or y` first evaluates `x`; if `x` is true, its value
> is returned; otherwise, `y` is evaluated and the resulting value is
> returned.
So, `self.is_legal(l or r or d or u)` was only passing the first truthy value to the `self.is_legal` method, not all of them. |
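You can see the short-circuit behaviour directly — a minimal sketch with made-up numeric values for `l`, `r`, `d`, `u`:

```python
l, r, d, u = 0, 3, 0, 7

# `or` returns the first truthy operand, not a boolean combination,
# so only a single value would ever reach self.is_legal()
result = l or r or d or u
print(result)  # 3
```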
Python: Why Does str.split() Return a list While str.partition() Returns a tuple? | 29,451,794 | 5 | 2015-04-04T21:53:23Z | 29,451,831 | 8 | 2015-04-04T21:56:47Z | [
"python"
] | Comparing Python's `str.split()` with `str.partition()`, I see that they not only have different functions (`split()` tokenizes the whole string at each occurrence of the delimiter, while `partition()` returns everything before the first occurrence of the delimiter, the delimiter itself, and everything after it), but that they also have different return types. That is, `str.split()` returns a `list` while `str.partition()` returns a `tuple`. This is significant since a `list` is mutable while a `tuple` is not. Is there any deliberate reason behind this choice in the API design, or is it "just the way things are"? I am curious. | The key difference between those methods is that `split()` returns a variable number of results, and `partition()` returns a fixed number. Tuples are usually not used for APIs which return a variable number of items.
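To illustrate (a quick sketch): `partition()` always yields exactly three items — head, separator, tail — while the length of `split()`'s result depends on the input:

```python
s = "a,b,c,d"

print(s.split(","))      # ['a', 'b', 'c', 'd'] -- length depends on the string
print(s.partition(","))  # ('a', ',', 'b,c,d') -- always exactly 3 items

# even with no separator present, partition() keeps its fixed shape
print("abc".partition(","))  # ('abc', '', '')
```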
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty | 29,455,057 | 3 | 2015-04-05T07:19:26Z | 29,455,099 | 11 | 2015-04-05T07:28:22Z | [
"python",
"django"
] | I created a new project in django and pasted some files from another project. Whenever I try to run the server, I get the following error message:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
.
.
.
File "/Library/Python/2.7/site-packages/django/conf/__init__.py", line 115, in __init__
raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
```
Here's my settings.py
```
""" Django settings for dbe project. """
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
from os.path import join as pjoin
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'shared',
'issues',
'blog',
'bombquiz',
'forum',
'portfolio',
'questionnaire',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
TEMPLATE_DIRS = (
os.path.join(BASE_DIR, "templates"),
os.path.join(BASE_DIR, "templates", "issues"),
os.path.join(BASE_DIR, "templates", "blog"),
os.path.join(BASE_DIR, "templates", "bombquiz"),
os.path.join(BASE_DIR, "templates", "forum"),
os.path.join(BASE_DIR, "templates", "portfolio"),
os.path.join(BASE_DIR, "templates", "questionnaire"),
)
ROOT_URLCONF = 'dbe.urls'
WSGI_APPLICATION = 'dbe.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
MEDIA_URL = '/media/'
MEDIA_ROOT = pjoin(BASE_DIR, "media")
STATIC_URL = '/static/'
STATICFILES_DIRS = (
pjoin(BASE_DIR, "static"),
)
try:
from local_settings import *
except:
pass
```
Here's manage.py as well
```
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dbe.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
```
Any help? Thanks! | Just like the error says, you have no `SECRET_KEY` defined. You need to add one to your **settings.py**.
> Django will refuse to start if `SECRET_KEY` is not set.
You can read more about this setting [in the docs](https://docs.djangoproject.com/en/1.7/ref/settings/#std:setting-SECRET_KEY).
The `SECRET_KEY` can be just about anything... but if you want to use Django to generate one, you can run the following from the Python shell:
```
>>> from django.utils.crypto import get_random_string
>>> chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
>>> SECRET_KEY = get_random_string(50, chars)
>>> print SECRET_KEY
```
Copy the `SECRET_KEY` to your settings file. |
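For production you would normally keep the key out of version control as well — a common pattern (my sketch, not part of the original answer; the variable name `DJANGO_SECRET_KEY` is just a convention) is to read it from an environment variable in **settings.py**:

```python
import os

# fall back to a throwaway value only for local development;
# in production, require the real key to be set in the environment
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "dev-only-insecure-key")
```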
unicode string equivalent of contain | 29,456,800 | 6 | 2015-04-05T11:31:07Z | 29,456,847 | 7 | 2015-04-05T11:36:58Z | [
"python",
"string",
"unicode"
] | I have an error when trying to use contain in python.
```
s = u"some utf8 words"
k = u"one utf8 word"
if s.contains(k):
print "contains"
```
How do i achieve the same result?
Example with normal ASCII string
```
s = "haha i am going home"
k = "haha"
if s.contains(k):
print "contains"
```
I am using Python 2.7.x | The same syntax works for both ascii and utf8 strings:
```
if k in s:
print "contains"
```
There is no `contains()` method on either ascii or utf8 strings:
```
>>> "strrtinggg".contains
AttributeError: 'str' object has no attribute 'contains'
```
---
What you can use instead of `contains` is `find` or `index`:
```
if s.find(k) > -1:
print "contains"
```
or
```
try:
    s.index(k)
except ValueError:
pass # ValueError: substring not found
else:
print "contains"
```
But of course, the `in` operator is the way to go, it's much more elegant. |
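A quick sanity check (sketch) that `in` behaves identically for unicode strings:

```python
# -*- coding: utf-8 -*-
s = u"\u00e9t\u00e9 chaud"   # the unicode string "ete chaud" with accents
k = u"\u00e9t\u00e9"         # the unicode substring "ete" with accents

contains = k in s
print(contains)  # True
```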
Checking if string is only letters and spaces - Python | 29,460,405 | 2 | 2015-04-05T17:55:27Z | 29,460,426 | 8 | 2015-04-05T17:57:51Z | [
"python",
"contain"
] | Trying to get python to return that a string contains ONLY alphabetic letters AND spaces
```
string = input("Enter a string: ")
if all(x.isalpha() and x.isspace() for x in string):
print("Only alphabetical letters and spaces: yes")
else:
print("Only alphabetical letters and spaces: no")
```
I've been trying `please` and it comes up as `Only alphabetical letters and spaces: no`. I've used `or` instead of `and`, but that only satisfies one condition. I need both conditions satisfied: the string must contain *only* letters and spaces, with at least one of each kind, and must *not* contain any numerals.
What am I missing here for Python to confirm that the string contains only letters and spaces? | A character cannot be both an alpha **and** a space. It can be an alpha **or** a space.
To require that the string contains only alphas and spaces:
```
string = input("Enter a string: ")
if all(x.isalpha() or x.isspace() for x in string):
print("Only alphabetical letters and spaces: yes")
else:
print("Only alphabetical letters and spaces: no")
```
To require that the string contains at least one alpha and at least one space:
```
if any(x.isalpha() for x in string) and any(x.isspace() for x in string):
```
To require that the string contains at least one alpha, at least one space, and only alphas and spaces:
```
if (any(x.isalpha() for x in string)
and any(x.isspace() for x in string)
and all(x.isalpha() or x.isspace() for x in string)):
```
Test:
```
>>> string = "PLEASE"
>>> if (any(x.isalpha() for x in string)
... and any(x.isspace() for x in string)
... and all(x.isalpha() or x.isspace() for x in string)):
... print "match"
... else:
... print "no match"
...
no match
>>> string = "PLEASE "
>>> if (any(x.isalpha() for x in string)
... and any(x.isspace() for x in string)
... and all(x.isalpha() or x.isspace() for x in string)):
... print "match"
... else:
... print "no match"
...
match
``` |
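The three conditions above can be wrapped into one reusable helper — a sketch (the function name is mine, not from the original answer):

```python
def letters_and_spaces_only(s):
    # at least one letter, at least one space, and nothing else
    return (any(c.isalpha() for c in s)
            and any(c.isspace() for c in s)
            and all(c.isalpha() or c.isspace() for c in s))

print(letters_and_spaces_only("PLEASE"))    # False - no space
print(letters_and_spaces_only("PLEASE "))   # True
print(letters_and_spaces_only("PLEASE 1"))  # False - contains a digit
```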
Compare Python Pandas DataFrames for matching rows | 29,464,234 | 8 | 2015-04-06T01:30:50Z | 29,464,365 | 13 | 2015-04-06T01:54:35Z | [
"python",
"pandas",
"rows",
"matching"
] | I have this DataFrame (`df1`) in Pandas:
```
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
print df1
A B C D
0.860379 0.726956 0.394529 0.833217
0.014180 0.813828 0.559891 0.339647
0.782838 0.698993 0.551252 0.361034
0.833370 0.982056 0.741821 0.006864
0.855955 0.546562 0.270425 0.136006
0.491538 0.445024 0.971603 0.690001
0.911696 0.065338 0.796946 0.853456
0.744923 0.545661 0.492739 0.337628
0.576235 0.219831 0.946772 0.752403
0.164873 0.454862 0.745890 0.437729
```
I would like to check if any row (all columns) from another dataframe (`df2`) is present in `df1`. Here is `df2`:
```
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
print df2
A B C D
0.855955 0.546562 0.270425 0.136006
0.491538 0.445024 0.971603 0.690001
0.911696 0.065338 0.796946 0.853456
0.744923 0.545661 0.492739 0.337628
0.576235 0.219831 0.946772 0.752403
2.000000 3.000000 4.000000 5.000000
14.000000 15.000000 16.000000 17.000000
```
I tried using `df.lookup` to search for one row at a time. I did it this way:
```
list1 = df2.ix[0].tolist()
cols = df1.columns.tolist()
print df1.lookup(list1, cols)
```
but I got this error message:
```
File "C:\Users\test.py", line 19, in <module>
print df1.lookup(list1, cols)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 2217, in lookup
raise KeyError('One or more row labels was not found')
KeyError: 'One or more row labels was not found'
```
I also tried `.all()` using:
```
print (df2 == df1).all(1).any()
```
but I got this error message:
```
File "C:\Users\test.py", line 12, in <module>
print (df2 == df1).all(1).any()
File "C:\python27\lib\site-packages\pandas\core\ops.py", line 884, in f
return self._compare_frame(other, func, str_rep)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 3010, in _compare_frame
raise ValueError('Can only compare identically-labeled '
ValueError: Can only compare identically-labeled DataFrame objects
```
I also tried `isin()` like this:
```
print df2.isin(df1)
```
but I got `False` everywhere, which is not correct:
```
A B C D
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
```
Is it possible to search for a set of rows in a DataFrame, by comparing it to another dataframe's rows?
EDIT:
Is it possible to drop `df2` rows if those rows are also present in `df1`? | One possible solution to your problem would be to use [merge](http://pandas.pydata.org/pandas-docs/version/0.15.2/merging.html). Checking whether any row (all columns) from another dataframe (df2) is present in df1 is equivalent to determining the intersection of the two dataframes. This can be accomplished using the following function:
```
pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner')
```
For example, if df1 was
```
A B C D
0 0.403846 0.312230 0.209882 0.397923
1 0.934957 0.731730 0.484712 0.734747
2 0.588245 0.961589 0.910292 0.382072
3 0.534226 0.276908 0.323282 0.629398
4 0.259533 0.277465 0.043652 0.925743
5 0.667415 0.051182 0.928655 0.737673
6 0.217923 0.665446 0.224268 0.772592
7 0.023578 0.561884 0.615515 0.362084
8 0.346373 0.375366 0.083003 0.663622
9 0.352584 0.103263 0.661686 0.246862
```
and df2 was defined as:
```
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
5 2.000000 3.000000 4.000000 5.000000
6 14.000000 15.000000 16.000000 17.000000
```
The function `pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner')` produces:
```
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
```
The results are all of the rows (all columns) that are both in df1 and df2.
We can also modify this example if the columns are not the same in df1 and df2 and just compare the row values that are the same for a subset of the columns. If we modify the original example:
```
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
df2 = df2[['A', 'B', 'C']] # df2 has only columns A B C
```
Then we can look at the common columns using `common_cols = list(set(df1.columns) & set(df2.columns))` between the two dataframes then merge:
```
pd.merge(df1, df2, on=common_cols, how='inner')
```
**EDIT:** New question (from the comments): having identified the rows from df2 that are also present in the first dataframe (df1), is it possible to take the result of `pd.merge()` and then drop those rows from df2?
I do not know of a straightforward way to accomplish the task of dropping the rows from df2 that are also present in df1. That said, you could use the following:
```
ds1 = set(tuple(line) for line in df1.values)
ds2 = set(tuple(line) for line in df2.values)
df = pd.DataFrame(list(ds2.difference(ds1)), columns=df2.columns)
```
There probably exists a better way to accomplish that task, but I am unaware of such a method/function.
**EDIT 2:** How to drop the rows from df2 that are also present in df1, as shown in @WR's answer.
The method provided, `df2[~df2['A'].isin(df12['A'])]`, does not account for all types of situations. Consider the following DataFrames:
df1:
```
A B C D
0 6 4 1 6
1 7 6 6 8
2 1 6 2 7
3 8 0 4 1
4 1 0 2 3
5 8 4 7 5
6 4 7 1 1
7 3 7 3 4
8 5 2 8 8
9 3 2 8 4
```
df2:
```
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
5 1 1 1 1
6 2 2 2 2
```
df12:
```
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
```
Using the above DataFrames with the goal of dropping rows from df2 that are also present in df1 would result in the following:
```
A B C D
0 1 1 1 1
1 2 2 2 2
```
Rows (1, 1, 1, 1) and (2, 2, 2, 2) are in df2 and not in df1. Unfortunately, using the provided method (`df2[~df2['A'].isin(df12['A'])]`) results in:
```
A B C D
6 2 2 2 2
```
This occurs because the value of 1 in column A is found in both the intersection DataFrame (i.e. (1, 0, 2, 3)) and df2, so the filter removes both (1, 0, 2, 3) and (1, 1, 1, 1). This is unintended, since the row (1, 1, 1, 1) is not in df1 and should not be removed.
I think the following will provide a solution. It creates a dummy column that is later used to subset the DataFrame to the desired results:
```
df12['key'] = 'x'
temp_df = pd.merge(df2, df12, on=df2.columns.tolist(), how='left')
temp_df[temp_df['key'].isnull()].drop('key', axis=1)
``` |
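The dummy-key merge above is effectively an anti-join on every column. The same idea can be sketched in plain Python by testing full-row membership; the tuples below mirror the example DataFrames above (no pandas required for the sketch):

```python
# Rows from the example df1 and df2 above, as plain tuples.
df1_rows = [(6, 4, 1, 6), (7, 6, 6, 8), (1, 6, 2, 7), (8, 0, 4, 1),
            (1, 0, 2, 3), (8, 4, 7, 5), (4, 7, 1, 1), (3, 7, 3, 4),
            (5, 2, 8, 8), (3, 2, 8, 4)]
df2_rows = [(1, 0, 2, 3), (8, 4, 7, 5), (4, 7, 1, 1), (3, 7, 3, 4),
            (5, 2, 8, 8), (1, 1, 1, 1), (2, 2, 2, 2)]

# Keep only the rows of df2 whose *entire* row is absent from df1.
# Comparing full rows -- not just column A -- correctly keeps (1, 1, 1, 1),
# and the list comprehension preserves df2's row order.
df1_set = set(df1_rows)
result = [row for row in df2_rows if row not in df1_set]
print(result)  # [(1, 1, 1, 1), (2, 2, 2, 2)]
```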
Memory error while using pip install Matplotlib | 29,466,663 | 16 | 2015-04-06T06:47:11Z | 29,467,260 | 10 | 2015-04-06T07:36:33Z | [
"python",
"matplotlib"
] | I am using Python 2.7. If I try to install Matplotlib using "pip install matplotlib", I get this error:
```
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 232, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 339, in run
requirement_set.prepare_files(finder)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 355, in prepare_files
do_download, session=self.session,
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 782, in unpack_url
session,
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 667, in unpack_http_url
from_path, content_type = _download_http_url(link, session, temp_dir)
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 843, in _download_http_url
_download_url(resp, link, content_file)
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 615, in _download_url
for chunk in progress_indicator(resp_read(4096), 4096):
File "/usr/local/lib/python2.7/dist-packages/pip/utils/ui.py", line 46, in iter
for x in it:
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 580, in resp_read
decode_content=False):
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 256, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 186, in read
data = self._fp.read(amt)
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 54, in read
self.__callback(self.__buf.getvalue())
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/controller.py", line 205, in cache_response
self.serializer.dumps(request, response, body=body),
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/serialize.py", line 81, in dumps
).encode("utf8"),
MemoryError
```
What might the problem be?
I am using a Raspberry Pi 2 with a 16 GB SD card. I still have 8 GB free but am still getting this error.
Kindly help | It seems that you have insufficient RAM to build matplotlib from scratch. To overcome that, either turn on swap:
```
# create swap file of 512 MB
dd if=/dev/zero of=/swapfile bs=1024 count=524288
# modify permissions
chown root:root /swapfile
chmod 0600 /swapfile
# setup swap area
mkswap /swapfile
# turn swap on
swapon /swapfile
```
Or, if you have Raspbian installed on your SD card, you can install matplotlib from the repository:
```
apt-get install python-matplotlib
``` |
Memory error while using pip install Matplotlib | 29,466,663 | 16 | 2015-04-06T06:47:11Z | 31,526,029 | 36 | 2015-07-20T20:46:26Z | [
"python",
"matplotlib"
] | I am using Python 2.7. If I try to install Matplotlib using "pip install matplotlib", I get this error:
```
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 232, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 339, in run
requirement_set.prepare_files(finder)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 355, in prepare_files
do_download, session=self.session,
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 782, in unpack_url
session,
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 667, in unpack_http_url
from_path, content_type = _download_http_url(link, session, temp_dir)
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 843, in _download_http_url
_download_url(resp, link, content_file)
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 615, in _download_url
for chunk in progress_indicator(resp_read(4096), 4096):
File "/usr/local/lib/python2.7/dist-packages/pip/utils/ui.py", line 46, in iter
for x in it:
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 580, in resp_read
decode_content=False):
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 256, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 186, in read
data = self._fp.read(amt)
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 54, in read
self.__callback(self.__buf.getvalue())
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/controller.py", line 205, in cache_response
self.serializer.dumps(request, response, body=body),
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/serialize.py", line 81, in dumps
).encode("utf8"),
MemoryError
```
What might the problem be?
I am using a Raspberry Pi 2 with a 16 GB SD card. I still have 8 GB free but am still getting this error.
Kindly help | This error comes up because, it seems, pip's caching mechanism tries to read the entire file into memory before caching it… which poses a problem in a limited-memory environment, as matplotlib is ~50 MB.
A simpler solution, until pip is patched to use a constant-space caching algorithm, is to run `pip` with `--no-cache-dir` to avoid the cache:
```
$ pip --no-cache-dir install matplotlib
``` |
Generating random ID from list - jinja | 29,472,403 | 3 | 2015-04-06T13:38:12Z | 29,472,684 | 9 | 2015-04-06T13:54:54Z | [
"python",
"random",
"jinja2"
] | I am trying to generate a random ID from a list of contacts (in Python, with jinja2) to display in an HTML template.
So I have a list of contacts, and for the moment I display all of them in a few cells in my HTML template by looping through the list of contacts:
```
# for contact_db in contact_dbs
<tr>
<td>{{contact_db.key.id()}}</td>
<td>{{contact_db.name}}</td>
<td>{{contact_db.phone}}</td>
<td>{{contact_db.email}}</td>
</tr>
# endfor
```
The view that renders the above is:
```
def contact_list():
contact_dbs, contact_cursor = model.Contact.get_dbs(
user_key=auth.current_user_key(),
)
return flask.render_template(
'contact_list.html',
html_class='contact-list',
title='Contacts',
contact_dbs=contact_dbs,
next_url=util.generate_next_url(contact_cursor),
)
```
Instead, I would like to display one contact, selected randomly by its ID, and it should display another contact with all its information every time the user refreshes the page (I am not dealing with displaying the same contact twice for now by the way).
I know that it is possible to use `random` in a Python file to deal with random choices, but I am not sure how that translates to Jinja in the template.
Any help appreciated, thanks! | There is a `random` filter in `jinja2`.
> **random(seq)**
>
> Return a random item from the sequence.
Use it like this:
```
{% set selected_contact = contact_dbs|random %}
```
Note: I assumed `contact_dbs` is iterable.
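For comparison, Jinja's `random` filter behaves like Python's `random.choice`, which the question alludes to; a minimal standalone sketch (the contact dicts below are made-up examples, not the asker's actual model):

```python
import random

# Hypothetical stand-in for the contact_dbs list from the view.
contacts = [
    {'id': '123', 'name': 'Alice'},
    {'id': '456', 'name': 'Bob'},
    {'id': '789', 'name': 'Carol'},
]

# Equivalent of {{ contact_dbs|random }} in the template: pick one
# contact at random; each page render may yield a different one.
selected = random.choice(contacts)
print(selected['id'])
```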
Python method available for both instantiated/uninstantiated class | 29,473,131 | 4 | 2015-04-06T14:20:11Z | 29,473,221 | 7 | 2015-04-06T14:25:30Z | [
"python",
"methods",
"class-method"
] | I have a class whose `details` method populates the instance with information when it has already been created with an `id`. If no instance exists yet, I want `details` to instead use an argument passed to it as the `id` and return a new instantiated object. Something like the following:
```
f = Foo()
f.id = '123'
f.details()
```
but also allow for:
```
f = Foo.details(id='123')
```
Can I use the same `details` method to accomplish this? Or do I need to create two separate methods and make one a `@classmethod`? Can they have the same name if I declare one as a `@classmethod` and the other not? | You'll have to create your own descriptor to handle this; it'll have to bind to the class if no instance is available, otherwise to the instance:
```
class class_or_instance_method(object):
def __init__(self, func):
self.func = func
def __get__(self, instance, cls=None):
if instance is None:
return classmethod(self.func).__get__(None, cls)
return self.func.__get__(instance, cls)
```
This descriptor delegates to a `classmethod()` object if no instance is available, to produce the right binding.
Use it like this:
```
class Foo(object):
@class_or_instance_method
def details(cls_or_self, id=None):
        if isinstance(cls_or_self, type):
            ...  # called on a class
        else:
            ...  # called on an instance
```
You could make it more fancy by returning your own method-like wrapper object that passes in keyword arguments for the binding instead.
Demo:
```
>>> class Foo(object):
... @class_or_instance_method
... def details(cls_or_self, id=None):
... if isinstance(cls_or_self, type):
... return 'Class method with id {}'.format(id)
... else:
... return 'Instance method with id {}'.format(cls_or_self.id)
...
>>> Foo.details(42)
'Class method with id 42'
>>> f = Foo()
>>> f.id = 42
>>> f.details()
'Instance method with id 42'
``` |
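Tying the descriptor back to the original question — where `Foo.details(id='123')` should return a new instance — the class branch can construct one. A minimal runnable sketch; the instance branch's return string is a made-up placeholder, not part of the original answer:

```python
class class_or_instance_method(object):
    """Descriptor: binds to the class when no instance is available."""
    def __init__(self, func):
        self.func = func
    def __get__(self, instance, cls=None):
        if instance is None:
            # No instance: delegate to classmethod() for class binding.
            return classmethod(self.func).__get__(None, cls)
        return self.func.__get__(instance, cls)

class Foo(object):
    @class_or_instance_method
    def details(cls_or_self, id=None):
        if isinstance(cls_or_self, type):
            # Called on the class: build and return a new instance.
            obj = cls_or_self()
            obj.id = id
            return obj
        # Called on an instance: use its own id (placeholder behavior).
        return 'details for {}'.format(cls_or_self.id)

f = Foo.details(id='123')   # class call returns a new Foo
print(f.id)                 # 123
print(f.details())          # details for 123
```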