title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Listing contents of a bucket with boto3 | 30,249,069 | 13 | 2015-05-14T23:22:55Z | 30,262,449 | 9 | 2015-05-15T14:45:35Z | [
"python",
"amazon-s3",
"boto",
"boto3"
] | How can I see what's inside a bucket in S3 with `boto3`? (i.e. do an `"ls"`)?
Doing the following:
```
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')
```
returns:
```
s3.Bucket(name='some/path/')
```
How do I see its contents? | This is similar to an 'ls' but it does not take into account the prefix folder convention and will list the objects in the bucket. It's left up to the reader to filter out prefixes which are part of the Key name.
Python 2 (legacy `boto`)
```
from boto.s3.connection import S3Connection
conn = S3Connection() # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)
```
Python 3 (`boto3`)
```
from boto3 import client
conn = client('s3') # again assumes boto.cfg setup, assume AWS S3
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])
``` |
When is a reference created, and when is a new memory block allocated and then copy? | 30,249,921 | 7 | 2015-05-15T01:04:04Z | 30,250,018 | 7 | 2015-05-15T01:16:48Z | [
"python"
] | ```
>>> d
{1: 1, 2: 2, 3: 3}
>>> lst = [d, d]
>>> c=lst[0]
>>> c[1]=5
>>> lst
[{1: 5, 2: 2, 3: 3}, {1: 5, 2: 2, 3: 3}]
```
When `lst = [d, d]`, are `lst[0]` and `lst[1]` both references to the memory block of `d`, instead of creating two memory blocks and copying the content of `d` to them respectively?
When `c=lst[0]`, is `c` just a reference to the memory occupied by `lst[0]`, instead of creating a new memory block and copying the content from `lst[0]`?
In Python, when is a reference created to point to an existing memory block, and when is a new memory block allocated and the content copied?
This language feature of Python is different from C. What is the name of this language feature?
Thanks. | All variables (and other containers, such as dictionaries, lists, and object attributes) hold references to objects. Memory allocation occurs when the object is instantiated. Simple assignment always creates another reference to the existing object. For example, if you have:
```
a = [1, 2, 3]
b = a
```
Then `b` and `a` point to the same object, a list. You can verify this using the `is` operator:
```
print(b is a) # True
```
If you change `a`, then `b` changes too, because they are two names for the same object.
```
a.append(4)
print(b[3] == 4) # True
print(b[3] is a[3]) # also True
```
If you want to create a copy, you must do so explicitly. Here are some ways of doing this:
* For lists, use a slice: `b = a[:]`.
* For many types, you can use the type name to copy an existing object of that type: `b = list(a)`. When creating your own classes, this is a good approach to take if you need copy functionality.
* The `copy` module has methods that can be used to copy objects (either shallowly or deeply).
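As a small illustration of the last point (my own sketch, not part of the original answer), the shallow/deep distinction looks like this:

```python
import copy

a = [[1, 2], [3, 4]]

shallow = copy.copy(a)    # new outer list; the inner lists are still shared
deep = copy.deepcopy(a)   # new outer list and new inner lists

a[0].append(99)

print(shallow[0])  # [1, 2, 99] - the shallow copy shares the mutated inner list
print(deep[0])     # [1, 2]     - the deep copy is unaffected
```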
For immutable types, such as strings, numbers, and tuples, there is never any need to make a copy. You can only "change" these kinds of values by referencing different ones.
The best way of describing this is probably "everything's an object." In C, "primitive" types like integers are treated differently from arrays. In Python, they are not: all values are stored as references to objects, even integers. |
What's the difference between the square bracket and dot notations in Python? | 30,250,282 | 8 | 2015-05-15T01:52:25Z | 30,250,361 | 10 | 2015-05-15T02:02:49Z | [
"python",
"object",
"collections",
"attributes",
"member"
] | I come from a Javascript background (where properties can be accessed through both `.` and `[]` notation), so please forgive me, but, what, exactly is the difference between the two in Python?
From my experimentation it seems that `[]` should always be used, both to get the index of a `list` or `set` and to get the key in a `dictionary`. Is this correct, and, if not, when do you use a `.` in Python? | The dot operator is used for accessing attributes of any object. For example, a complex number
```
>>> c = 3+4j
```
has (among others) the two attributes `real` and `imag`:
```
>>> c.real
3.0
>>> c.imag
4.0
```
As well as those, it has a method, `conjugate()`, which is also an attribute:
```
>>> c.conjugate
<built-in method conjugate of complex object at 0x7f4422d73050>
>>> c.conjugate()
(3-4j)
```
Square bracket notation is used for accessing members of a collection, whether that's by key in the case of a dictionary or other mapping:
```
>>> d = {'a': 1, 'b': 2}
>>> d['a']
1
```
... or by index in the case of a sequence like a list or string:
```
>>> s = ['x', 'y', 'z']
>>> s[2]
'z'
>>> t = 'Kapow!'
>>> t[3]
'o'
```
These collections also, separately, have attributes:
```
>>> d.pop
<built-in method pop of dict object at 0x7f44204068c8>
>>> s.reverse
<built-in method reverse of list object at 0x7f4420454d08>
>>> t.lower
<built-in method lower of str object at 0x7f4422ce2688>
```
... and again, in the above cases, these attributes happen to be methods.
While all objects have some attributes, not all objects have members. For example, if we try to use square bracket notation to access a member of our complex number `c`:
```
>>> c[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'complex' object is not subscriptable
```
... we get an error (which makes sense, since there's no obvious way for a complex number to have members).
It's possible to define how `[]` and `.` access work in a user-defined class, using the special methods [`__getitem__()`](https://docs.python.org/3/reference/datamodel.html#object.__getitem__) and [`__getattr__()`](https://docs.python.org/3/reference/datamodel.html#object.__getattr__) respectively. Explaining how to do so is beyond the scope of this question, but you can read more about it in the [Python Tutorial](https://docs.python.org/3/tutorial/classes.html#a-first-look-at-classes). |
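To sketch that last point (a hypothetical example of mine, not from the original answer), a class can define both hooks:

```python
class Squares:
    """Sketch: customizing both access styles via special methods."""
    def __getitem__(self, index):
        # called for squares[n]
        return index * index

    def __getattr__(self, name):
        # called as a fallback for squares.<missing attribute>
        return "no attribute named %r here" % name

squares = Squares()
print(squares[4])        # 16
print(squares.whatever)  # the __getattr__ fallback string
```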
Collecting values from a Python dictionary with list of keys | 30,250,633 | 2 | 2015-05-15T02:37:36Z | 30,250,647 | 7 | 2015-05-15T02:38:45Z | [
"python",
"function",
"dictionary",
"key"
] | ```
Dict = {'w1': 56, 'w2': 19, 'w3': 77, 'w4': 45, 'w5': 31}
myWords = ['w1','w4','w5']
OutputList=[]
for items in myWords:
    tmps = Dict[items]
    OutputList.append(tmps)
```
My question is: can we, *without* using a **`for` loop**, collect the values (output) from a dictionary with a particular list of keys (`myWords`)? | You can use a *list comprehension*:
```
OutputList = [Dict[x] for x in myWords]
``` |
Collecting values from a Python dictionary with list of keys | 30,250,633 | 2 | 2015-05-15T02:37:36Z | 30,250,759 | 18 | 2015-05-15T02:51:19Z | [
"python",
"function",
"dictionary",
"key"
] | ```
Dict = {'w1': 56, 'w2': 19, 'w3': 77, 'w4': 45, 'w5': 31}
myWords = ['w1','w4','w5']
OutputList=[]
for items in myWords:
    tmps = Dict[items]
    OutputList.append(tmps)
```
My question is: can we, *without* using a **`for` loop**, collect the values (output) from a dictionary with a particular list of keys (`myWords`)? | This is what [`operator.itemgetter`](https://docs.python.org/2/library/operator.html#operator.itemgetter) is for:
```
>>> import operator
>>> Dict = {'w1': 56, 'w2': 19, 'w3': 77, 'w4': 45, 'w5': 31}
>>> myWords = ['w1','w4','w5']
>>> operator.itemgetter(*myWords)(Dict)
(56, 45, 31)
``` |
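One caveat worth adding (my note, not part of the answer above): with several keys `itemgetter` returns a tuple, but with a single key it returns the bare value:

```python
from operator import itemgetter

Dict = {'w1': 56, 'w2': 19, 'w3': 77, 'w4': 45, 'w5': 31}

print(itemgetter('w1', 'w4', 'w5')(Dict))  # (56, 45, 31) - a tuple for multiple keys
print(itemgetter('w1')(Dict))              # 56 - a bare value for a single key
```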
How do you get the first 3 elements in Python OrderedDict? | 30,250,715 | 5 | 2015-05-15T02:45:56Z | 30,250,803 | 10 | 2015-05-15T02:56:44Z | [
"python",
"dictionary",
"ordereddictionary"
] | How do you get the first 3 elements in Python OrderedDict?
Also, is it possible to delete data from this dictionary?
For example: How would I get the first 3 elements in Python OrderedDict and delete the rest of the elements? | Let's create a simple `OrderedDict`:
```
>>> from collections import OrderedDict
>>> od = OrderedDict(enumerate("abcdefg"))
>>> od
OrderedDict([(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e'), (5, 'f'), (6, 'g')])
```
To return the first three *keys*, *values* or *items* respectively:
```
>>> list(od)[:3]
[0, 1, 2]
>>> list(od.values())[:3]
['a', 'b', 'c']
>>> list(od.items())[:3]
[(0, 'a'), (1, 'b'), (2, 'c')]
```
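If materializing the whole dictionary as a list is undesirable, `itertools.islice` can take just the first three items lazily (my addition, not part of the original answer):

```python
from collections import OrderedDict
from itertools import islice

od = OrderedDict(enumerate("abcdefg"))

first_three = list(islice(od.items(), 3))
print(first_three)  # [(0, 'a'), (1, 'b'), (2, 'c')]
```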
To remove everything except the first three items:
```
>>> while len(od) > 3:
...     od.popitem()
...
(6, 'g')
(5, 'f')
(4, 'e')
(3, 'd')
>>> od
OrderedDict([(0, 'a'), (1, 'b'), (2, 'c')])
``` |
How to check if a number is in a interval | 30,255,450 | 3 | 2015-05-15T08:59:05Z | 30,255,466 | 7 | 2015-05-15T08:59:44Z | [
"python",
"coding-style",
"range",
"intervals"
] | Suppose I got:
```
first_var = 1
second_var = 5
interval = 2
```
I want an interval from second\_var like `second_var ± interval` (from 3 to 7).
I want to check if first\_var is in that interval.
So in this specific case I want `False`.
If `first_var = 4`, I want `True`.
I can do this:
```
if (first_var > second_var-interval) and (first_var < second_var+interval):
    # True
```
Is there a more pythonic way to do this? | You can use a math-like chained comparison, as Python supports that:
```
if (second_var-interval < first_var < second_var+interval):
    # True
```
Note that comments in Python begin with a `#`. |
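An equivalent way to write the same check, offered here as an extra suggestion rather than part of the answer above, is a distance test with `abs`:

```python
first_var = 1
second_var = 5
interval = 2

# Chained comparison, as in the answer above:
print(second_var - interval < first_var < second_var + interval)  # False

# Equivalent distance check:
print(abs(first_var - second_var) < interval)  # False

first_var = 4
print(abs(first_var - second_var) < interval)  # True
```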
Why doesn't len(None) return 0? | 30,257,332 | 2 | 2015-05-15T10:34:17Z | 30,257,360 | 13 | 2015-05-15T10:36:14Z | [
"python",
"nonetype"
] | `None` in Python is an object.
```
>>> isinstance(None, object)
True
```
And as such it can employ methods like `__str__()`:
```
>>> str(None)
'None'
```
But why doesn't it do the same for `__len__()`?
```
>>> len(None)
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
len(None)
TypeError: object of type 'NoneType' has no len()
```
It seems like it would be Pythonic the same way that `if list` is acceptable even if the variable is `None` and not just an empty list.
Are there cases that would make the use of `len(None)` more of a problem? | `len` only makes sense for collections of objects - None is not a collection. |
Proper way to consume data from RESTFUL API in django | 30,259,452 | 11 | 2015-05-15T12:22:29Z | 30,312,778 | 32 | 2015-05-18T21:02:40Z | [
"python",
"django",
"rest",
"restful-architecture"
] | I'm trying to learn django so while I have a current solution I'm not sure if it follows best practices in django. I would like to display information from a web api on my website. Let's say the api url is as follows:
```
http://api.example.com/books?author=edwards&year=2009
```
This would return a list of books by Edwards written in the year 2009, in the following format:
```
{'results':
[
{
'title':'Book 1',
'Author':'Edwards Man',
'Year':2009
},
{
'title':'Book 2',
'Author':'Edwards Man',
'Year':2009}
]
}
```
Currently I am consuming the API in my views file as follows:
```
class BooksPage(generic.TemplateView):
    def get(self, request):
        r = requests.get('http://api.example.com/books?author=edwards&year=2009')
        books = r.json()
        books_list = {'books': books['results']}
        return render(request, 'books.html', books_list)
```
Normally, we grab data from the database in the models.py file, but I am unsure if I should be grabbing this API data in models.py or views.py. If it should be in models.py, can someone provide an example of how to do this? I wrote the above example specifically for Stack Overflow, so any bugs are purely a result of writing it here. | I like the approach of putting that kind of logic in a separate service layer (services.py); the data you are rendering is not quite a "model" in the Django ORM sense, and it's more than simple "view" logic. A clean encapsulation ensures you can do things like control the interface to the backing service (i.e., make it look like a Python API vs. URL with parameters), add enhancements such as caching, as @sobolevn mentioned, test the API in isolation, etc.
So I'd suggest a simple `services.py`, that looks something like this:
```
def get_books(year, author):
    url = 'http://api.example.com/books'
    params = {'year': year, 'author': author}
    r = requests.get(url, params=params)
    books = r.json()
    books_list = {'books': books['results']}
    return books_list
```
Note how the parameters get passed (using a capability of the `requests` package).
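The caching enhancement mentioned above would normally use `django.core.cache`, but the shape of the idea can be sketched with a tiny self-contained TTL memoizer (entirely my own sketch, with a stand-in for the HTTP call):

```python
import time

_cache = {}

def cached(ttl):
    """Tiny TTL memoizer; in a real Django project, use django.core.cache."""
    def decorator(fn):
        def wrapper(*args):
            now = time.time()
            hit = _cache.get((fn.__name__,) + args)
            if hit is not None and now - hit[0] < ttl:
                return hit[1]
            value = fn(*args)
            _cache[(fn.__name__,) + args] = (now, value)
            return value
        return wrapper
    return decorator

calls = []

@cached(ttl=300)
def get_books(year, author):
    calls.append((year, author))  # stand-in for the real requests.get(...)
    return {'books': ['Book 1', 'Book 2']}

get_books('2009', 'edwards')
get_books('2009', 'edwards')
print(len(calls))  # 1 - the second call was served from the cache
```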
Then in `views.py`:
```
import services

class BooksPage(generic.TemplateView):
    def get(self, request):
        books_list = services.get_books('2009', 'edwards')
        return render(request, 'books.html', books_list)
```
See also:
* [Separation of business logic and data access in django](http://stackoverflow.com/questions/12578908/separation-of-business-logic-and-data-access-in-django) |
Why the below piece of regular expression code is returning comma(,) | 30,259,636 | 2 | 2015-05-15T12:32:02Z | 30,259,680 | 9 | 2015-05-15T12:34:49Z | [
"python",
"regex",
"python-2.7"
] | Please let me know why the following piece of code is giving the below result
```
>>> pattern = re.compile(r'[!#$%&()*+-.]')
>>> pattern.findall("a,b")
[',']
```
There is no comma (`,`) in the `re.compile` pattern, so why is it matching a comma? | `[+-.]` matches any single character in the range from `+` (ASCII 43) to `.` (ASCII 46).
Between those two characters you find `,` (ASCII 44) and `-` (ASCII 45).
I guess you wanted `\-` instead of `-`. |
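A quick check of the suggested fix (my addition): escaping the hyphen, or moving it to the end of the class, removes the unintended range:

```python
import re

# Hyphen escaped, so it matches a literal '-' instead of forming a range:
fixed = re.compile(r'[!#$%&()*+\-.]')
print(fixed.findall("a,b"))  # [] - the comma no longer matches

# Placing the hyphen at the end of the class also makes it literal:
alt = re.compile(r'[!#$%&()*+.-]')
print(alt.findall("a-b,c"))  # ['-']
```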
Python 3: AttributeError: 'module' object has no attribute '__path__' using urllib in terminal | 30,261,860 | 5 | 2015-05-15T14:20:01Z | 30,261,879 | 7 | 2015-05-15T14:20:37Z | [
"python",
"urllib"
] | My code is runnning perfectly in PyCharm, but I have error messages while trying to open it in terminal. What's wrong with my code, or where I made mistakes?
```
import urllib.request
with urllib.request.urlopen('http://python.org/') as response:
html = response.read()
print(html)
```
Output from terminal:
```
λ python Desktop\url1.py
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 2218, in _find_and_load_unlocked
AttributeError: 'module' object has no attribute '__path__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Desktop\url1.py", line 1, in <module>
import urllib.request
File "C:\Users\Przemek\Desktop\urllib.py", line 1, in <module>
import urllib.request
ImportError: No module named 'urllib.request'; 'urllib' is not a package
``` | You called a file `C:\Users\Przemek\Desktop\urllib.py`, so you are importing from it rather than the actual standard-library module. Rename `C:\Users\Przemek\Desktop\urllib.py` and remove any `C:\Users\Przemek\Desktop\urllib.pyc`.
It is not the file you are running, but it sits in the same directory, and Python checks the current directory first; hence the error. |
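A quick diagnostic I would add (not part of the original answer) to confirm which file Python is actually importing:

```python
import urllib
# If this prints a path on your Desktop rather than inside the Python
# installation, that file is shadowing the standard-library package.
print(urllib.__file__)
```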
A fast way to find an all zero answer | 30,262,104 | 21 | 2015-05-15T14:30:26Z | 31,240,100 | 7 | 2015-07-06T07:33:25Z | [
"python",
"algorithm",
"math",
"numpy",
"optimization"
] | For every array of length n+h-1 with values from 0 and 1, I would like to check if there exists another non-zero array of length n with values from -1,0,1 so that all the h inner products are zero. My naive way to do this is
```
import numpy as np
import itertools
(n, h) = 4, 3
for longtuple in itertools.product([0, 1], repeat=n+h-1):
    bad = 0
    for v in itertools.product([-1, 0, 1], repeat=n):
        if not any(v):
            continue
        if not np.correlate(v, longtuple, 'valid').any():
            bad = 1
            break
    if bad == 0:
        print "Good"
        print longtuple
```
This is very slow if we set `n = 19` and `h = 10` which is what I would like to test.
> My goal is to find a single "Good" array of length `n+h-1`. Is there a
> way to speed this up so that `n = 19` and `h = 10` is feasible?
The current naive approach takes 2^(n+h-1) * 3^n iterations, each one of which takes roughly n time. That is 311,992,186,885,373,952 iterations for `n = 19` and `h = 10`, which is impossible.
**Note 1** Changed `convolve` to `correlate` so that the code considers `v` the right way round.
---
**July 10 2015**
The problem is still open with no solution fast enough for `n=19` and `h=10` given yet. | Consider the following "meet in the middle" approach.
First, recast the situation in the matrix formulation provided by leekaiinthesky.
Next, note that we only have to consider "short" vectors `s` of the form `{0,1}^n` (i.e., short vectors containing only 0's and 1's) if we change the problem to finding an `h x n` [Hankel matrix](https://en.wikipedia.org/wiki/Hankel_matrix) `H` of 0's and 1's such that `Hs1` is never equal to `Hs2` for two different short vectors of 0's and 1's. That is because `Hs1 = Hs2` implies `H(s1-s2)=0` which implies there is a vector `v` of 1's, 0's and -1's, namely `s1-s2`, such that `Hv = 0`; conversely, if `Hv = 0` for `v` in `{-1,0,1}^n`, then we can find `s1` and `s2` in `{0,1}^n` such that `v = s1 - s2` so `Hs1 = Hs2`.
When `n=19` there are only 524,288 vectors `s` in `{0,1}^n` to try; hash the results `Hs` and if the same result occurs twice then `H` is no good and try another `H`. In terms of memory this approach is quite feasible. There are `2^(n+h-1)` Hankel matrices `H` to try; when `n=19` and `h=10` that's 268,435,456 matrices. That's up to `2^47` tests in the worst case (about 1.4 * 10^14), each with about `nh` operations to multiply the matrix `H` and the vector `s` - though the early exit on the first duplicate means far fewer tests in practice. That seems feasible, no?
Since you're now only dealing with 0's and 1's, not -1's, you might also be able to speed up the process by using bit operations (shift, and, and count 1's).
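For intuition, the duplicate-detection idea can be sketched in Python for small `n` and `h` (my own condensed version; the C++ program below is the one actually used for the big cases):

```python
from itertools import product

def is_good(long_bits, n, h):
    """True if all 2**n short 0/1 vectors give distinct tuples of the h
    shifted dot products against the long vector (encoded as an int)."""
    seen = set()
    for s in range(1 << n):
        key = tuple(bin(long_bits & (s << k)).count("1") for k in range(h))
        if key in seen:
            return False  # a collision means a {-1,0,1} kernel vector exists
        seen.add(key)
    return True

n, h = 4, 3
good = [v for v in range(1 << (n + h - 1)) if is_good(v, n, h)]
print(len(good))  # number of "good" long vectors for this tiny case
```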
**Update**
I implemented my idea in C++. I'm using bit operations to calculate dot products, encoding the resulting vector as a long integer, and using unordered\_set to detect duplicates, taking an early exit from a given long vector when a duplicate vector of dot products is found.
I obtained 00000000010010111000100100 for n=17 and h=10 after a few minutes, and 000000111011110001001101011 for n=18 and h=10 in a little while longer. I'm just about to run it for n=19 and h=10.
```
#include <iostream>
#include <bitset>
#include <unordered_set>
/* Count the number of 1 bits in 32 bit int x in 21 instructions.
* From /Hackers Delight/ by Henry S. Warren, Jr., 5-2
*/
int count1Bits(int x) {
    x = x - ((x >> 1) & 0x55555555);
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
    x = (x + (x >> 4)) & 0x0F0F0F0F;
    x = x + (x >> 8);
    x = x + (x >> 16);
    return x & 0x0000003F;
}

int main () {
    const int n = 19;
    const int h = 10;
    std::unordered_set<long> dotProductSet;
    // look at all 2^(n+h-1) possibilities for longVec
    // upper n bits cannot be all 0 so we can start with 1 in pos h
    for (int longVec = (1 << (h-1)); longVec < (1 << (n+h-1)); ++longVec) {
        dotProductSet.clear();
        bool good = true;
        // now look at all n digit non-zero shortVecs
        for (int shortVec = 1; shortVec < (1 << n); ++shortVec) {
            // longVec dot products with shifted shortVecs generates h results
            // each between 0 and n inclusive, can encode as h digit number in
            // base n+1, up to (n+1)^h = 20^10 approx 13 digits, need long
            long dotProduct = 0;
            // encode h dot products of shifted shortVec with longVec
            // as base n+1 integer
            for (int startShort = 0; startShort < h; ++startShort) {
                int shortVecShifted = shortVec << startShort;
                dotProduct *= n+1;
                dotProduct += count1Bits(longVec & shortVecShifted);
            }
            auto ret = dotProductSet.insert(dotProduct);
            if (!ret.second) {
                good = false;
                break;
            }
        }
        if (good) {
            std::cout << std::bitset<(n+h-1)>(longVec) << std::endl;
            break;
        }
    }
    return 0;
}
```
**Second Update**
The program for n=19 and h=10 ran for two weeks in the background on my laptop. At the end, it just exited without printing any results. Barring some kind of error in the program, it looks like there are no long vectors with the property you want. I suggest looking for theoretical reasons why there are no such long vectors. Perhaps some kind of counting argument will work. |
Django 1.8 TEMPLATE_DIRS being ignored | 30,262,343 | 3 | 2015-05-15T14:41:03Z | 30,262,395 | 14 | 2015-05-15T14:43:11Z | [
"python",
"django"
] | This is driving me crazy. I've done something weird and it appears that my TEMPLATE\_DIRS entries are being ignored. I have only one settings.py file, located in the project directory, and it contains:
```
TEMPLATE_DIRS = (
    os.path.join(BASE_DIR, 'templates'),
    os.path.join(BASE_DIR, 'web_app/views/'),
)
```
I'm putting project-level templates in the /templates folder, and then have folders for different view categories in my app folder (e.g. authentication views, account views, etc.).
For example, my main index page view is in web\_app/views/main/views\_main.py and looks like
```
from web_app.views.view_classes import AuthenticatedView, AppView
class Index(AppView):
    template_name = "main/templates/index.html"
```
where an AppView is just an extension of TemplateView. Here's my problem: when I try to visit the page, I get a TemplateDoesNotExist exception and the part that's really confusing me is the Template-Loader Postmortem:
```
Template-loader postmortem
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
Using loader django.template.loaders.app_directories.Loader:
C:\Python34\lib\site-packages\django\contrib\admin\templates\main\templates\index.html (File does not exist)
C:\Python34\lib\site-packages\django\contrib\auth\templates\main\templates\index.html (File does not exist)
```
Why in the world are the 'templates' and 'web\_app/views' directories not being searched? I've checked Settings via the debugger and a breakpoint in views\_main.py and it looks like they're in there. Has anyone had a similar problem? Thanks. | What version of Django are you using? `TEMPLATE_DIRS` is deprecated since **1.8**.
> Deprecated since version 1.8:
> Set the DIRS option of a DjangoTemplates backend instead.
<https://docs.djangoproject.com/en/1.8/ref/settings/#template-dirs>
So try this instead:
```
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            # insert your TEMPLATE_DIRS here
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                # Insert your TEMPLATE_CONTEXT_PROCESSORS here or use this
                # list if you haven't customized them:
                'django.contrib.auth.context_processors.auth',
                'django.template.context_processors.debug',
                'django.template.context_processors.i18n',
                'django.template.context_processors.media',
                'django.template.context_processors.static',
                'django.template.context_processors.tz',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
```
Here's a link to an upgrade guide: <https://docs.djangoproject.com/en/1.8/ref/templates/upgrading/> |
InvalidBasesError: Cannot resolve bases for [<ModelState: 'users.GroupProxy'>] | 30,267,237 | 9 | 2015-05-15T19:21:10Z | 31,733,326 | 9 | 2015-07-30T20:22:37Z | [
"python",
"django",
"unit-testing",
"django-models",
"django-migrations"
] | When I run tests I get this error during database initialization:
```
django.db.migrations.state.InvalidBasesError: Cannot resolve bases for [<ModelState: 'users.GroupProxy'>]
This can happen if you are inheriting models from an app with migrations (e.g. contrib.auth)
```
I created this proxy for the `contrib.auth` Group model to place it in my app in the Django admin:
```
class GroupProxy(Group):
    class Meta:
        proxy = True
        verbose_name = Group._meta.verbose_name
        verbose_name_plural = Group._meta.verbose_name_plural
```
So what can I do to fix this issue? | After a lot of digging on this, the only thing that worked for me was to
comment out the offending apps, run migrations, then add them back in again.
Just a workaround, but hopefully it helps somebody. |
Limit memory usage? | 30,269,238 | 15 | 2015-05-15T21:45:35Z | 30,269,998 | 8 | 2015-05-15T23:02:55Z | [
"python",
"linux",
"memory",
"ulimit"
] | I run Python 2.7 on a Linux machine with 16GB Ram and 64 bit OS. A python script I wrote can load too much data into memory, which slows the machine down to the point where I cannot even kill the process any more.
While I can limit memory by calling:
```
ulimit -v 12000000
```
in my shell before running the script, I'd like to include a limiting option in the script itself. Everywhere I looked, the `resource` module is cited as having the same power as `ulimit`. But calling:
```
import resource
_, hard = resource.getrlimit(resource.RLIMIT_DATA)
resource.setrlimit(resource.RLIMIT_DATA, (12000, hard))
```
at the beginning of my script does absolutely nothing. Even setting the value as low as 12000 never crashed the process. I tried the same with `RLIMIT_STACK`, as well with the same result. Curiously, calling:
```
import subprocess
subprocess.call('ulimit -v 12000', shell=True)
```
does nothing as well.
What am I doing wrong? I couldn't find any actual usage examples online. | [`resource.RLIMIT_VMEM`](https://docs.python.org/2/library/resource.html?highlight=setrlimit#resource.RLIMIT_VMEM) is the resource [corresponding to `ulimit -v`](http://git.savannah.gnu.org/cgit/bash.git/tree/builtins/ulimit.def).
`RLIMIT_DATA` [only affects `brk/sbrk` system calls](http://linux.die.net/man/2/setrlimit) while [newer memory managers tend to use `mmap` instead](http://cboard.cprogramming.com/linux-programming/101090-what-differences-between-brk-mmap.html).
The second thing to note is that [`ulimit`](http://linux.die.net/man/3/ulimit)/[`setrlimit`](http://linux.die.net/man/2/setrlimit) only affects the current process and its future children.
Regarding the `AttributeError: 'module' object has no attribute 'RLIMIT_VMEM'` message: the [`resource` module docs](https://docs.python.org/2/library/resource.html) mention this possibility:
> This module does not attempt to mask platform differences: symbols
> not defined for a platform will not be available from this module on
> that platform.
According to the [`bash` `ulimit` source](http://git.savannah.gnu.org/cgit/bash.git/tree/builtins/ulimit.def) linked to above, it uses `RLIMIT_AS` if `RLIMIT_VMEM` is not defined. |
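Putting this together, a sketch of what the script could do instead (assuming Linux; note that `ulimit -v` takes KiB while `setrlimit` takes bytes):

```python
import resource

def limit_virtual_memory(max_kib):
    """Apply the equivalent of `ulimit -v max_kib` to this process and
    its future children, using RLIMIT_AS (address space, in bytes)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    wanted = max_kib * 1024
    if hard != resource.RLIM_INFINITY:
        wanted = min(wanted, hard)  # the soft limit may not exceed the hard limit
    resource.setrlimit(resource.RLIMIT_AS, (wanted, hard))

limit_virtual_memory(12000000)  # roughly the 12 GB from the question
```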
Python classes: Inheritance vs Instantiation | 30,269,817 | 10 | 2015-05-15T22:43:10Z | 30,270,005 | 7 | 2015-05-15T23:03:32Z | [
"python",
"class",
"tkinter"
] | I'm creating a GUIclass that uses Frame() as its base class.
In my GUIclass's `__init__` method I want to create a Frame widget.
Right now I have:
```
class GUIclass(Frame):
    def __init__(self, parent):
        frame = Frame(self, parent)
```
But I've seen this elsewhere for the third line:
```
Frame.__init__(self, parent)
```
I'm new to programming, python and definitely inheritance and I wanted to know if I understand the difference between the two correctly. I did a lot of researching and reading, I promise, but I couldn't quite find anything that made it completely clear:
In the first situation I don't call the `__init__` method explicitly: I created a Frame object (`frame`), and when an object is created its `__init__` method is called implicitly by Python.
In the second scenario, one is calling the `__init__` method on the class (which I believe is totally legit?) because a Frame object wasn't created, so Python wouldn't call it automatically.
Is that right?
I've also seen:
```
frame = Frame.__init__(self, parent)
```
which really threw me off. Is this just someone doing something redundant or is there a reason for this?
Thank you for your help, I want to take it slow for now and make sure I fully understand any and every line of code I write as I go rather than writing and running a whole program I half understand. | You should call
```
super(GUIclass, self).__init__(parent)
```
This is the proper way to call (all) your inherited `__init__()` method(s). It has in many cases identical results compared to the mentioned
```
Frame.__init__(self, parent)
```
which only lacks the abstraction concerning the inheritance relationships and states the class `Frame` as the one and only class whose `__init__()` method you might want to call (more about that later).
The also mentioned
```
frame = Frame(self.parent)
```
is wrong in this context. It belongs to another pattern of object relationship, namely *contents* relationship instead of *inheritance* relationship (which you aim at). It will *create* a new object of class `Frame` instead of initializing the `Frame` parts of yourself; in inheritance relationships you *are* a `Frame`, so you have to initialize `yourself` as one as well as initializing your specialized parts (which is done in the rest of your `__init__()` method). In *contents* relationship models you merely *have* a `Frame`.
Now, what about that "slight" difference I mentioned above between calling `super(GUIclass, self).__init__(parent)` and `Frame.__init__(self, parent)`?
To understand that you need to dig deeper into inheritance relationships and the various possibilities these offer, especially with multiple inheritance.
Consider a diamond-shaped relationship model which looks like this:
```
          Frame
         /     \
   GUIclass   SomeOtherClass
         \     /
      AnotherClass
```
In your current scenario you only have the top left two classes, but one never knows what's coming, and you should always code in a way that keeps all options open for the next user of your code.
In this diamond-shaped pattern you have `AnotherClass` which inherits `GUIClass` and `SomeOtherClass` which in turn both inherit `Frame`.
If you now use the pattern `Frame.__init__(self, parent)` in both `GUIclass` and `SomeOtherClass`, then calling their `__init__()` methods from the `__init__()` method of `AnotherClass` will result in a doubled calling of the `Frame`'s `__init__()` method. This typically is not intended, and to take care that this does not happen, the `super` call was invented. It takes care that a decent calling order calls each of the `__init__()` methods only and exactly once. |
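The difference can be demonstrated with a small runnable sketch (generic class names rather than tkinter's, to keep it self-contained):

```python
calls = []

class Base:
    def __init__(self):
        calls.append("Base")

class Left(Base):
    def __init__(self):
        super(Left, self).__init__()   # cooperative: follows the MRO
        calls.append("Left")

class Right(Base):
    def __init__(self):
        super(Right, self).__init__()
        calls.append("Right")

class Bottom(Left, Right):
    def __init__(self):
        super(Bottom, self).__init__()
        calls.append("Bottom")

Bottom()
print(calls)  # ['Base', 'Right', 'Left', 'Bottom'] - Base.__init__ ran exactly once
```

Had each class called `Base.__init__(self)` directly instead, `Base`'s initializer would have run twice.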
How to include prerequisite apps in INSTALLED_APPS | 30,270,334 | 9 | 2015-05-15T23:51:50Z | 30,472,909 | 14 | 2015-05-27T03:59:20Z | [
"python",
"django"
] | I have a Django App I'm building, which we will call `foo`.
Because of the way Foo is built, it requires a number of third-party Django apps to function. For example, to run `Foo`, the `INSTALLED_APPS` setting might look like:
```
INSTALLED_APPS = ('prereq1', 'prereq2', 'foo')
```
In fact, for `Foo` to even be functional, `'prereq1'` and `'prereq2'` *have to be installed* in Django. Now, I can add requirements to `requirements.txt` or `setup.py` to make sure the libraries are installed when someone goes to install `Foo`, but I can't figure out if there is a way to have them installed in Django itself.
The reason for this is if someone wants to use Foo, I don't want to include instructions like:
> In your `INSTALLED_APPS` add `foo` but also add `scary_looking_library_name` and `thing_you_dont_understand`.
So is it possible for an app in `INSTALLED_APPS` to somehow require or inject further apps into that list? | I agree with Daniel Roseman's answer about the [system checks framework](https://docs.djangoproject.com/en/1.8/topics/checks/#writing-your-own-checks) being an optimal place for these checks. The system checks framework was introduced in Django 1.7.
However, assuming you have documentation, you can also document these prerequisites such as [Django REST Framework did in their installation instructions](http://www.django-rest-framework.org/#installation).
You can then do something like the below in your code ([django-mptt](https://github.com/django-mptt/django-mptt) used as an example):
```
try:
    from mptt.fields import TreeForeignKey
    from mptt.models import MPTTModel
except ImportError:
    raise ImportError(
        'You are using the `foo` app which requires the `django-mptt` module. '
        'Be sure to add `mptt` to your INSTALLED_APPS for `foo` to work properly.'
    )
```
This is the method I have seen used in multiple applications. The onus of reading the documentation lies on the developer.
Perhaps this is an unwanted/unnecessary opinion, but the injection of dependencies into the `INSTALLED_APPS` is nothing that I feel you should be handling with your application.
I typically try to follow the [Zen of Python](https://www.python.org/dev/peps/pep-0020/) when designing applications:
1. ***"Explicit is better than implicit."***
* Have the developer manually enter the dependencies in `INSTALLED_APPS`
2. ***"If the implementation is hard to explain, it's a bad idea."***
* Trying to figure out a way to inject the dependencies into the `INSTALLED_APPS` is hard to explain. If the third-party dependencies are complicated, let the developer decide.
3. ***"Simple is better than complex."***
* It is easier to document the dependencies and require the developer to add them to the `INSTALLED_APPS`.
4. ***"There should be one-- and preferably only one --obvious way to do it."***
* The common practice is to have the developer add the third party apps to the `INSTALLED_APPS` - which is why there is no obvious way to do what you want (injection).
If a developer wants to activate an app, they will. As you so eloquently stated in your example, `scary_looking_library_name` and `thing_you_dont_understand` are the responsibility of the developer to understand. Choosing to install them for the developer imposes an unnecessary security risk. Let the developer choose to use your application and initialize its dependencies.
Django 1.8 LookupError AUTH_USER_MODEL | 30,270,381 | 4 | 2015-05-15T23:57:51Z | 30,270,705 | 13 | 2015-05-16T00:50:37Z | [
"python",
"django",
"django-models",
"django-authentication",
"django-1.8"
] | I'm using a custom user model as such in my app called `fowl`.
When I run `syncdb` or `makemigrations` or `migrate` I get a `LookupError`. Please help
In `settings.py` I have defined `AUTH_USER_MODEL` as `'fowl.User'`
**fowl/models.py**
```
from django.db import models
from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin, BaseUserManager
from django.utils import timezone
from django.core.mail import send_mail
from django.utils.translation import ugettext_lazy as _
class UserManager(BaseUserManager):
    def create_user(self, email, password=None):
        """
        Creates and saves a User with the given email, date of
        birth and password.
        """
        if not email:
            raise ValueError('Users must have an email address')
        user = self.model(
            email=self.normalize_email(email),
        )
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, password):
        """
        Creates and saves a superuser with the given email, date of
        birth and password.
        """
        user = self.create_user(email,
                                password=password,
                                )
        user.is_admin = True
        user.save(using=self._db)
        return user

class User(AbstractBaseUser, PermissionsMixin):
    """
    Custom user class.
    """
    email = models.EmailField(_('email address'), unique=True, db_index=True)
    is_active = models.BooleanField(default=True)
    is_admin = models.BooleanField(default=False)
    is_staff = models.BooleanField(_('staff status'), default=False)
    date_joined = models.DateTimeField(default=timezone.now)
    first_name = models.CharField(_('first name'), max_length=30, blank=True)
    last_name = models.CharField(_('last name'), max_length=30, blank=True)
    is_festival = models.BooleanField(default=True)

    objects = UserManager()

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = []

    def __unicode__(self):
        return self.email

    class Meta:
        verbose_name = _('user')
        verbose_name_plural = _('users')
        abstract = True

    def get_full_name(self):
        """
        Returns the first_name plus the last_name, with a space in between.
        """
        full_name = '%s %s' % (self.first_name, self.last_name)
        return full_name.strip()

    def get_short_name(self):
        """
        Returns the short name for the user.
        """
        return self.first_name

    def email_user(self, subject, message, from_email=None, **kwargs):
        """
        Sends an email to this User.
        """
        send_mail(subject, message, from_email, [self.email], **kwargs)

    @property
    def is_festival(self):
        """Is the user a member of staff?"""
        return self.is_festival
```
When I run `syncdb` or `makemigrations` I get a `LookupError`
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/core/management/base.py", line 390, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/core/management/base.py", line 440, in execute
self.check()
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/core/management/base.py", line 478, in check
include_deployment_checks=include_deployment_checks,
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/core/checks/registry.py", line 72, in run_checks
new_errors = check(app_configs=app_configs)
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/contrib/auth/checks.py", line 12, in check_user_model
cls = apps.get_model(settings.AUTH_USER_MODEL)
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/apps/registry.py", line 202, in get_model
return self.get_app_config(app_label).get_model(model_name.lower())
File "/Users/Blu/projects/fowl/env/lib/python2.7/site-packages/django/apps/config.py", line 162, in get_model
"App '%s' doesn't have a '%s' model." % (self.label, model_name))
LookupError: App 'fowl' doesn't have a 'user' model.
```
What am I doing wrong? | In your `User` class's `Meta`, you have set `abstract = True`, which is causing the issue. Remove it and then run `makemigrations`.
Also, note that you are repeating many fields already present in [`AbstractUser` class](https://github.com/django/django/blob/master/django/contrib/auth/models.py). You may eliminate them.
```
class User(AbstractBaseUser, PermissionsMixin):
    is_admin = models.BooleanField(default=False)
    is_festival = models.BooleanField(default=True)

    #The properties, methods, etc..
``` |
Localise float notation | 30,273,056 | 4 | 2015-05-16T07:35:41Z | 30,273,093 | 9 | 2015-05-16T07:41:57Z | [
"python",
"localization"
] | In Python a float has the following notation: `35.45`. However in Belgium the notation is a bit different: `34,78` . For my thesis it's very important that the floats are printed in the right notation. I could convert every float to a string and change the `'.'` to a `','` but I wondered if there was any other solution. | You can use [`str`](https://docs.python.org/2/library/locale.html#locale.str) function from [`locale`](https://docs.python.org/2/library/locale.html) package:
```
>>> import locale
>>> locale.setlocale(locale.LC_ALL, "nl_BE")
'nl_BE'
>>> locale.str(234.2)
'234,2'
```
You can also convert localised string to float:
```
>>> locale.atof("23424,2")
23424.2
``` |
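If the `nl_BE` locale happens not to be installed on the target machine, `setlocale` will raise an error. As a dependency-free fallback for plain floats, the string replacement mentioned in the question can be wrapped in a small helper (this sketch handles only the decimal comma, not thousands separators):

```python
def localise_float(x):
    # Replace the decimal point with a decimal comma (Belgian notation).
    # Naive: assumes str(x) uses '.' and adds no thousands separators.
    return str(x).replace('.', ',')

print(localise_float(34.78))  # 34,78
```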
How to swap values in python? | 30,274,234 | 2 | 2015-05-16T09:59:22Z | 30,274,329 | 7 | 2015-05-16T10:08:54Z | [
"python",
"list",
"variables",
"numbers",
"swap"
] | I know how to swap two variables together but I want to know if there is a quicker way to follow a certain pattern.
So I have this list of numbers: `list=[1,2,3,4,5,6]`
And what I want to do is to swap a number with the following one and swap the next number with the number after it. So after swapping them it would become `list=[2,1,4,3,6,5]`
So I was wondering if there was a way to be able to swap the numbers more simply. Thank you. | ```
lst = [1,2,3,4,5,6] # As an example
for x in range(0, len(lst), 2):
    if x+1 == len(lst): # A fix for lists that have an odd length
        break
    lst[x], lst[x+1] = lst[x+1], lst[x]
```
This doesn't create a new list.
Edit: Tested and it's even faster than a list comprehension. |
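For reference, the same pairwise swap can also be written as a single extended-slice assignment. This sketch assumes an even-length list, because both slices must be the same size:

```python
lst = [1, 2, 3, 4, 5, 6]
# Swap the even-indexed and odd-indexed elements in one assignment.
lst[::2], lst[1::2] = lst[1::2], lst[::2]
print(lst)  # [2, 1, 4, 3, 6, 5]
```

For an odd-length list this raises a `ValueError`, so the loop above remains the more general option.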
All fields in ModelSerializer django rest framework | 30,274,591 | 6 | 2015-05-16T10:37:03Z | 30,277,903 | 7 | 2015-05-16T16:10:54Z | [
"python",
"django",
"django-rest-framework"
] | **models.py**:
```
class Car():
    producer = models.ForeignKey(Producer, blank=True, null=True,)
    color = models.CharField()
    car_model = models.CharField()
    doors = models.CharField()
```
**serializers.py**:
```
class CarSerializer(ModelSerializer):
    class Meta:
        model = Car
        fields = Car._meta.get_all_field_names()
```
So, here I want to use all fields. But I have an error:
**Field name `producer_id` is not valid for model `Car`.**
How to fix that?
Thanks! | According to the [Django REST Framework's Documentation on ModelSerializers](http://www.django-rest-framework.org/api-guide/serializers/#modelserializer):
> By default, all the model fields on the class will be mapped to corresponding serializer fields.
This is different than [Django's ModelForms](https://docs.djangoproject.com/en/1.8/topics/forms/modelforms/), which requires you to [specify the special attribute `'__all__'`](https://docs.djangoproject.com/en/1.8/topics/forms/modelforms/#selecting-the-fields-to-use) to utilize all model fields. Therefore, all that is necessary is to declare the model.
```
class CarSerializer(ModelSerializer):
    class Meta:
        model = Car
``` |
if/else statements accepting strings in both capital and lower-case letters in python | 30,277,347 | 7 | 2015-05-16T15:16:38Z | 30,277,370 | 8 | 2015-05-16T15:19:12Z | [
"python",
"if-statement",
"uppercase",
"lowercase",
"capitalize"
] | Is there a quick way for an "if" statement to accept a string regardless of whether it's lower-case, upper-case or both in python?
I'm attempting to write a piece of code where the number "3" can be entered as well as the word "three"or "Three" or any other mixture of capital and lower-case and it will still be accepted by the "if" statement in the code. I know that I can use "or" to get it to accept "3" as well as any other string however don't know how to get it to accept the string in more than one case. So far I have:
```
if (Class == "3" or Class=="three"):
    f=open("class3.txt", "a+")
``` | Just convert [`Class`](https://docs.python.org/2/tutorial/classes.html#classes) to lowercase using [`str.lower()`](https://docs.python.org/2/library/stdtypes.html#str.lower) and test it.
```
if Class == "3" or Class.lower() == "three":
    f=open("class3.txt", "a+")
```
Of course, you can also use [`str.upper()`](https://docs.python.org/2/library/stdtypes.html#str.upper) too.
```
if Class == "3" or Class.upper() == "THREE":
    f=open("class3.txt", "a+")
```
One last thing is that you can check for `"3"` and `"three"` at the same time using `in`.
```
if Class.lower() in {"3", "three"}:
    f=open("class3.txt", "a+")
```
When using `in` for an `if` statement, you have several options. You can use a set, `{"3", "three"}`, which I used, a list, `["3", "three"]`, or a tuple, `("3", "three")`.
One last thing to note is that calling [`str.lower()`](https://docs.python.org/2/library/stdtypes.html#str.lower) or [`str.upper()`](https://docs.python.org/2/library/stdtypes.html#str.upper) on `"3"` will give you `"3"`, but calling it on the integer `3` will throw an error, so you can't use `in` if `3` as an integer is a possible value for [`Class`](https://docs.python.org/2/tutorial/classes.html#classes).
if/else statements accepting strings in both capital and lower-case letters in python | 30,277,347 | 7 | 2015-05-16T15:16:38Z | 30,277,407 | 11 | 2015-05-16T15:22:36Z | [
"python",
"if-statement",
"uppercase",
"lowercase",
"capitalize"
] | Is there a quick way for an "if" statement to accept a string regardless of whether it's lower-case, upper-case or both in python?
I'm attempting to write a piece of code where the number "3" can be entered as well as the word "three"or "Three" or any other mixture of capital and lower-case and it will still be accepted by the "if" statement in the code. I know that I can use "or" to get it to accept "3" as well as any other string however don't know how to get it to accept the string in more than one case. So far I have:
```
if (Class == "3" or Class=="three"):
f=open("class3.txt", "a+")
You can use the `in` operator with a `list`.
```
if Class.lower() in ['3', 'three']:
```
---
Just for reference, `'3'.lower()` returns the string `'3'`.
```
>>> '3'.lower()
'3'
``` |
Apache Spark: How to use pyspark with Python 3 | 30,279,783 | 17 | 2015-05-16T19:19:06Z | 32,094,874 | 21 | 2015-08-19T11:57:53Z | [
"python",
"python-3.x",
"apache-spark"
] | I built Spark 1.4 from the GH development master, and the build went through fine. But when I do a `bin/pyspark` I get the Python 2.7.9 version. How can I change this? | Just set the environment variable:
`export PYSPARK_PYTHON=python3`
In case you want this change to be permanent, add this line to the `pyspark` script.
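For example, a couple of lines like these could go in `conf/spark-env.sh` or your shell profile; `PYSPARK_DRIVER_PYTHON` additionally controls which interpreter the driver uses (setting it is optional):

```shell
# Make PySpark workers use Python 3
export PYSPARK_PYTHON=python3
# Optionally make the driver use the same interpreter
export PYSPARK_DRIVER_PYTHON=python3
```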
Pythonic way to store top 10 results | 30,284,693 | 3 | 2015-05-17T08:04:56Z | 30,284,846 | 11 | 2015-05-17T08:23:16Z | [
"python"
] | I'm working on a python project that runs for a couple hours before it's finished all it's calculations. I would like to hold the top 10 results of the calculation as it progresses.
There's the obvious way of doing:
```
if calc > highest_calc:
    second_calc = highest_calc
    highest_calc = calc
if calc < highest_calc and calc > second_calc:
    third_calc = second_calc
    second_calc = calc
if calc < second_calc and calc > third_calc:
    fourth_calc = third_calc
    third_calc = calc
etc.
```
But is there a better, more dynamic and pythonic way?
**Bonus**
For my project, each calculation has three corresponding names with it: `name_a`, `name_b`, `name_c`. What I don't want is more than one of the top 10 values to have the same three names. But, if the last `calc` has the same names, I want to keep the highest of the two. What's the best way to do this?
For example, let's say `2.3` is the value of `calc`, using `MCD` `SBUX` and `CAT` to calculate `calc`. But what if I had already made a `calc` using `MCD` `SBUX` and `CAT` and it made it to the top too? How do I find the value of this `calc` so I can see if it is less than or greater than the new `calc`? If it is greater, remove the old `calc` with the same names and add the new `calc`. If it is less, `pass` the new `calc`. Hopefully that makes sense:
```
If name_a in top10 and name_b in top10 and name_c in top10:
    if calc > old_calc_with_same_names:
        add_calc = calc, name_a, name_b, name_c
        top10.insert(bisect.bisect(calc, top10[0]), add_calc)
    else:
        add to top10
```
**Finished Code**
```
csc = []
top_ports = []
add_sharpe = [sharpe, name_a, weight_a, exchange_a, name_b, weight_b, exchange_b, name_c, weight_c, exchange_c]
if init__calc == 0:
    csc.append(add_sharpe)
if init__calc > 1:
    if name_a == prev_name_a and name_b == prev_name_b and name_c == prev_name_c:
        csc.append(add_sharpe)
    if name_a != prev_name_a or name_b != prev_name_b or name_c != prev_name_c:
        if csc:
            hs = max(csc, key=lambda x: x[0])
            if top_ports:
                ls = min(top_ports, key=lambda x: x[0])
                if hs[0] > ls[0]:
                    hsi = csc.index(hs)
                    top_ports.append(csc[hsi])
            else:
                hsi = csc.index(hs)
                top_ports.append(csc[hsi])
        csc = []
        csc.append(add_sharpe)
```
Later on in the script...
```
top_ports = sorted(top_ports, key=itemgetter(0), reverse=True)
print "The highest sharpe is: {0}".format(top_ports[0])
print " ==============================================="
print " ==============================================="
print datetime.now() - startTime
print "Second: {0}".format(top_ports[1])
print "Third: {0}".format(top_ports[2])
print "Fourth: {0}".format(top_ports[3])
print "Fifth: {0}".format(top_ports[4])
```
etc. | The simplest way is to store all your scores in a list, then sort it in reverse (highest first) and take the first 10.
```
import random
# sample random scores
scores = [int(1000*random.random()) for x in xrange(100)]
# uncomment if scores must be unique
#scores = set(scores)
topten = sorted(scores, reverse=True)[:10]
print topten
```
If you need to prevent duplicate scores in the list, use a set.
This is the 'vanilla' method for getting the top 10 scores, but it misses the opportunity for an optimization, that will make a difference for larger data sets.
Namely, the entire list need not be sorted each time the top 10 are asked for, if the top-ten list is maintained as scores are added. For this, two lists could perhaps be maintained: the complete list and the top 10; for the latter, the `heapq` method suggested by @thijs van Dien is superior.
Pythonic way to store top 10 results | 30,284,693 | 3 | 2015-05-17T08:04:56Z | 30,285,245 | 7 | 2015-05-17T09:11:26Z | [
"python"
] | I'm working on a python project that runs for a couple hours before it's finished all it's calculations. I would like to hold the top 10 results of the calculation as it progresses.
There's the obvious way of doing:
```
if calc > highest_calc:
second_calc = highest_calc
highest_calc = calc
if calc < highest_calc and calc > second_calc:
third_calc = second_calc
second_calc = calc
if calc < second_calc and calc > third_calc:
fourth_calc = third_calc
third_calc = calc
etc.
```
But is there a better, more dynamic and pythonic way?
**Bonus**
For my project, each calculation has three corresponding names with it: `name_a`, `name_b`, `name_c`. What I don't want is more than one of the top 10 values to have the same three names. But, if the last `calc` has the same names, I want to keep the highest of the two. What's the best way to do this?
For example, let's say `2.3` is the value of `calc`, using `MCD` `SBUX` and `CAT` to calculate `calc`. But what if I had already made a `calc` using `MCD` `SBUX` and `CAT` and it made it to the top too? How do I find the value of this `calc` so I can see if it is less than or greater than the new `calc`? If it is greater, remove the old `calc` with the same names and add the new `calc`. If it is less, `pass` the new `calc`. Hopefully that makes sense:
```
If name_a in top10 and name_b in top10 and name_c in top10:
if calc > old_calc_with_same_names:
add_calc = calc, name_a, name_b, name_c
top10.insert(bisect.bisect(calc, top10[0]), add_calc)
else:
add to top10
```
**Finished Code**
```
csc = []
top_ports = []
add_sharpe = [sharpe, name_a, weight_a, exchange_a, name_b, weight_b, exchange_b, name_c, weight_c, exchange_c]
if init__calc == 0:
    csc.append(add_sharpe)
if init__calc > 1:
    if name_a == prev_name_a and name_b == prev_name_b and name_c == prev_name_c:
        csc.append(add_sharpe)
    if name_a != prev_name_a or name_b != prev_name_b or name_c != prev_name_c:
        if csc:
            hs = max(csc, key=lambda x: x[0])
            if top_ports:
                ls = min(top_ports, key=lambda x: x[0])
                if hs[0] > ls[0]:
                    hsi = csc.index(hs)
                    top_ports.append(csc[hsi])
            else:
                hsi = csc.index(hs)
                top_ports.append(csc[hsi])
        csc = []
        csc.append(add_sharpe)
```
Later on in the script...
```
top_ports = sorted(top_ports, key=itemgetter(0), reverse=True)
print "The highest sharpe is: {0}".format(top_ports[0])
print " ==============================================="
print " ==============================================="
print datetime.now() - startTime
print "Second: {0}".format(top_ports[1])
print "Third: {0}".format(top_ports[2])
print "Fourth: {0}".format(top_ports[3])
print "Fifth: {0}".format(top_ports[4])
```
etc. | Use the `heapq` module. Instead of needlessly storing all results, at every step it adds the new result and then efficiently removes the lowest (which may be the one just added), effectively keeping the top 10. Storing all results is not necessarily bad though; it can be valuable to collect statistics, and make it easier to determine what to keep afterwards.
```
from heapq import heappush, heappushpop
heap = []
for x in [18, 85, 36, 57, 2, 45, 55, 1, 28, 73, 95, 38, 89, 15, 7, 61]:
    calculation_result = x + 1 # Dummy calculation
    if len(heap) < 10:
        heappush(heap, calculation_result)
    else:
        heappushpop(heap, calculation_result)
top10 = sorted(heap, reverse=True) # [96, 90, 86, 74, 62, 58, 56, 46, 39, 37]
```
Note that this module has more useful functions to only request the highest/lowest value, et cetera. This may help you to add the behavior concerning names.
Actually this construct is so common that it is available as `heapq.nlargest`. However, to not store all your results after all, you'd have to model the calculator as a generator, which is a bit more advanced.
```
from heapq import nlargest
def calculate_gen():
    for x in [18, 85, 36, 57, 2, 45, 55, 1, 28, 73, 95, 38, 89, 15, 7, 61]:
        yield x + 1 # Dummy calculation
top10 = nlargest(10, calculate_gen()) # [96, 90, 86, 74, 62, 58, 56, 46, 39, 37]
```
**Bonus**
Here is an idea to make the results unique for each combination of associated names.
Using a heap is not going to cut it anymore, because a heap is not good at locating any item that is not the absolute minimum/maximum, and what we are interested in here is some kind of local minimum given the criteria of a name combination.
Instead, you can use a `dict` to keep the highest value for each name combination. First you need to encode the name combination as an immutable value for it to work as a key, and because the order of the names shouldn't matter, decide on some order and stick with it. I'm going with alphabetical strings to keep it simple.
In the code below, each result is put in the `dict` at a place that is unique for its name combination (therefore normalization might be needed), as long as there isn't a better result already. Later the top *n* is compiled from the highest results for each combination.
```
from heapq import nlargest
calculations = [('ABC', 18), ('CDE', 85), ('BAC', 36), ('CDE', 57),
                ('ECD', 2), ('BAD', 45), ('EFG', 55), ('DCE', 1)]
highest_per_name_combi = dict()
for name_combi, value in calculations:
    normal_name_combi = ''.join(sorted(name_combi)) # Slow solution
    current = highest_per_name_combi.get(normal_name_combi, float('-inf'))
    highest_per_name_combi[normal_name_combi] = max(value, current)
top3 = nlargest(3, highest_per_name_combi.iteritems(), key=lambda x: x[1])
```
The only problem with this approach might be the amount of memory used. Since with 150 names there can be 551300 (150 choose 3) combinations, you may have to decide to clean up the `dict` every now and then, which is simple. In the loop, check for the size of the `dict` and if it exceeds some (still large) number, compose the current top *n* and create a new, minimal `dict` from it. Also, some micro optimizations could be applied by reducing the number of lookups/calls, e.g. not using `get` and/or `max`.
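As an illustration (not part of the original answer), the periodic cleanup described here could be sketched as follows; the size threshold and names are placeholders:

```python
from heapq import nlargest

def compact(scores, top_n, max_size=500000):
    # Once the per-combination dict grows past max_size, throw away
    # everything except the current top_n entries.
    if len(scores) <= max_size:
        return scores
    return dict(nlargest(top_n, scores.items(), key=lambda kv: kv[1]))

best = {'ABC': 18, 'CDE': 85, 'BCD': 36, 'DEF': 57}
print(compact(best, top_n=2, max_size=3))  # {'CDE': 85, 'DEF': 57}
```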
All of this would be a lot easier if you'd have control over the order in which calculations are performed. If you'd know that the next 1000 calculations are all for the same name combination, you could just find the best of those first before adding it to the overall results.
Also, with a truly massive amount of results, the simplest way may actually be the best. Just write them to a file in a convenient format, sort them there (first by name combination, then reversely by value), take only the first occurrence for each name combination (easy when they are grouped) and sort the result again, just by value. |
Make requests using Python over Tor | 30,286,293 | 4 | 2015-05-17T11:17:02Z | 33,875,657 | 7 | 2015-11-23T16:17:50Z | [
"python",
"tor"
] | I want to make multiple GET requests using Tor to a webpage. I want to use a different ipaddress for each request.
```
import socks
import socket
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9150)
socket.socket = socks.socksocket
import requests
print (requests.get('http://icanhazip.com')).content
```
Using this, I made one request. How can I change the IP address to make another? | There are 2 aspects to your question -
1. Making requests using Tor
2. Renewing the connection as per requirement (in your case, after every request)
---
# Part 1
The first one is easy to do with the [`requesocks`](https://pypi.python.org/pypi/requesocks) library, which is basically a port of the [`requests`](http://docs.python-requests.org/en/latest/) library. Please note that the library is an old fork (last updated 2013-03-25) and may not have the same functionalities as the latest requests library.
**Installation** -
```
pip install requesocks
```
**Basic usage** -
```
import requesocks
session = requesocks.session()
# Tor uses the 9050 port as the default socks port
session.proxies = {'http': 'socks5://127.0.0.1:9050',
                   'https': 'socks5://127.0.0.1:9050'}
# Make a request through the Tor connection
# IP visible through Tor
print session.get("http://httpbin.org/ip").text
# Above should print an IP different than your public IP
# Following prints your normal public IP
import requests
print requests.get("http://httpbin.org/ip").text
```
---
# Part 2
To renew the Tor IP, i.e. to have a fresh visible exit IP, you need to be able to connect to the Tor service through it's `ControlPort` and then send a `NEWNYM` signal.
Normal Tor installation does not enable the `ControlPort` by default. You'll have to edit your [torrc file](https://www.torproject.org/docs/faq.html.en#torrc) and uncomment the corresponding lines.
```
ControlPort 9051
## If you enable the controlport, be sure to enable one of these
## authentication methods, to prevent attackers from accessing it.
HashedControlPassword 16:05834BCEDD478D1060F1D7E2CE98E9C13075E8D3061D702F63BCD674DE
```
Please note that the `HashedControlPassword` above is for the password `"password"`. If you want to set a different password, replace the `HashedControlPassword` in the torrc by noting the output from `tor --hash-password "<new_password>"` where `<new_password>` is the password that you want to set.
Okay, so now that we have Tor configured properly, you will have to restart Tor if it is already running.
```
sudo service tor restart
```
Tor should now be up & running on the 9051 `ControlPort` through which we can send commands to it. I prefer to use the [official stem library](https://stem.torproject.org/index.html) to control Tor.
**Installation -**
```
pip install stem
```
You may now renew the Tor IP by calling the following function.
```
from stem import Signal
from stem.control import Controller
# signal TOR for a new connection
def renew_connection():
    with Controller.from_port(port = 9051) as controller:
        controller.authenticate(password="password")
        controller.signal(Signal.NEWNYM)
```
To verify that Tor has a new exit IP, just rerun the relevant code from Part 1.
```
print session.get("http://httpbin.org/ip").text
``` |
Python - Find same values in a list and group together a new list | 30,293,071 | 3 | 2015-05-17T23:11:24Z | 30,293,132 | 15 | 2015-05-17T23:20:44Z | [
"python",
"list",
"duplicates",
"append"
] | I'm stuck figuring this out and wonder if anyone could point me in the right direction...
From this list:
```
N = [1,2,2,3,3,3,4,4,4,4,5,5,5,5,5]
```
I'm trying to create:
```
L = [[1],[2,2],[3,3,3],[4,4,4,4],[5,5,5,5,5]]
```
Any value which is found to be the same is grouped into its own sublist.
Here is my attempt so far, I'm thinking I should use a `while` loop?
```
global n
n = [1,2,2,3,3,3,4,4,4,4,5,5,5,5,5] #Sorted list
l = [] #Empty list to append values to
def compare(val):
""" This function receives index values
from the n list (n[0] etc) """
global valin
valin = val
global count
count = 0
for i in xrange(len(n)):
if valin == n[count]: # If the input value i.e. n[x] == n[iteration]
temp = valin, n[count]
l.append(temp) #append the values to a new list
count +=1
else:
count +=1
for x in xrange (len(n)):
compare(n[x]) #pass the n[x] to compare function
``` | Keep calm and use [`itertools.groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby):
```
from itertools import groupby
N = [1,2,2,3,3,3,4,4,4,4,5,5,5,5,5]
print([list(j) for i, j in groupby(N)])
```
**Output:**
```
[[1], [2, 2], [3, 3, 3], [4, 4, 4, 4], [5, 5, 5, 5, 5]]
```
Side note: Avoid using global variables when you don't **need** to.
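One caveat worth adding: `groupby` only groups *consecutive* equal values (the question's list is already sorted). For unsorted input, sort first:

```python
from itertools import groupby

M = [3, 1, 2, 3, 2, 1]
grouped = [list(j) for i, j in groupby(sorted(M))]
print(grouped)  # [[1, 1], [2, 2], [3, 3]]
```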
Calling Different Functions in Python Based on Values in a List | 30,293,680 | 2 | 2015-05-18T00:53:12Z | 30,293,700 | 7 | 2015-05-18T00:57:00Z | [
"python",
"list",
"python-2.7"
] | I have a script that takes a list of metrics as an input, and then fetches those metrics from the database to perform various operations with them.
My problem is that different clients get different subsets of the metrics, but I don't want to write a new IF block every time we add a new client. So right now, I have a large IF block that calls different functions based on whether the corresponding metric is in the list. What is the most elegant or *Pythonic* way of handling this?
Setup and function definitions:
```
clientOne = ['churn','penetration','bounce']
clientTwo = ['engagement','bounce']
def calcChurn(clientId):
    churn = cursor.execute(sql to get churn)
    [...]
    return churn

def calcEngagement(clientId):
    engagement = cursor.execute(sql to get engagement)
    [...]
    return engagement
```
Imagine three other functions in a similar format, so there is one function that corresponds to each unique metric. Now here is the block of code in the script that takes the list of metrics:
```
def scriptName(client, clientId):
    if 'churn' in client:
        churn = calcChurn(clientId)
    if 'engagement' in client:
        engagement = calcEngagement(clientId)
    if 'penetration' in client:
        [...]
``` | Generally, you'd create a mapping of names to functions and use that to calculate the stuff you want:
```
client_action_map = {
    'churn': calcChurn,
    'engagement': calcEngagement,
    ...
}

def scriptName(actions, clientId):
    results = {}
    for action in actions:
        results[action] = client_action_map[action](clientId)
    return results
``` |
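A self-contained sketch of this pattern, with the SQL calls stubbed out (function and metric names here are illustrative):

```python
def calc_churn(client_id):
    return 'churn for %s' % client_id  # stand-in for the real SQL query

def calc_engagement(client_id):
    return 'engagement for %s' % client_id  # stand-in

action_map = {
    'churn': calc_churn,
    'engagement': calc_engagement,
}

def run_metrics(actions, client_id):
    # Dispatch each requested metric name to its function.
    return {action: action_map[action](client_id) for action in actions}

print(run_metrics(['engagement', 'churn'], 42))
```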
Python: Fastest way to process large file | 30,294,146 | 5 | 2015-05-18T02:10:47Z | 30,294,434 | 12 | 2015-05-18T02:56:08Z | [
"python",
"file",
"python-2.7",
"filereader"
] | I have multiple 3GB tab delimited files. There are 20 million rows in each file. All the rows have to be independently processed, no relation between any two rows.
My question is, what will be faster? A. Reading line by line using:
```
with open() as infile:
    for line in infile:
```
Or B. Reading the file into memory in chunks and processing it, say 250 MB at a time?
The processing is not very complicated; I am just grabbing the value in column 1 into List1, column 2 into List2, etc. Might need to add some column values together.
I am using python 2.7 on a linux box that has 30GB of memory. ASCII Text.
Any way to speed things up in parallel? Right now I am using the former method and the process is very slow. Is using any CSVReader module going to help?
I don't have to do it in python, any other language or database use ideas are welcome.
Thanks.
| It sounds like your code is I/O bound. This means that multiprocessing isn't going to help: if you spend 90% of your time reading from disk, having an extra 7 processes waiting on the next read isn't going to help anything.
And, while using a CSV reading module (whether the stdlib's `csv` or something like NumPy or Pandas) may be a good idea for simplicity, it's unlikely to make much difference to performance.
Still, it's worth checking that you really *are* I/O bound, instead of just guessing. Run your program and see whether your CPU usage is close to 0% or close to 100% of a core. Do what Amadan suggested in a comment, and run your program with just `pass` for the processing and see whether that cuts off 5% of the time or 70%. You may even want to try comparing with a loop over `os.open` and `os.read(1024*1024)` or something and see if that's any faster.
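One way to get that I/O floor for comparison (this helper is an illustration, not from the original answer): read the whole file in big chunks, do nothing with the data, and time it against the full run.

```python
import time

def time_raw_read(path, bufsize=1024 * 1024):
    # Read the file in large chunks with no processing; the elapsed time
    # approximates the best case any line-by-line loop can reach.
    start = time.time()
    with open(path, 'rb') as f:
        while f.read(bufsize):
            pass
    return time.time() - start
```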
---
Since you're using Python 2.x, Python is relying on the C stdio library to guess how much to buffer at a time, so it might be worth forcing it to buffer more. The simplest way to do that is to use `readlines(bufsize)` for some large `bufsize`. (You can try different numbers and measure them to see where the peak is. In my experience, usually anything from 64K-8MB is about the same, but depending on your system that may be different, especially if you're, e.g., reading off a network filesystem with great throughput but horrible latency that swamps the throughput-vs.-latency of the actual physical drive and the caching the OS does.)
So, for example:
```
bufsize = 65536
with open(path) as infile:
    while True:
        lines = infile.readlines(bufsize)
        if not lines:
            break
        for line in lines:
            process(line)
```
---
Meanwhile, assuming you're on a 64-bit system, you may want to try using [`mmap`](https://docs.python.org/2/library/mmap.html) instead of reading the file in the first place. This certainly isn't *guaranteed* to be better, but it *may* be better, depending on your system. For example:
```
with open(path) as infile:
    m = mmap.mmap(infile.fileno(), 0, access=mmap.ACCESS_READ)
```
A Python `mmap` is sort of a weird object: it acts like a `str` and like a `file` at the same time, so you can, e.g., manually iterate scanning for newlines, or you can call `readline` on it as if it were a file. Both of those will take more processing from Python than iterating the file as lines or doing batch `readlines` (because a loop that would be in C is now in pure Python... although maybe you can get around that with `re`, or with a simple Cython extension?)... but the I/O advantage of the OS knowing what you're doing with the mapping may swamp the CPU disadvantage.
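A sketch of that manual iteration, wrapped in a hypothetical helper (note that `mmap.mmap` takes the file *descriptor*, and lines come back with their trailing newlines):

```python
import mmap

def iter_mmap_lines(path):
    """Yield lines from a memory-mapped file, like iterating a file object."""
    with open(path, "rb") as infile:
        m = mmap.mmap(infile.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            while True:
                line = m.readline()
                if not line:  # empty result means end of the map
                    break
                yield line
        finally:
            m.close()
```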
Unfortunately, Python doesn't expose the [`madvise`](http://man7.org/linux/man-pages/man2/madvise.2.html) call that you'd use to tweak things in an attempt to optimize this in C (e.g., explicitly setting `MADV_SEQUENTIAL` instead of making the kernel guess, or forcing transparent huge pages), but you can actually `ctypes` the function out of `libc`. |
Any elegant way to add a method to an existing object in python? | 30,294,458 | 14 | 2015-05-18T03:01:22Z | 30,294,947 | 38 | 2015-05-18T04:07:50Z | [
"python"
] | After a lot of searching, I have found that there are a few ways to add a bound method or unbound class method to an existing instance object.
Such ways include the approaches the code below takes.
```
import types
class A(object):
    pass

def instance_func(self):
    print 'hi'

def class_func(self):
    print 'hi'
a = A()
# add bound methods to an instance using type.MethodType
a.instance_func = types.MethodType(instance_func, a) # using attribute
a.__dict__['instance_func'] = types.MethodType(instance_func, a) # using __dict__
# add bound methods to an class
A.instance_func = instance_func
A.__dict__['instance_func'] = instance_func
# add class methods to an class
A.class_func = classmethod(class_func)
A.__dict__['class_func'] = classmethod(class_func)
```
What annoys me is typing the function's name, `instance_func` or `class_func`, twice.
Is there any simple way to add an existing function to a class or instance without typing the function's name again?
For example,
`A.add_function_as_bound_method(f)` would be a far more elegant way to add an existing function to an instance or class, since the function already has a `__name__` attribute. | Normally, functions stored in object dictionaries don't automatically turn into bound methods when you look them up with dotted access.
That said, you can use [*functools.partial*](https://docs.python.org/2.7/library/functools.html#functools.partial) to pre-bind the function and store it in the object dictionary so it can be accessed like a method:
```
>>> from functools import partial
>>> class Dog:
        def __init__(self, name):
            self.name = name
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> def bark(self): # normal function
        print('Woof! %s is barking' % self.name)
>>> e.bark = partial(bark, e) # pre-bound and stored in the instance
>>> e.bark() # access like a normal method
Woof! Buddy is barking
```
This is a somewhat elegant way to add a method to an existing object (without needing to change its class and without affecting other existing objects).
**Follow-up to Comment:**
You can use a helper function to add the pre-bound function in a single step:
```
>>> def add_method(obj, func):
        'Bind a function and store it in an object'
        setattr(obj, func.__name__, partial(func, obj))
```
Use it like this:
```
>>> add_method(e, bark)
>>> e.bark()
Woof! Buddy is barking
```
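An equivalent sketch of the same helper using `types.MethodType` instead of `partial`, which attaches a real bound method:

```python
import types

def add_method(obj, func):
    'Bind a function to an object as a bound method'
    setattr(obj, func.__name__, types.MethodType(func, obj))
```

Either variant only affects the one instance; other instances of the class are untouched.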
Hope this is exactly what you need :-) |
How to check if a key exists in a word2vec trained model or not | 30,301,922 | 7 | 2015-05-18T11:24:45Z | 31,112,810 | 7 | 2015-06-29T10:02:48Z | [
"python",
"gensim",
"word2vec"
] | I have trained a word2vec model using a corpus of documents with Gensim. Once the model is trained, I write the following piece of code to get the raw feature vector of a word, say "view".
```
myModel["view"]
```
However, I get a KeyError for the word, which is probably because this doesn't exist as a key in the list of keys indexed by word2vec. How can I check if a key exists in the index before trying to get the raw feature vector? | Word2Vec also provides a 'vocab' member, which you can access directly.
Using a Pythonic approach:
```
if word in w2v_model.vocab:
# Do something
``` |
Python: Combination in lists of lists (?) | 30,303,053 | 5 | 2015-05-18T12:18:34Z | 30,303,159 | 13 | 2015-05-18T12:24:38Z | [
"python",
"list",
"graph",
"vertices",
"edges"
] | First of all I wanted to say that my title is probably not describing my question correctly. I don't know how the process I am trying to accomplish is called, which made searching for a solution on stackoverflow or google very difficult. A hint regarding this could already help me a lot!
What I currently have are basically two lists with lists as elements.
Example:
```
List1 = [ [a,b], [c,d,e], [f] ]
List2 = [ [g,h,i], [j], [k,l] ]
```
These lists are basically vertices of a graph I am trying to create later in my project, where the edges are supposed to be from List1 to List2 by rows.
If we look at the first row of each of the lists, I therefore have:
```
[a,b] -> [g,h,i]
```
However, I want to have assingments/edges of unique elements, so I need:
```
[a] -> [g]
[a] -> [h]
[a] -> [i]
[b] -> [g]
[b] -> [h]
[b] -> [i]
```
The result I want to have is another list, with these unique assignments as elements, i.e.
```
List3 = [ [a,g], [a,h], [a,i], [b,g], ...]
```
Is there any elegant way to get from List1 and List2 to List 3?
The way I wanted to accomplish that is by going row by row, determining the number of elements in each row and then writing clauses and loops to create a new list with all possible combinations. This, however, feels like a very inefficient way to do it. | You can `zip` your two lists, then use `itertools.product` to create each of your combinations. You can use `itertools.chain.from_iterable` to flatten the resulting list.
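The zip + product approach can also be spelled out with explicit loops; here is a sketch producing the list-of-lists shape asked for in the question:

```python
import itertools

List1 = [['a', 'b'], ['c', 'd', 'e'], ['f']]
List2 = [['g', 'h', 'i'], ['j'], ['k', 'l']]

# zip pairs up the rows; product forms every (left, right) combination per row
List3 = []
for a, b in zip(List1, List2):
    for pair in itertools.product(a, b):
        List3.append(list(pair))

print(List3[:3])  # [['a', 'g'], ['a', 'h'], ['a', 'i']]
```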
```
>>> import itertools
>>> List1 = [ ['a','b'], ['c','d','e'], ['f'] ]
>>> List2 = [ ['g','h','i'], ['j'], ['k','l'] ]
>>> list(itertools.chain.from_iterable(itertools.product(a,b) for a,b in zip(List1, List2)))
[('a', 'g'), ('a', 'h'), ('a', 'i'), ('b', 'g'), ('b', 'h'), ('b', 'i'), ('c', 'j'), ('d', 'j'), ('e', 'j'), ('f', 'k'), ('f', 'l')]
``` |
no such column: django_content_type.name | 30,305,188 | 7 | 2015-05-18T13:57:09Z | 30,316,658 | 10 | 2015-05-19T04:15:14Z | [
"python",
"django",
"apache",
"django-oscar"
] | I had django-oscar's sample app sandbox deployed on my website at example.com. I wanted to move that to example.com:8000 and run another project at the example.com URL. I successfully did the second part, and when you enter example.com, you can see the newer django project up and running, but the thing is, the first django project, which was django-oscar's sandbox, won't respond properly.
when you enter example.com:8000, you see the current debug log:
```
no such column: django_content_type.name
Request Method: GET
Request URL: http://example.com:8000/fa/
Django Version: 1.7.8
Exception Type: OperationalError
Exception Value:
no such column: django_content_type.name
Exception Location: /usr/local/lib/python2.7/dist-packages/django/db/backends/sqlite3/base.py in execute, line 485
Python Executable: /usr/bin/python
Python Version: 2.7.3
Python Path:
['/var/www/setak/setakenv/setakmain/django-oscar/sites/sandbox',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-linux2',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages/PIL',
'/usr/lib/python2.7/dist-packages/gst-0.10',
'/usr/lib/python2.7/dist-packages/gtk-2.0',
'/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
'/usr/lib/python2.7/dist-packages/ubuntuone-client',
'/usr/lib/python2.7/dist-packages/ubuntuone-control-panel',
'/usr/lib/python2.7/dist-packages/ubuntuone-couch',
'/usr/lib/python2.7/dist-packages/ubuntuone-installer',
'/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol']
```
Now I googled this error and didn't get any valuable results.
Also, when I run
```
sudo python manage.py migrate
```
the following happens, which also I did not find any correct resolution to fix:
```
Operations to perform:
Synchronize unmigrated apps: reports_dashboard, treebeard, oscar, communications_dashboard, reviews_dashboard, debug_toolbar, widget_tweaks, offers_dashboard, catalogue_dashboard, sitemaps, compressor, django_extensions, dashboard, thumbnail, haystack, ranges_dashboard, checkout, gateway, django_tables2
Apply all migrations: customer, promotions, shipping, wishlists, offer, admin, sessions, contenttypes, auth, payment, reviews, analytics, catalogue, flatpages, sites, address, basket, partner, order, voucher
Synchronizing apps without migrations:
Creating tables...
Installing custom SQL...
Installing indexes...
Running migrations:
No migrations to apply.
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/migrate.py", line 165, in handle
emit_post_migrate_signal(created_models, self.verbosity, self.interactive, connection.alias)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/sql.py", line 268, in emit_post_migrate_signal
using=db)
File "/usr/local/lib/python2.7/dist-packages/django/dispatch/dispatcher.py", line 198, in send
response = receiver(signal=self, sender=sender, **named)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/auth/management/__init__.py", line 83, in create_permissions
ctype = ContentType.objects.db_manager(using).get_for_model(klass)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/contenttypes/models.py", line 58, in get_for_model
" is migrated before trying to migrate apps individually."
RuntimeError: Error creating new content types. Please make sure contenttypes is migrated before trying to migrate apps individually.
```
update 1: I am also using django 1.7.8 for this.
update 2: I changed my version of Django to 1.8.1, and the makemigrations and migrate commands worked. Then I ran the server again, and now I get this error in my log: (I also deleted the Apache settings since they were irrelevant!)
```
Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/handlers.py", line 63, in __call__
return self.application(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 170, in __call__
self.load_middleware()
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 50, in load_middleware
mw_class = import_string(middleware_path)
File "/usr/local/lib/python2.7/dist-packages/django/utils/module_loading.py", line 26, in import_string
module = import_module(module_path)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named transaction
``` | I could finally solve my problem. The problem was that I used django 1.8 to run manage.py migrate and the project was developed under django 1.7.1, so the migrations broke things up.
So what I did was that I installed django 1.8.1 again, ran manage.py migrate contenttypes 0001, then uninstalled django 1.8.1, installed django 1.7.8 and ran manage.py runserver, and everything went back to normal and worked fine. |
"pip install --editable ./" vs "python setup.py develop" | 30,306,099 | 26 | 2015-05-18T14:37:47Z | 30,306,403 | 19 | 2015-05-18T14:51:04Z | [
"python",
"pip",
"setuptools"
] | Is there any significant difference between
```
pip install -e /path/to/mypackage
```
and the setuptools variant?
```
python /path/to/mypackage/setup.py develop
``` | There is no big difference.
> With `pip install -e` for local projects, the "SomeProject.egg-info" directory is created
> relative to the project path. This is one advantage over just using
> setup.py develop, which creates the "egg-info" directly relative to the
> current working directory.
> More: [docs](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)
Also read the setuptools' [docs](https://pythonhosted.org/setuptools/setuptools.html#development-mode). |
"pip install --editable ./" vs "python setup.py develop" | 30,306,099 | 26 | 2015-05-18T14:37:47Z | 31,987,067 | 19 | 2015-08-13T11:34:54Z | [
"python",
"pip",
"setuptools"
] | Is there any significant difference between
```
pip install -e /path/to/mypackage
```
and the setuptools variant?
```
python /path/to/mypackage/setup.py develop
``` | One more difference: `pip install -e` uses wheel while
`python setup.py develop`
doesn't use it.
With `install`, you could achieve the same behavior by using
`pip install -e /path/to/package --no-use-wheel`
More info on wheels: [python wheels](http://pythonwheels.com/) |
Pass Python list to embedded Rust function | 30,312,885 | 9 | 2015-05-18T21:11:07Z | 30,313,295 | 12 | 2015-05-18T21:44:00Z | [
"python",
"rust",
"ctypes"
] | I am learning how to embed Rust functions in Python, and everything works fine if my inputs are `int`s, but not lists.
If my `lib.rs` file is:
```
#[no_mangle]
pub extern fn my_func(x: i32, y: i32) -> i32 {
    return x + y;
}
```
I can use this as follows:
```
In [1]: from ctypes import cdll
In [2]: lib = cdll.LoadLibrary("/home/user/RustStuff/embed/target/release/libembed.so")
In [3]: lib.my_func(5,6)
Out[3]: 11
```
However if I change my `lib.rs` to:
```
#[no_mangle]
pub extern fn my_func(my_vec: Vec<i32>) -> i32 {
    let mut my_sum = 0;
    for i in my_vec {
        my_sum += i;
    }
    return my_sum;
}
```
I can no longer use it in Python (this compiled fine):
```
In [1]: from ctypes import cdll
In [2]: lib = cdll.LoadLibrary("/home/user/RustStuff/embed/target/release/libembed.so")
In [3]: lib.my_func([2,3,4])
---------------------------------------------------------------------------
ArgumentError Traceback (most recent call last)
<ipython-input-3-454ffc5ba9dd> in <module>()
----> 1 lib.my_func([2,3,4])
ArgumentError: argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1
```
The reason I thought this could work is that Python's `list` and Rust's `Vec` are both *dynamic arrays*, but apparently I am missing something here...
Why does my attempt not work? What should I do to fix it? | Don't do this:
```
#[no_mangle]
pub extern fn my_func(my_vec: Vec<i32>) -> i32 { ... }
```
You basically *never* want to accept or return an arbitrary Rust object in an `extern` function, only ones that are [`Repr`](http://doc.rust-lang.org/std/raw/trait.Repr.html). Instead, you should accept something that is representable by C. As [6502 says](http://stackoverflow.com/a/30312961/155423), the best idea for this particular case would be to accept a pointer and a length.
Rust's `Vec` is conceptually a pointer to data, a count, **and a capacity**. You are able to modify a `Vec` by adding or removing objects, which can cause reallocation to happen. This is doubly bad because it is likely that Python and Rust use different allocators that are not compatible with each other. Segfaults lie this way! You really want a *slice*.
Instead, do something like this on the Rust side:
```
extern crate libc;
use libc::{size_t,int32_t};
use std::slice;
#[no_mangle]
pub extern fn my_func(data: *const int32_t, length: size_t) -> int32_t {
    let nums = unsafe { slice::from_raw_parts(data, length as usize) };
    nums.iter().fold(0, |acc, i| acc + i)
}
```
Namely, you are using the guaranteed-to-match C types, and then converting the pointer and length to something Rust knows how to deal with.
I'm no Pythonista, but this cobbled-together code (with help from [How do I convert a Python list into a C array by using ctypes?](http://stackoverflow.com/questions/4145775/how-do-i-convert-a-python-list-into-a-c-array-by-using-ctypes)) seems to work with the Rust I have above:
```
import ctypes
lib = ctypes.cdll.LoadLibrary("./target/debug/libpython.dylib")
lib.my_func.argtypes = (ctypes.POINTER(ctypes.c_int32), ctypes.c_size_t)
list_to_sum = [1,2,3,4]
c_array = (ctypes.c_int32 * len(list_to_sum))(*list_to_sum)
print lib.my_func(c_array, len(list_to_sum))
```
Of course, you probably want to wrap that to make it nicer for the caller of your code. |
How to remove single space between text | 30,318,807 | 4 | 2015-05-19T07:03:35Z | 30,318,989 | 9 | 2015-05-19T07:13:15Z | [
"python"
] | This is the input given `"S H A N N O N  B R A D L E Y"`
(single space between each letter but 2 spaces between SHANNON and BRADLEY)
I want output to be in this format (given below)
`SHANNON BRADLEY`
Any way to do this in R or Python? | Try this in `R`:
```
text <- c("S H A N N O N  B R A D L E Y")
gsub(" (?! )" , text , replacement = "" , perl = T)
```
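Since the question also allows Python, the same lookahead works there with `re.sub` (a sketch; note the two spaces between the names in the input):

```python
import re

text = "S H A N N O N  B R A D L E Y"
# Remove every space that is NOT followed by another space; the double
# space between the names therefore collapses to a single space.
print(re.sub(r" (?! )", "", text))  # SHANNON BRADLEY
```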
This is another, simpler one in R:
```
gsub("\\b " , text , replacement = "" , perl = T)
``` |
Deploying a minimal flask app in docker - server connection issues | 30,323,224 | 12 | 2015-05-19T10:36:26Z | 30,329,547 | 23 | 2015-05-19T15:13:17Z | [
"python",
"deployment",
"flask",
"docker",
"dockerfile"
] | I have an app who's only dependency is flask, which runs fine outside docker and binds to the default port `5000`. Here is the full source:
```
from flask import Flask
app = Flask(__name__)
app.debug = True
@app.route('/')
def main():
    return 'hi'

if __name__ == '__main__':
    app.run()
```
The problem is that when I deploy this in docker, the server is running but is unreachable from outside the container.
Below is my Dockerfile. The image is ubuntu with flask installed. The tar just contains the `index.py` listed above;
```
# Dockerfile
FROM dreen/flask
MAINTAINER dreen
WORKDIR /srv
# Get source
RUN mkdir -p /srv
COPY perfektimprezy.tar.gz /srv/perfektimprezy.tar.gz
RUN tar x -f perfektimprezy.tar.gz
RUN rm perfektimprezy.tar.gz
# Run server
EXPOSE 5000
CMD ["python", "index.py"]
```
Here are the steps I am doing to deploy
`$> sudo docker build -t perfektimprezy .`
As far as I know the above runs fine, the image has the contents of the tar in `/srv`. Now, let's start the server in a container:
```
$> sudo docker run -i -p 5000:5000 -d perfektimprezy
1c50b67d45b1a4feade72276394811c8399b1b95692e0914ee72b103ff54c769
```
Is it actually running?
```
$> sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c50b67d45b1 perfektimprezy:latest "python index.py" 5 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp loving_wozniak
$> sudo docker logs 1c50b67d45b1
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
```
Yep, seems like the flask server is running. Here is where it gets weird. Let's make a request to the server:
```
$> curl 127.0.0.1:5000 -v
* Rebuilt URL to: 127.0.0.1:5000/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 127.0.0.1:5000
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server
```
Empty reply... But is the process running?
```
$> sudo docker top 1c50b67d45b1
UID PID PPID C STIME TTY TIME CMD
root 2084 812 0 10:26 ? 00:00:00 python index.py
root 2117 2084 0 10:26 ? 00:00:00 /usr/bin/python index.py
```
Now let's ssh into the server and check...
```
$> sudo docker exec -it 1c50b67d45b1 bash
root@1c50b67d45b1:/srv# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:47677 127.0.0.1:5000 TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
root@1c50b67d45b1:/srv# curl -I 127.0.0.1:5000
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5447
Server: Werkzeug/0.10.4 Python/2.7.6
Date: Tue, 19 May 2015 12:18:14 GMT
```
It's fine... but not from the outside :( What am I doing wrong? | The problem is you are only binding to the localhost interface; you should be binding to `0.0.0.0` if you want the container to be accessible from outside. If you change:
```
if __name__ == '__main__':
    app.run()
```
to
```
if __name__ == '__main__':
    app.run(host='0.0.0.0')
```
It should work. |
Is there a way to see if a string contains any symbols in Python? | 30,324,204 | 2 | 2015-05-19T11:22:09Z | 30,324,294 | 7 | 2015-05-19T11:26:28Z | [
"python",
"python-3.x"
] | I don't mean specific characters, I just mean anything that isn't alphanumeric. I've tried asking if the string contains only alphabetic and numeric characters like so:
```
if userInput.isalpha() and userInput.isdigit() == False:
    print ("Not valid, contains symbols or spaces")
```
but this doesn't work and denies all passwords I put in. | Firstly, a string can't be entirely alphabetic *and* entirely numeric at the same time, so `isalpha() and isdigit()` is always false.
Secondly, the methods `isalpha()` and `isdigit()` evaluate to `True` or `False`, so there is no need to use `== False`.
I would suggest using `.isalnum()`.
If that doesn't satisfy your requirements, you should use a regex.
`isalnum()` checks for both: <https://docs.python.org/2/library/stdtypes.html#str.isalnum>, and it works in Python 3.x and 2.
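For instance, a minimal validation sketch (the function name is illustrative):

```python
def is_valid_password(user_input):
    # True only when non-empty and every character is a letter or a digit
    return user_input.isalnum()

print(is_valid_password("hunter2"))     # True
print(is_valid_password("pass word!"))  # False
```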
Example:
```
>>> 'sometest'.isalnum()
True
>>> 'some test'.isalnum()
False
>>> 'sometest231'.isalnum()
True
>>> 'sometest%231'.isalnum()
False
>>> '231'.isalnum()
True
``` |
Is there a way to see if a string contains any symbols in Python? | 30,324,204 | 2 | 2015-05-19T11:22:09Z | 30,324,335 | 7 | 2015-05-19T11:28:01Z | [
"python",
"python-3.x"
] | I don't mean specific characters, I just mean anything that isn't alphanumeric. I've tried asking if the string contains only alphabetic and numeric characters like so:
```
if userInput.isalpha() and userInput.isdigit() == False:
    print ("Not valid, contains symbols or spaces")
```
but this doesn't work and denies all passwords I put in. | You have three problems:
* `if a and b == False` is **not** the same is `if a == False and b == False`;
* `if b == False` should be written `if not b`; and
* You aren't using [`str.isalnum`](https://docs.python.org/3/library/stdtypes.html#str.isalnum), which saves you from the problem anyway.
So it should be:
```
if not userInput.isalnum():
``` |
Python create new column in pandas df with random numbers from range | 30,327,417 | 4 | 2015-05-19T13:41:41Z | 30,327,470 | 10 | 2015-05-19T13:43:59Z | [
"python",
"random"
] | I have a pandas data frame with 50k rows. I'm trying to add a new column that is a randomly generated number from 1 to 5.
If I want 50k random numbers I'd use:
```
df1['randNumCol'] = random.sample(xrange(50000), len(df1))
```
but for this I'm not sure how to do it.
Side note: in R, I'd do:
```
sample(1:5, 50000, replace = TRUE)
```
Any suggestions? | One solution is to use `np.random.choice`:
```
import numpy as np
df1['randomNumCol'] = np.random.choice(range(1, 6), df1.shape[0])
```
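If you'd rather avoid the NumPy dependency entirely, a plain-stdlib sketch of the same idea (then assign the resulting list to the new column as usual):

```python
import random

n = 50000  # use len(df1) in practice
rand_col = [random.randint(1, 5) for _ in range(n)]  # 1..5 inclusive, like R's sample(1:5, ...)
```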
In order to make the results reproducible you can set the seed with `np.random.seed(42)`. |
How to draw rounded line ends using matplotlib | 30,327,732 | 3 | 2015-05-19T13:54:48Z | 30,328,238 | 7 | 2015-05-19T14:17:09Z | [
"python",
"numpy",
"matplotlib",
"polar-coordinates"
] | Say I am plotting a complex value like this:
```
a=-0.49+1j*1.14
plt.polar([0,angle(x)],[0,abs(x)],linewidth=5)
```
Giving

Is there a setting I can use to get rounded line ends, like the red line in the following example (drawn in paint)?
 | The line proprty `solid_capstyle` ([docs](http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D.set_solid_capstyle)). There is also a `dash_capstyle` which controls the line ends on every dash.
```
import matplotlib.pyplot as plt
import numpy as np
x = y = np.arange(5)
fig, ax = plt. subplots()
ln, = ax.plot(x, y, lw=10, solid_capstyle='round')
ln2, = ax.plot(x, 4-y, lw=10)
ln2.set_solid_capstyle('round')
ax.margins(.2)
```

This will work equally well with `plt.polar`, which is a convenience method for creating a polar axes and calling `plot` on it, and with the `Line2D` object returned by it. |
Python Pandas : group by in group by and average? | 30,328,646 | 9 | 2015-05-19T14:33:49Z | 30,328,738 | 10 | 2015-05-19T14:37:03Z | [
"python",
"pandas",
"group-by",
"mean"
] | I have a df like this:
```
cluster org time
1 a 8
1 a 6
2 h 34
1 c 23
2 d 74
3 w 6
```
I would like to calculate the average of time per org per cluster.
Expected result:
```
cluster mean(time)
1 15 (((8+6)/2)+23)/2
2 54 (74+34)/2
3 6
```
I do not know how to do it in Pandas, can anybody help? | If you want to first take mean on `['cluster', 'org']` combination and then again take mean on `cluster` groups
```
In [59]: (df.groupby(['cluster', 'org'], as_index=False).mean()
           .groupby('cluster')['time'].mean())
Out[59]:
cluster
1    15
2    54
3     6
Name: time, dtype: int64
```
If you want mean values by `cluster` only, then you could
```
In [58]: df.groupby(['cluster']).mean()
Out[58]:
              time
cluster
1        12.333333
2        54.000000
3         6.000000
```
You could `groupby` on `['cluster', 'org']` and then take `mean()`
```
In [57]: df.groupby(['cluster', 'org']).mean()
Out[57]:
             time
cluster org
1       a       7
        c      23
2       d      74
        h      34
3       w       6
``` |
Fastest save and load options for a numpy array | 30,329,726 | 4 | 2015-05-19T15:20:34Z | 30,330,699 | 7 | 2015-05-19T16:05:07Z | [
"python",
"arrays",
"performance",
"numpy",
"io"
] | I have a script that generates two-dimensional `numpy` `array`s with `dtype=float` and shape on the order of `(1e3, 1e6)`. Right now I'm using `np.save` and `np.load` to perform IO operations with the arrays. However, these functions take several seconds for each array. Are there faster methods for saving and loading the entire arrays (i.e., without making assumptions about their contents and reducing them)? I'm open to converting the `array`s to another type before saving as long as the data are retained exactly. | For really big arrays, I've heard about several solutions, and they mostly rely on being lazy with the I/O:
* [NumPy.memmap](http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html), maps big arrays to binary form
+ Pros :
- No dependency other than Numpy
- Transparent replacement of `ndarray` (Any class accepting ndarray accepts `memmap`)
+ Cons :
- Chunks of your array are limited to 2.5G
- Still limited by Numpy throughput
* Use Python bindings for HDF5, a bigdata-ready file format, like [PyTables](http://www.pytables.org/index.html) or [h5py](http://www.h5py.org/)
+ Pros :
- Format supports compression, indexing, and other super nice features
- Apparently the ultimate PetaByte-large file format
+ Cons :
- Learning curve of having a hierarchical format?
- Have to define what your performance needs are (see later)
* [Python's pickling](https://docs.python.org/2/library/pickle.html) system (out of the race, mentioned for Pythonicity rather than speed)
+ Pros:
- It's Pythonic ! (haha)
- Supports all sorts of objects
+ Cons:
- Probably slower than others (because aimed at any objects not arrays)
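A rough, self-contained sketch of the pickle round-trip; using `protocol=pickle.HIGHEST_PROTOCOL` usually speeds up dumps of large objects considerably:

```python
import os
import pickle
import tempfile

data = list(range(1000000))  # stand-in for a large array-like object

with tempfile.NamedTemporaryFile(delete=False) as f:
    pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
    path = f.name

with open(path, "rb") as f:
    restored = pickle.load(f)

os.remove(path)
print(restored == data)  # True
```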
---
# Numpy.memmap
From the docs of [NumPy.memmap](http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html) :
> Create a memory-map to an array stored in a binary file on disk.
>
> Memory-mapped files are used for accessing small segments of large files on disk, without reading the entire file into memory
>
> The memmap object can be used anywhere an ndarray is accepted. Given any memmap `fp`, `isinstance(fp, numpy.ndarray)` returns True.
---
# HDF5 arrays
From the [h5py doc](http://www.h5py.org/)
> Lets you store huge amounts of numerical data, and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk, as if they were real NumPy arrays. Thousands of datasets can be stored in a single file, categorized and tagged however you want.
The format supports compression of data in various ways (more bits loaded for same I/O read), but this means that the data becomes less easy to query individually; in your case (purely loading/dumping arrays) it might be efficient |
List comprehension, check if item is unique | 30,331,907 | 24 | 2015-05-19T17:07:15Z | 30,332,002 | 25 | 2015-05-19T17:12:59Z | [
"python",
"list",
"dictionary",
"list-comprehension"
] | I am trying to write a list comprehension statement that will only add an item if it's not currently contained in the list. Is there a way to check the current items in the list that is currently being constructed? Here is a brief example:
**Input**
```
{
"Stefan" : ["running", "engineering", "dancing"],
"Bob" : ["dancing", "art", "theatre"],
"Julia" : ["running", "music", "art"]
}
```
**Output**
```
["running", "engineering", "dancing", "art", "theatre", "music"]
```
**Code without using a list comprehension**
```
output = []
for name, hobbies in input.items():
    for hobby in hobbies:
        if hobby not in output:
            output.append(hobby)
```
**My Attempt**
```
[hobby for name, hobbies in input.items() for hobby in hobbies if hobby not in ???]
``` | You can use `set` and set comprehension:
```
{hobby for name, hobbies in input.items() for hobby in hobbies}
```
As [m.wasowski mentioned](http://stackoverflow.com/questions/30331907/list-comprehension-check-if-item-is-unique#comment48760965_30332002), we don't use the `name` here, so we can use `input.values()` instead:
```
{hobby for hobbies in input.values() for hobby in hobbies}
```
If you really need a list as the result, you can do this (but notice that usually you can work with sets without any problem):
```
list({hobby for hobbies in input.values() for hobby in hobbies})
``` |
List comprehension, check if item is unique | 30,331,907 | 24 | 2015-05-19T17:07:15Z | 30,332,029 | 13 | 2015-05-19T17:14:29Z | [
"python",
"list",
"dictionary",
"list-comprehension"
] | I am trying to write a list comprehension statement that will only add an item if it's not currently contained in the list. Is there a way to check the current items in the list that is currently being constructed? Here is a brief example:
**Input**
```
{
"Stefan" : ["running", "engineering", "dancing"],
"Bob" : ["dancing", "art", "theatre"],
"Julia" : ["running", "music", "art"]
}
```
**Output**
```
["running", "engineering", "dancing", "art", "theatre", "music"]
```
**Code without using a list comprehension**
```
output = []
for name, hobbies in input.items():
    for hobby in hobbies:
        if hobby not in output:
            output.append(hobby)
```
**My Attempt**
```
[hobby for name, hobbies in input.items() for hobby in hobbies if hobby not in ???]
``` | As [this answer](http://stackoverflow.com/a/480227/67579) suggests: you can use a uniqueness filter:
```
def f7(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
```
and call with:
```
>>> f7(hobby for name, hobbies in input.items() for hobby in hobbies)
['running', 'engineering', 'dancing', 'art', 'theatre', 'music']
```
I would implement the *uniqueness filter* separately since a design rule says *"different things should be handled by different classes/methods/components/whatever"*. Furthermore you can simply reuse this method if necessary.
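For instance, a quick check of the filter on a list with duplicates (the function is repeated here only so the snippet is self-contained):

```python
def f7(seq):
    seen = set()
    seen_add = seen.add
    # keep x only the first time it is seen; seen_add returns None (falsy)
    return [x for x in seq if not (x in seen or seen_add(x))]

print(f7(['running', 'dancing', 'running', 'art', 'dancing']))
# ['running', 'dancing', 'art']
```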
Another advantage is - as is written at the [linked answer](http://stackoverflow.com/a/480227/67579) - that the *order* of the items is preserved. For some applications, this might be necessary. |
List comprehension, check if item is unique | 30,331,907 | 24 | 2015-05-19T17:07:15Z | 30,332,080 | 7 | 2015-05-19T17:17:22Z | [
"python",
"list",
"dictionary",
"list-comprehension"
] | I am trying to write a list comprehension statement that will only add an item if it's not currently contained in the list. Is there a way to check the current items in the list that is currently being constructed? Here is a brief example:
**Input**
```
{
"Stefan" : ["running", "engineering", "dancing"],
"Bob" : ["dancing", "art", "theatre"],
"Julia" : ["running", "music", "art"]
}
```
**Output**
```
["running", "engineering", "dancing", "art", "theatre", "music"]
```
**Code without using a list comprehension**
```
output = []
for name, hobbies in input.items():
for hobby in hobbies:
if hobby not in output:
output.append(hobby)
```
**My Attempt**
```
[hobby for name, hobbies in input.items() for hobby in hobbies if hobby not in ???]
``` | sets and dictionaries are your friends here:
```
from collections import OrderedDict
from itertools import chain # 'flattens' collection of iterables
data = {
"Stefan" : ["running", "engineering", "dancing"],
"Bob" : ["dancing", "art", "theatre"],
"Julia" : ["running", "music", "art"]
}
# using set is the easiest way, but sets are unordered:
print {hobby for hobby in chain.from_iterable(data.values())}
# output:
# set(['art', 'theatre', 'dancing', 'engineering', 'running', 'music'])
# or use OrderedDict if you care about ordering:
print OrderedDict(
(hobby, None) for hobby in chain.from_iterable(data.values())
).keys()
# output:
# ['dancing', 'art', 'theatre', 'running', 'engineering', 'music']
``` |
Finding red color using Python & OpenCV | 30,331,944 | 11 | 2015-05-19T17:09:43Z | 30,333,177 | 7 | 2015-05-19T18:17:00Z | [
"python",
"image",
"opencv",
"hsv"
] | I am trying to extract red color from an image. I have code that applies threshold to leave only values from specified range:
```
img=cv2.imread('img.bmp')
img_hsv=cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_red = np.array([0,50,50]) #example value
upper_red = np.array([10,255,255]) #example value
mask = cv2.inRange(img_hsv, lower_red, upper_red)
img_result = cv2.bitwise_and(img, img, mask=mask)
```
But, as I checked, red can have a Hue value in a range from, let's say, 0 to 10, as well as in a range from 170 to 180. Therefore, I would like to keep values from either of those two ranges. I tried setting the threshold from 10 to 170 and using the cv2.bitwise\_not function, but then I get all the white color as well. I think the best option would be to create a mask for each range and use them both, so I somehow have to join them together before proceeding.
Is there a way I could join two masks using OpenCV? Or is there some other way I could achieve my goal?
Edit: I came up with a not very elegant, but working, solution:
```
image_result = np.zeros((image_height,image_width,3),np.uint8)
for i in range(image_height): #those are set elsewhere
for j in range(image_width): #those are set elsewhere
if img_hsv[i][j][1]>=50 \
and img_hsv[i][j][2]>=50 \
and (img_hsv[i][j][0] <= 10 or img_hsv[i][j][0]>=170):
image_result[i][j]=img_hsv[i][j]
```
It pretty much satisfies my needs, and OpenCV's functions probably do pretty much the same, but if there's a better way to do that (using some dedicated function and writing less code) please share it with me. :) | I would just add the masks together, and use `np.where` to mask the original image.
```
import cv2
import numpy as np

img=cv2.imread("img.bmp")
img_hsv=cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# lower mask (0-10)
lower_red = np.array([0,50,50])
upper_red = np.array([10,255,255])
mask0 = cv2.inRange(img_hsv, lower_red, upper_red)
# upper mask (170-180)
lower_red = np.array([170,50,50])
upper_red = np.array([180,255,255])
mask1 = cv2.inRange(img_hsv, lower_red, upper_red)
# join my masks
mask = mask0+mask1
# set my output img to zero everywhere except my mask
output_img = img.copy()
output_img[np.where(mask==0)] = 0
# or your HSV image, which I *believe* is what you want
output_hsv = img_hsv.copy()
output_hsv[np.where(mask==0)] = 0
```
This should be much faster and much more readable than looping through each pixel of your image. |
Pip.exe from Python on Windows 10 | 30,335,122 | 4 | 2015-05-19T20:04:05Z | 31,065,211 | 16 | 2015-06-26T04:55:37Z | [
"python",
"windows",
"python-3.x",
"windows-10"
] | I installed the latest version of Python from www.python.org. Python 3.4.3. I then go to run pip and I get
"Fatal error in launcher: Job information querying failed"
Searching for this error message points me to similar issues running Python under wine. If you grab a pre-compiled pip.exe it will work but it seems that when you install, the pip.exe is generated as part of the installer and this pip.exe does not work.
Further I am dealing with a build script that creates a virtual python environment that uses pip.exe and results in the same error. Not sure how to fix this. Also not sure how pip.exe is generated. | You can use `python -m pip install package` |
Seaborn load_dataset | 30,336,324 | 9 | 2015-05-19T21:16:07Z | 30,337,377 | 15 | 2015-05-19T22:40:26Z | [
"python",
"boxplot",
"seaborn"
] | I am trying to get a grouped boxplot working using Seaborn as per the [example](http://stanford.edu/~mwaskom/software/seaborn/examples/grouped_boxplot.html)
I can get the above example working, however the line:
```
tips = sns.load_dataset("tips")
```
is not explained at all. I have located the tips.csv file, but I can't seem to find adequate documentation on what load\_dataset specifically does. I tried to create my own csv and load this, but to no avail. I also renamed the tips file and it still worked...
My question is thus:
Where is `load_dataset` actually looking for files? Can I actually use this for my own boxplots?
EDIT: I managed to get my own boxplots working using my own `DataFrame`, but I am still wondering whether `load_dataset` is used for anything more than mysterious tutorial examples. | `load_dataset` looks for online csv files on <https://github.com/mwaskom/seaborn-data>. Here's the docstring:
> Load a dataset from the online repository (requires internet).
>
> Parameters
>
> ---
>
> name : str
> Name of the dataset (`name`.csv on
> <https://github.com/mwaskom/seaborn-data>). You can obtain list of
> available datasets using :func:`get_dataset_names`
>
> kws : dict, optional
> Passed to pandas.read\_csv
If you want to modify that online dataset or bring in your own data, you likely have to use [pandas](http://pandas.pydata.org). `load_dataset` actually returns a pandas `DataFrame` object, which you can confirm with `type(tips)`.
If you already created your own data in a csv file called, say, tips2.csv, and saved it in the same location as your script, use this (after installing pandas) to load it in:
```
import pandas as pd
tips2 = pd.read_csv('tips2.csv')
``` |
django 1.7.8 not sending emails with password reset | 30,336,617 | 15 | 2015-05-19T21:39:04Z | 30,762,182 | 12 | 2015-06-10T16:30:08Z | [
"python",
"django",
"email"
] | Relevant part of `urls.py` for the project:
```
from django.conf.urls import include, url, patterns
urlpatterns = patterns('',
# other ones ...
url(r'^accounts/password/reset/$',
'django.contrib.auth.views.password_reset',
{'post_reset_redirect' : '/accounts/password/reset/done/'}),
url(r'^accounts/password/reset/done/$',
'django.contrib.auth.views.password_reset_done'),
url(r'^accounts/password/reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$',
'django.contrib.auth.views.password_reset_confirm',
{'post_reset_redirect' : '/accounts/password/done/'}),
url(r'^accounts/password/done/$',
'django.contrib.auth.views.password_reset_complete'),
)
```
And by request, here's the password reset form:
```
{% extends "site_base.html" %}
{% block title %}Reset Password{% endblock %}
{% block content %}
<p>Please specify your email address to receive instructions for resetting it.</p>
<form action="" method="post">
<div style="display:none">
<input type="hidden" value="{{ csrf_token }}" name="csrfmiddlewaretoken">
</div>
{{ form.email.errors }}
<p><label for="id_email">E-mail address:</label> {{ form.email }} <input type="submit" value="Reset password" /></p>
</form>
{% endblock %}
```
But whenever I navigate to the `/accounts/password/reset/` page, fill in an email, and press enter, the page immediately redirects to `/accounts/password/reset/done/` and no email is sent.
My relevant `settings.py` variables:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = '[email protected]'
EMAIL_HOST_PASSWORD = 'XXXXXX'
EMAIL_PORT = 587
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
SERVER_EMAIL = EMAIL_HOST_USER
```
And I know email works because my registration flow with `django-registration-redux` works flawlessly.
Any ideas? | I tried to recreate your situation and I faced the following scenarios:
1. Mail is only sent to active users. An email address associated with no user will not get any email (obviously).
2. I got an error from the form's save method at line 270, for `email = loader.render_to_string(email_template_name, c)`:
> NoReverseMatch at /accounts/password/reset/ Reverse for
> 'password\_reset\_confirm' with arguments '()' and keyword arguments
> '{'token': '42h-4e68c02f920d69a82fbf', 'uidb64': b'Mg'}' not found. 0
> pattern(s) tried: []
It seems that your urls.py doesn't contain any url named 'password\_reset\_confirm'. So you should change your url:
```
url(r'^accounts/password/reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$',
'django.contrib.auth.views.password_reset_confirm',
{'post_reset_redirect': '/accounts/password/done/'},),
```
To:
```
url(r'^accounts/password/reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$',
'django.contrib.auth.views.password_reset_confirm',
{'post_reset_redirect': '/accounts/password/done/'}, name='password_reset_confirm'),
```
If you have set your email configuration correctly, you should get emails with no problem. If you are still facing this issue, please use a debugger to check where it's raising exceptions.
PS: I have tested with django 1.7.8 and templates resides in: `Python34\Lib\site-packages\django\contrib\admin\templates\registration`. Urls and views are used as you have written in the question. |
Scraping dynamic content using python-Scrapy | 30,345,623 | 7 | 2015-05-20T09:27:12Z | 30,378,765 | 15 | 2015-05-21T15:56:24Z | [
"python",
"web-scraping",
"scrapy"
] | Disclaimer: I've seen numerous other similar posts on StackOverflow and tried to do it the same way, but they don't seem to work on this website.
I'm using Python-Scrapy for getting data from koovs.com.
However, I'm not able to get the product size, which is dynamically generated. Specifically, if someone could guide me a little on getting the 'Not available' size tag from the drop-down menu on [this](http://www.koovs.com/only-onlall-stripe-ls-shirt-59554.html?from=category-651&skuid=236376) link, I'd be grateful.
I am able to get the size list statically, but that only gives me the list of sizes, not which of them are available. | You can also solve it with [`ScrapyJS`](https://github.com/scrapinghub/scrapyjs) (no need for `selenium` and a real browser):
> This library provides Scrapy+JavaScript integration using Splash.
Follow the installation instructions for [`Splash`](http://splash.readthedocs.org/en/latest/install.html) and [`ScrapyJS`](https://github.com/scrapinghub/scrapyjs#installation), start the splash docker container:
```
$ docker run -p 8050:8050 scrapinghub/splash
```
Put the following settings into `settings.py`:
```
SPLASH_URL = 'http://192.168.59.103:8050'
DOWNLOADER_MIDDLEWARES = {
'scrapyjs.SplashMiddleware': 725,
}
DUPEFILTER_CLASS = 'scrapyjs.SplashAwareDupeFilter'
```
And here is your sample spider that is able to see the size availability information:
```
# -*- coding: utf-8 -*-
import scrapy
class ExampleSpider(scrapy.Spider):
name = "example"
allowed_domains = ["koovs.com"]
start_urls = (
'http://www.koovs.com/only-onlall-stripe-ls-shirt-59554.html?from=category-651&skuid=236376',
)
def start_requests(self):
for url in self.start_urls:
yield scrapy.Request(url, self.parse, meta={
'splash': {
'endpoint': 'render.html',
'args': {'wait': 0.5}
}
})
def parse(self, response):
for option in response.css("div.select-size select.sizeOptions option")[1:]:
print option.xpath("text()").extract()
```
Here is what is printed on the console:
```
[u'S / 34 -- Not Available']
[u'L / 40 -- Not Available']
[u'L / 42']
``` |
Python: '{:,}'.format() why is this working? | 30,346,934 | 4 | 2015-05-20T10:25:17Z | 30,347,049 | 8 | 2015-05-20T10:31:20Z | [
"python",
"string",
"string-formatting"
] | There is a kata in codewars where the task is to write a function that takes an integer in input and outputs a string with currency format. For example `123456` -> `"123,456"`.
I had a solution, but it was much uglier than this one with string formatting:
```
def to_currency(price):
return '{:,}'.format(price)
```
I've read the documentation, but I still don't know how exactly this works. | You can use Python's format language like:
```
'{name:format}'.format(...)
```
`name` is optional, and can be empty:
```
'{:format}'.format(...)
```
`format` is a format specifier. If it's not given, it's usually inferred from the type of the argument given to `format(...)`.
In this case, `format` is `,`, which instructs Python to add group dividers, as requested.
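For instance (a couple of quick illustrations; the `format` builtin accepts the same specifier without the braces):

```python
print('{:,}'.format(1234567))       # 1,234,567
print(format(1234567, ','))         # 1,234,567 -- same spec via the builtin
print('{:,.2f}'.format(1234567.8))  # 1,234,567.80 -- combined with precision
```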
From <https://docs.python.org/2/library/string.html#formatspec> :
> The `,` option signals the use of a comma for a thousands separator.
> For a locale aware separator, use the `n` integer presentation type
> instead. |
python pandas: drop a df column if condition | 30,351,125 | 4 | 2015-05-20T13:28:48Z | 30,351,362 | 7 | 2015-05-20T13:37:57Z | [
"python",
"pandas",
"dataframe"
] | I would like to drop a given column from a pandas dataframe IF all the values in the column is "0%".
my df:
```
data = {'UK': ['11%', '16%', '7%', '52%', '2%', '5%', '3%', '3%'],
'US': ['0%', '0%', '0%', '0%', '0%', '0%', '0%', '0%'],
'DE': ['11%', '16%', '7%', '52%', '2%', '5%', '3%', '3%'],
'FR': ['11%', '16%', '7%', '52%', '2%', '5%', '3%', '3%']
}
dummy_df = pd.DataFrame(data,
index= ['cat1','cat2','cat3','cat4','cat5','cat6','cat7','cat8'],
columns=['UK', 'US', 'DE', 'FR'])
```
my code so far:
```
dummy_df.drop(dummy_df == '0%',inplace=True)
```
I get a value error:
```
ValueError: labels ['UK' 'US' 'DE' 'FR'] not contained in axis
``` | ```
In [186]: dummy_df.loc[:, ~(dummy_df == '0%').all()]
Out[186]:
UK DE FR
cat1 11% 11% 11%
cat2 16% 16% 16%
cat3 7% 7% 7%
cat4 52% 52% 52%
cat5 2% 2% 2%
cat6 5% 5% 5%
cat7 3% 3% 3%
cat8 3% 3% 3%
```
Explanation:
You already have the comparison with '0%'; it gives the following dataframe:
```
In [182]: dummy_df == '0%'
Out[182]:
UK US DE FR
cat1 False True False False
cat2 False True False False
cat3 False True False False
cat4 False True False False
cat5 False True False False
cat6 False True False False
cat7 False True False False
cat8 False True False False
```
Now we want to know which columns have all `True`s:
```
In [183]: (dummy_df == '0%').all()
Out[183]:
UK False
US True
DE False
FR False
dtype: bool
```
And finally, we can index with these boolean values (but taking the opposite with `~`, as we *don't* want to select where this is `True`): `dummy_df.loc[:, ~(dummy_df == '0%').all()]`.
Similarly, you can also do: `dummy_df.loc[:, (dummy_df != '0%').any()]` (selects columns where at least one value is not equal to '0%') |
In my date time value I want to use regex to strip out the slash and colon from time and replace it with underscore | 30,352,394 | 4 | 2015-05-20T14:18:27Z | 30,352,443 | 8 | 2015-05-20T14:21:00Z | [
"python",
"regex",
"selenium",
"selenium-webdriver",
"webdriver"
] | I am using Python, Webdriver for my automated test. My scenario is on the Admin page of our website I click Add project button and i enter a project name.
The project name I enter is in the format of `LADEMO_IE_05/20/1515:11:38`
It is a date and time at the end.
What I would like to do is use a regex to find the / and :
and replace them with an underscore \_
I have worked out the regex expression:
```
[0-9]{2}[/][0-9]{2}[/][0-9]{4}:[0-9]{2}[:][0-9]{2}
```
This finds 2 digits then `/` followed by 2 digits then `/` and so on.
I would like to replace `/` and `:` with `_`.
Can I do this in Python using import re? I need some help with the syntax please.
My method which returns the date is:
```
def get_datetime_now(self):
dateTime_now = datetime.datetime.now().strftime("%x%X")
print dateTime_now #prints e.g. 05/20/1515:11:38
return dateTime_now
```
My code snippet for entering the project name into the text field is:
```
project_name_textfield.send_keys('LADEMO_IE_' + self.get_datetime_now())
```
The Output is e.g.
```
LADEMO_IE_05/20/1515:11:38
```
I would like the Output to be:
```
LADEMO_IE_05_20_1515_11_38
``` | Just format the datetime using `strftime()` into the [desired format](https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior):
```
>>> datetime.datetime.now().strftime("%m_%d_%y%H_%M_%S")
'05_20_1517_20_16'
``` |
Python Multiple Inheritance: call super on all | 30,353,498 | 9 | 2015-05-20T15:02:00Z | 30,354,798 | 9 | 2015-05-20T16:00:23Z | [
"python"
] | I have the following two superclasses:
```
class Parent1(object):
def on_start(self):
print('do something')
class Parent2(object):
def on_start(self):
print('do something else')
```
I would like a child class that inherits from both to be able to call super for both parents.
```
class Child(Parent1, Parent2):
def on_start(self):
# super call on both parents
```
What is the Pythonic way to do this? Thanks. | Exec summary:
[Super](https://docs.python.org/2/library/functions.html#super) only executes one method based on the class hierarchy's [`__mro__`](http://python-history.blogspot.com/2010/06/method-resolution-order.html). If you want to execute more than one method by the same name, your parent classes need to be written to cooperatively do that (by calling `super` implicitly or explicitly) or you need to loop over [`__bases__`](https://docs.python.org/2/library/stdtypes.html#class.__bases__) or the [`__mro__`](https://docs.python.org/2/library/stdtypes.html#class.__mro__) values of the child classes.
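You can inspect that resolution order directly via `__mro__`; a minimal sketch:

```python
class A(object):
    pass

class B(object):
    pass

class C(A, B):
    pass

# left to right, bottom to top, ending at object
print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']
```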
The job of `super` is to delegate part or all of a method call to some existing method in the classes ancestor tree. *The delegation may go well outside of classes that you control. The method name delegated needs to exist in the group of base classes.*
The method presented below using `__bases__` with `try/except` is closest to a complete answer to your question of how to call each parent's method of the same name.
---
`super` is useful in the situation where you want to call one of your parent's methods, but you don't know which parent:
```
class Parent1(object):
pass
class Parent2(object):
# if Parent 2 had on_start - it would be called instead
# because Parent 2 is left of Parent 3 in definition of Child class
pass
class Parent3(object):
def on_start(self):
print('the ONLY class that has on_start')
class Child(Parent1, Parent2, Parent3):
def on_start(self):
super(Child, self).on_start()
```
In this case, `Child` has three immediate parents. Only one, Parent3, has an `on_start` method. Calling `super` resolves that only `Parent3` has `on_start` and that is the method that is called.
If `Child` inherits from more than one class that has an `on_start` method, the order is resolved left to right (as listed in the class definition) and bottom to top (as logical inheritance). ***Only one of the methods is called and the other methods of the same name in the hierarchy of classes have been superseded.***
So, more commonly:
```
class GreatGrandParent(object):
pass
class GrandParent(GreatGrandParent):
def on_start(self):
print('the ONLY class that has on_start')
class Parent(GrandParent):
# if Parent had on_start, it would be used instead
pass
class Child(Parent):
def on_start(self):
super(Child, self).on_start()
```
---
If you want to call multiple parents' methods by method name, you can use `__bases__` instead of super in this case and iterate over the base classes of `Child` without knowing the classes by name:
```
class Parent1(object):
def on_start(self):
print('do something')
class Parent2(object):
def on_start(self):
print('do something else')
class Child(Parent1, Parent2):
def on_start(self):
for base in Child.__bases__:
base.on_start(self)
>>> Child().on_start()
do something
do something else
```
If there is a possibility one of the base classes does not have `on_start` you can use `try/except:`
```
class Parent1(object):
def on_start(self):
print('do something')
class Parent2(object):
def on_start(self):
print('do something else')
class Parent3(object):
pass
class Child(Parent1, Parent2, Parent3):
def on_start(self):
for base in Child.__bases__:
try:
base.on_start(self)
except AttributeError:
# handle that one of those does not have that method
print('"{}" does not have an "on_start"'.format(base.__name__))
>>> Child().on_start()
do something
do something else
"Parent3" does not have an "on_start"
```
Using `__bases__` will act similarly to `super`, but for each class hierarchy defined in the `Child` definition. That is, it will go through each forebear class until `on_start` is satisfied **once** for each parent of the class:
```
class GGP1(object):
def on_start(self):
print('GGP1 do something')
class GP1(GGP1):
def on_start(self):
print('GP1 do something else')
class Parent1(GP1):
pass
class GGP2(object):
def on_start(self):
print('GGP2 do something')
class GP2(GGP2):
pass
class Parent2(GP2):
pass
class Child(Parent1, Parent2):
def on_start(self):
for base in Child.__bases__:
try:
base.on_start(self)
except AttributeError:
# handle that one of those does not have that method
print('"{}" does not have an "on_start"'.format(base.__name__))
>>> Child().on_start()
GP1 do something else
GGP2 do something
# Note that 'GGP1 do something' is NOT printed since on_start was satisfied by
# a descendant class L to R, bottom to top
```
Now imagine a more complex inheritance structure:

If you want each and every forebear's `on_start` method, you could use `__mro__` and filter out the classes that do not have `on_start` as part of their `__dict__` for that class. Otherwise, you will potentially get an inherited forebear's `on_start` method. In other words, `hasattr(c, 'on_start')` is `True` for every class that `Child` is a descendant from (except `object` in this case) since `Ghengis` has an `on_start` attribute and all classes are descendant classes from Ghengis.
\*\* Warning -- Demo Only \*\*
```
class Ghengis(object):
def on_start(self):
print('Khan -- father to all')
class GGP1(Ghengis):
def on_start(self):
print('GGP1 do something')
class GP1(GGP1):
pass
class Parent1(GP1):
pass
class GGP2(Ghengis):
pass
class GP2(GGP2):
pass
class Parent2(GP2):
def on_start(self):
print('Parent2 do something')
class Child(Parent1, Parent2):
def on_start(self):
for c in Child.__mro__[1:]:
if 'on_start' in c.__dict__.keys():
c.on_start(self)
>>> Child().on_start()
GGP1 do something
Parent2 do something
Khan -- father to all
```
But this also has a problem -- if `Child` is further subclassed, then the child of Child will also loop over the same `__mro__` chain.
As stated by Raymond Hettinger:
> super() is in the business of delegating method calls to some class in
> the instance's ancestor tree. For reorderable method calls to work,
> the classes need to be designed cooperatively. This presents three
> easily solved practical issues:
>
> 1) the method being called by super() needs to exist
>
> 2) the caller and callee need to have a matching argument signature and
>
> 3) every occurrence of the method needs to use super()
The solution is to write cooperative classes that uniformly use `super` through the ancestor list or creative use of the [adapter pattern](http://en.wikipedia.org/wiki/Adapter_pattern) to adapt classes you cannot control. These methods are discussed more completely in the article [Pythonâs super() considered super!](https://rhettinger.wordpress.com/2011/05/26/super-considered-super/) by Raymond Hettinger. |
defaultdict with default value 1? | 30,356,892 | 3 | 2015-05-20T17:48:30Z | 30,357,006 | 7 | 2015-05-20T17:53:54Z | [
"python",
"defaultdict"
] | I am new to Python, and I read this code snippet somewhere. It's an implementation of counting sort.
The code is as below:
```
from collections import defaultdict
def sort_colors(A):
ht = {} # a hash map
ht = defaultdict(lambda:0, ht) # with default value 1
for i in A:
ht[i] += 1
ret = []
for k in [0, 1, 2]:
ret.extend([k]*ht[k])
return ret
```
As in the first two lines of the func, it's
```
ht = {}
ht = defaultdict(lambda:0, ht)
```
I am not quite clear about this initialization. Could you kindly help me figure it out? And also, shall we just replace these two lines with the following?
```
ht = defaultdict(int) # default value 0
``` | ```
ht = {}
ht = defaultdict(lambda:0, ht)
```
`defaultdict`s are different from `dict` in that when you try to access a regular `dict` with a key that does not exist, it raises a `KeyError`.
`defaultdict`, however, doesn't raise an error: it creates the key for you instead. With which value? With the return of the `callable` you passed as an argument. In this case, every new key will be created with value `0` (which is the return of the simple `lambda` function `lambda:0`), which also happens to be the same return as `int()`, so in **this case**, there would be no difference in changing the default function to `int()`.
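A quick illustration of the difference:

```python
from collections import defaultdict

plain = {}
try:
    plain['missing'] += 1          # plain dict: unknown key raises
except KeyError:
    print('KeyError on a plain dict')

counts = defaultdict(lambda: 0)
counts['missing'] += 1             # key is created as 0, then incremented
print(counts['missing'])           # 1
```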
Breaking down this line in more detail: `ht = defaultdict(lambda:0, ht)`
The first argument is a function, which is a callable object. This is the function that will be called to create a new value for a nonexistent key. The second argument, `ht`, is optional and refers to the base dictionary that the new `defaultdict` will be built on. Therefore, if `ht` had some keys and values, the `defaultdict` would also have these keys with the corresponding values. If you tried to access these keys, you would get the old values.
However, if you did not pass the base dictionary, a brand new `defaultdict` would be created, and thus, all new keys accessed would get the default value returned from the callable.
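For example, keys already present in the base dictionary keep their values, while unknown keys fall back to the default:

```python
from collections import defaultdict

base = {'a': 5}
d = defaultdict(lambda: 0, base)
print(d['a'])  # 5 -- existing key keeps its old value
print(d['b'])  # 0 -- new key, created with the default
```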
(In this case, as `ht` is initially an empty `dict`, there would be no difference at all in doing `ht = defaultdict(lambda:0)`, `ht = defaultdict(int)` or `ht = defaultdict(lambda:0, ht)`: they would all build the same `defaultdict`.) |
Pandas - FillNa with another column | 30,357,276 | 7 | 2015-05-20T18:08:33Z | 30,357,382 | 19 | 2015-05-20T18:14:27Z | [
"python",
"pandas"
] | I would like to fill missing values in one column with values from another column.
I read that looping through each row would be very bad practice and that it would be better to do everything in one go but I could not find out how to do it with the `fillna` method.
Data Before
```
Day Cat1 Cat2
1 cat mouse
2 dog elephant
3 cat giraf
4 NaN ant
```
Data After
```
Day Cat1 Cat2
1 cat mouse
2 dog elephant
3 cat giraf
4 ant ant
``` | You can provide this column to `fillna` (see [docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html)); it will use those values on matching indexes to fill:
```
In [17]: df['Cat1'].fillna(df['Cat2'])
Out[17]:
0 cat
1 dog
2 cat
3 ant
Name: Cat1, dtype: object
``` |
How to get one number specific times in an array python | 30,358,227 | 14 | 2015-05-20T19:00:33Z | 30,358,261 | 17 | 2015-05-20T19:02:37Z | [
"python",
"python-2.7"
] | I have one array like `A = [1,2,3]` and another array `B = [4,5,6]`. Now, I need another array C where each element of B occurs the number of times given by the corresponding element of A. Like,
`C = [4, 5, 5, 6, 6, 6]` | ```
A = [1,2,3]
B = [4,5,6]
C = [b_item for a_item, b_item in zip(A,B) for _ in range(a_item)]
print C
```
Result:
```
[4, 5, 5, 6, 6, 6]
```
This is a one-line equivalent to:
```
C = []
for a_item, b_item in zip(A,B):
for _ in range(a_item):
C.append(b_item)
```
... Which is roughly equivalent to
```
C = []
for i in range(min(len(A), len(B))):
a_item = A[i]
b_item = B[i]
for _ in range(a_item):
C.append(b_item)
```
(N.B. Don't get tripped up by the underscore. [It's an ordinary variable](http://stackoverflow.com/questions/5893163/what-is-the-purpose-of-the-single-underscore-variable-in-python). It is conventionally used when you don't actually have to refer to the variable's value, as in this example) |
Python Dictionary/Loop Output | 30,358,341 | 3 | 2015-05-20T19:06:51Z | 30,358,533 | 7 | 2015-05-20T19:17:00Z | [
"python"
] | * Make a dictionary for nine Tampa Bay Rays that are given. Use the
player names as keys and a list for each value.
* Each value list should hold the position played by the player, the
batting order, and current batting average. See above.
* When the dictionary is complete, use a for loop to display the
dictionary keys and values. This is what I got for this
* Next, use loop(s) to print the "lineup" (the dictionary in batting
order). **This is the step I need some help with, not sure how I go
about doing an order for a dictionary like so. A list made more sense
to me but that is not the question.**
```
main():
rays_players = { 'DeJesus': ['DH', 6, 299],
'Loney': ['1B', 4, 222],
'Rivera': ['C', 9, 194],
'Forsythe': ['2B', 5, 304],
'Souza Jr': ['RF', 2, 229],
'Longoria': ['3B', 3, 282],
'Cabrera': ['SS', 7, 214],
'Kiermaier': ['CF', 1, 240],
'Guyer': ['LF', 8, 274] }
for key in rays_players:
print(key, rays_players[key])
main()
```
**This is what I have been trying, but it is not working, I am very new at this:**
```
for key in sorted(rays_players.items(), key=lambda v: (v)):
print ("%s: %s" % (key))
```
**Step 4 is supposed to look like this:**
Batting 1: CF Kiermaier, current avg: 240
Batting 2: RF Souza Jr, current avg: 229
Batting 3: 3B Longoria, current avg: 282
Batting 4: 1B Loney, current avg: 222
Batting 5: 2B Forsythe, current avg: 304
Batting 6: DH DeJesus, current avg: 299
Batting 7: SS Cabrera, current avg: 214
Batting 8: LF Guyer, current avg: 274
Batting 9: C Rivera, current avg: 194 | Hope this helps:
```
rays_players = {'DeJesus': ['DH', 6, 299],
'Loney': ['1B', 4, 222],
'Rivera': ['C', 9, 194],
'Forsythe': ['2B', 5, 304],
'Souza Jr': ['RF', 2, 229],
'Longoria': ['3B', 3, 282],
'Cabrera': ['SS', 7, 214],
'Kiermaier': ['CF', 1, 240],
'Guyer': ['LF', 8, 274]}
for key, value in sorted(rays_players.items(), key=lambda v: v[1][1]):
print("Batting {}: {} {}, current avg: {}".format(value[1], value[0], key, value[2]))
``` |
Asynchronous exception handling in Python | 30,361,824 | 16 | 2015-05-20T23:04:46Z | 30,364,143 | 17 | 2015-05-21T03:39:19Z | [
"python",
"python-3.x",
"python-asyncio"
] | I've the following code using `asyncio` and `aiohttp` to make asynchronous HTTP requests.
```
import sys
import asyncio
import aiohttp
@asyncio.coroutine
def get(url):
try:
print('GET %s' % url)
resp = yield from aiohttp.request('GET', url)
except Exception as e:
raise Exception("%s has error '%s'" % (url, e))
else:
if resp.status >= 400:
raise Exception("%s has error '%s: %s'" % (url, resp.status, resp.reason))
return (yield from resp.text())
@asyncio.coroutine
def fill_data(run):
url = 'http://www.google.com/%s' % run['name']
run['data'] = yield from get(url)
def get_runs():
runs = [ {'name': 'one'}, {'name': 'two'} ]
loop = asyncio.get_event_loop()
task = asyncio.wait([fill_data(r) for r in runs])
loop.run_until_complete(task)
return runs
try:
get_runs()
except Exception as e:
print(repr(e))
sys.exit(1)
```
For some reason, exceptions raised inside the `get` function are not caught:
```
Future/Task exception was never retrieved
Traceback (most recent call last):
File "site-packages/asyncio/tasks.py", line 236, in _step
result = coro.send(value)
File "mwe.py", line 25, in fill_data
run['data'] = yield from get(url)
File "mwe.py", line 17, in get
raise Exception("%s has error '%s: %s'" % (url, resp.status, resp.reason))
Exception: http://www.google.com/two has error '404: Not Found'
```
So, what is the correct way to handle exceptions raised by coroutines? | `asyncio.wait` doesn't actually consume the `Future`s passed to it; it just waits for them to complete, and then returns the `Future` objects:
> *coroutine* **`asyncio.wait(futures, *, loop=None, timeout=None,
> return_when=ALL_COMPLETED)`**
>
> Wait for the Futures and coroutine objects
> given by the sequence futures to complete. Coroutines will be wrapped
> in Tasks. Returns two sets of `Future`: (done, pending).
Until you actually `yield from` the items in the `done` list, they'll remain unconsumed. Since your program exits without consuming the futures, you see the "exception was never retrieved" messages.
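For example (sketched with the newer `async def`/`await` syntax rather than the `@asyncio.coroutine` decorator used above; the `boom` coroutine is made up for illustration), calling `result()` on each completed `Future` is what retrieves — and thereby consumes — its exception:

```
import asyncio

async def boom():
    raise ValueError("boom")

async def main():
    # asyncio.wait() only waits; it hands the Future objects back untouched
    done, pending = await asyncio.wait([asyncio.ensure_future(boom())])
    errors = []
    for fut in done:
        try:
            fut.result()  # retrieving the result re-raises the stored exception
        except ValueError as exc:
            errors.append(exc)
    return errors

print(asyncio.run(main()))
```

If the `for` loop is skipped, the interpreter prints the same "exception was never retrieved" warning at shutdown.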
For your use-case, it probably makes more sense to use [`asyncio.gather`](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather), which will actually consume each `Future`, and then return a single `Future` that aggregates all their results (or raises the first `Exception` thrown by a future in the input list).
```
def get_runs():
runs = [ {'name': 'one'}, {'name': 'two'} ]
loop = asyncio.get_event_loop()
tasks = asyncio.gather(*[fill_data(r) for r in runs])
loop.run_until_complete(tasks)
return runs
```
Output:
```
GET http://www.google.com/two
GET http://www.google.com/one
Exception("http://www.google.com/one has error '404: Not Found'",)
```
Note that `asyncio.gather` actually lets you customize its behavior when one of the futures raises an exception; the default behavior is to raise the first exception it hits, but it can also just return each exception object in the output list:
> **`asyncio.gather(*coros_or_futures, loop=None, return_exceptions=False)`**
>
> Return a future aggregating results from the given coroutine objects
> or futures.
>
> All futures must share the same event loop. If all the tasks are done
> successfully, the returned futureâs result is the list of results (in
> the order of the original sequence, not necessarily the order of
> results arrival). **If `return_exceptions` is `True`, exceptions in the
> tasks are treated the same as successful results, and gathered in the
> result list; otherwise, the first raised exception will be immediately
> propagated to the returned future.** |
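A minimal sketch of that `return_exceptions` behavior (again using `async def` syntax; the `work` coroutine is invented for the example):

```
import asyncio

async def work(n):
    if n == 2:
        raise ValueError("task %d failed" % n)
    return n * 10

async def main():
    # with return_exceptions=True, the exception is placed in the result
    # list at its task's position instead of being propagated
    return await asyncio.gather(work(1), work(2), work(3),
                                return_exceptions=True)

results = asyncio.run(main())
print(results)
```

The first and third results are plain values, while the second slot holds the `ValueError` itself.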
find out if the `in` operator can be used | 30,362,511 | 2 | 2015-05-21T00:20:09Z | 30,362,563 | 7 | 2015-05-21T00:25:41Z | [
"python",
"iterator",
"containers"
] | What is the simplest (and most elegant) way, to find out if the `in` operator can be used in python?
If I open a python shell and type in:
```
"" in 2
```
it prints:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: argument of type 'int' is not iterable
```
according to the [python-docs](https://docs.python.org/2/library/stdtypes.html#index-20) an iterable is:
> `container.__iter__()`
>
> Return an iterator object. The object is required to support the iterator protocol described below. If a container supports different
> types of iteration, additional methods can be provided to specifically
> request iterators for those iteration types. (An example of an object
> supporting multiple forms of iteration would be a tree structure which
> supports both breadth-first and depth-first traversal.) This method
> corresponds to the tp\_iter slot of the type structure for Python
> objects in the Python/C API.
so
```
hasattr([], "__iter__") and hasattr({}, "__iter__")
```
return `true` as expected, but
```
hasattr("test_string", "__iter__")
```
returns false. But I can use
```
"test" in "test_string"
```
without any problems.
By elegant I mean NOT using a try-except solution | **Try/except *is* the correct and elegant way.**
First of all, whether `a in b` will raise an exception or not depends on *both* a and b, not just on b alone.
The other problem is there are multiple ways that `in` works. Here is an example of an object which supports `in`, but doesn't support iteration:
```
>>> class EvenNumbers(object):
... def __contains__(self, n):
... return n % 2 == 0
...
>>> even_numbers = EvenNumbers()
>>> 4 in even_numbers
True
>>> 5 in even_numbers
False
>>> for even_number in even_numbers:
... pass
...
TypeError: 'EvenNumbers' object is not iterable
```
And here is an example of an object which supports iteration, but doesn't define `__contains__`:
```
>>> import itertools as it
>>> even_numbers = (2*n for n in it.count())
>>> 4 in even_numbers
True
>>> even_numbers.__contains__
AttributeError: 'generator' object has no attribute '__contains__'
```
So to have a working LBYL implementation, you will have to take into account every possible method in which `a in b` can work (or not). I have only listed a couple here, there are several others. You will find your code becoming very long and ugly, and will eventually realise that try/except was the path of least resistance all along! |
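If you do want a reusable check, the least fragile version is still just the try/except wrapped in a function — a small sketch (the helper name is my own):

```
def supports_in(needle, haystack):
    """Return True if `needle in haystack` is a valid operation for this pair,
    False if it raises TypeError -- note it depends on *both* operands."""
    try:
        needle in haystack
        return True
    except TypeError:
        return False

print(supports_in("test", "test_string"))  # True
print(supports_in("", 2))                  # False
```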
find out if the `in` operator can be used | 30,362,511 | 2 | 2015-05-21T00:20:09Z | 30,362,592 | 9 | 2015-05-21T00:29:26Z | [
"python",
"iterator",
"containers"
] | What is the simplest (and most elegant) way, to find out if the `in` operator can be used in python?
If I open a python shell and type in:
```
"" in 2
```
it prints:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: argument of type 'int' is not iterable
```
according to the [python-docs](https://docs.python.org/2/library/stdtypes.html#index-20) an iterable is:
> `container.__iter__()`
>
> Return an iterator object. The object is required to support the iterator protocol described below. If a container supports different
> types of iteration, additional methods can be provided to specifically
> request iterators for those iteration types. (An example of an object
> supporting multiple forms of iteration would be a tree structure which
> supports both breadth-first and depth-first traversal.) This method
> corresponds to the tp\_iter slot of the type structure for Python
> objects in the Python/C API.
so
```
hasattr([], "__iter__") and hasattr({}, "__iter__")
```
return `true` as expected, but
```
hasattr("test_string", "__iter__")
```
returns false. But I can use
```
"test" in "test_string"
```
without any problems.
By elegant I mean NOT using a try-except solution | The iterator protocol doesn't actually require a type to support `__iter__`. It requires a type to *either* support `__iter__`, or `__getitem__` with sequential integer arguments starting from 0. See the [`iter`](https://docs.python.org/2/library/functions.html#iter) function for the best explanation of this in the docs.
So, `hasattr(x, "__iter__")` will give you false negatives if testing whether something is iterable.
So, how *can* you do this? Well, the right way, even if you don't like it, is:
```
try:
i = iter(x)
except TypeError:
# not iterable
```
---
Also, note that, as the docs for [`hasattr`](https://docs.python.org/2/library/functions.html#hasattr) explain:
> This is implemented by calling `getattr(object, name)` and seeing whether it raises an exception or not.
So, really, you're not avoiding exceptions at all; you're just coming up with a more convoluted way to raise an exception and hide that fact from yourself.
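A rough pure-Python equivalent makes that explicit (in Python 3 only `AttributeError` is swallowed; Python 2's `hasattr` swallowed every exception):

```
def my_hasattr(obj, name):
    # hasattr() is just getattr() with the exception hidden from you
    try:
        getattr(obj, name)
        return True
    except AttributeError:
        return False

print(my_hasattr([], "__iter__"))  # True
print(my_hasattr(2, "__iter__"))   # False -- ints have no __iter__
```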
---
But meanwhile, iteration is a red herring in the first place. The `in` operator is implemented with the [`__contains__`](https://docs.python.org/2/reference/datamodel.html#object.__contains__) method. Container types that don't define a `__contains__` method will *fall back to* iterating and comparing, but types aren't *required* to implement it that way. You can have a `__contains__` that's much faster than iterating could be (as with `dict` and `set`); you can even be a container without being an iterable. (Note that the [`collections` module ABCs](https://docs.python.org/2/library/collections.html#collections-abstract-base-classes) have separate `Container` and `Iterable` bases; neither one depends on the other.)
---
So, if you really wanted to do this without any exception handling, how could you?
Well, you have to check that at least one of the following is true:
* `x` has a `__contains__` method.
* `x` has an `__iter__` method.
* `x` has a `__getitem__` method that, when called with the number `0`, either returns successfully or raises `IndexError`.
Even if you accept that the last one can't possibly be tested without actually trying to call it with the number `0` and just assume that having `__getitem__` is "close enough", how can you test for this without relying on exceptions?
You really can't. You could, e.g., iterate over `dir(x)`, but that won't work for classes that define `__contains__` dynamically, e.g., in a `__getattr__` method that delegates to `self.real_sequence`.
And, even if you could, what happens if you have, say, a class that defines `__contains__` as taking no arguments? The attribute is there, but `in` is still going to raise a `TypeError`.
And all of this is ignoring the (implementation-dependent) rules on [which special methods are looked up on the object and which on the type itself](https://docs.python.org/2/reference/datamodel.html#new-style-special-lookup). For example, in CPython 2.7:
```
>>> class C(object): pass
>>> c = C()
>>> c.__contains__ = lambda x: True
>>> hasattr(c, '__contains__')
True
>>> c.__contains__(2)
True
>>> 2 in c
TypeError: argument of type 'C' is not iterable
``` |
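To illustrate the legacy `__getitem__` path mentioned earlier, here's a sketch (the class name is invented) where `in` works even though the class defines neither `__contains__` nor `__iter__`:

```
class Doubles(object):
    """No __iter__, no __contains__ -- only the old sequence protocol."""
    def __getitem__(self, i):
        if i < 5:
            return i * 2
        raise IndexError(i)

d = Doubles()
print(4 in d)   # True: Python indexes from 0 until a match or IndexError
print(9 in d)   # False
print(list(d))  # [0, 2, 4, 6, 8] -- iter() falls back to __getitem__ too
```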
How do I extend, mimic, or emulate the range function? | 30,362,799 | 22 | 2015-05-21T00:55:40Z | 30,362,874 | 17 | 2015-05-21T01:06:25Z | [
"python"
] | I made a little generator function for character ranges:
```
>>> def crange(start, end):
... for i in range(ord(start), ord(end)+1):
... yield chr(i)
...
```
And then I can do this:
```
>>> print(*crange('a','e'))
a b c d e
```
Yay! But this doesn't work:
```
>>> crange('a','e')[::2]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'generator' object is not subscriptable
```
And this works, but is O(n), unlike `range`'s O(1):
```
>>> 'y' in crange('a','z')
True
```
That means it takes about 0.35 seconds to search for character number 109,999 out of the maximum of 110,000. `109999 in range(110000)` is, of course, fast.
At that point, my first thought was to simply subclass range. Unfortunately:
```
>>> class A(range):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: type 'range' is not an acceptable base type
```
So I guess I would have to mimic it in some way that allows me to pass characters as arguments, works like `range` internally, and produces characters. Unfortunately, I'm not sure how to proceed. I tried a `dir()`:
```
>>> print(*dir(range), sep='\n')
__class__
__contains__
__delattr__
__dir__
__doc__
__eq__
__format__
__ge__
__getattribute__
__getitem__
__gt__
__hash__
__init__
__iter__
__le__
__len__
__lt__
__ne__
__new__
__reduce__
__reduce_ex__
__repr__
__reversed__
__setattr__
__sizeof__
__str__
__subclasshook__
count
index
start
step
stop
```
which lets me see what functions are in there, but I'm not sure what they're doing, or how `range` uses them. I looked for the source for `range`, but it's in C, and I don't know where to find its Python wrapper (it does have one, right?).
Where do I go from here, and should I even go there? | > At that point, my first thought was to simply subclass range.
`range` was a function in Python2 and a "final" class in Python3 ([more info here](http://stackoverflow.com/a/10114382/276949)) - in both cases not something you can sub-class. You will need to create a class `crange` that extends `object` as the base type.
```
class crange(object):
```
> And this works, but is O(n), unlike range's O(1)
In Python 3, there is a [`__contains__`](https://docs.python.org/3/reference/datamodel.html#object.__contains__) method that you will define for your object.
> For objects that don't define `__contains__()`, the membership test first tries iteration via `__iter__()`, then the old sequence iteration protocol via `__getitem__()`; see this section in the language reference.
This allows Python to determine if the value is in your range without actually enumerating the range.
For a simple example, if your range is 1 to 1,000,000, it is trivial to determine whether 23546 is in that range (`1 < 23546 < 1000000`). Of course the actual implementation is a bit more complex and adds ability to handle step increments etc.
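The same idea extended to a step increment can be sketched in pure Python (the helper name is my own; positive integer steps only, for brevity):

```
def in_range(n, start, stop, step=1):
    """O(1) membership test mimicking what range.__contains__ does for ints."""
    return start <= n < stop and (n - start) % step == 0

# agrees with the built-in range without enumerating anything
print(in_range(23546, 1, 1000000))  # True
print(in_range(7, 0, 100, step=2))  # False -- 7 isn't on the even grid
```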
Regarding:
> Yay! But this doesn't work: `>>> crange('a','e')[::2]`
In this case you need to define `__getitem__` on your object. Here's an example of some of the methods required:
```
class crange(object):
def __init__(self, start, end, step=1):
# initialize your range object
self.start = start
self.end = end
self.step = step
def __iter__(self):
# enable iteration over your object
# (assume step size is 1)
for i in range(ord(self.start), ord(self.end)+1):
yield chr(i)
def __getitem__(self, i):
# enable accessing items in your range by index
# also enable crange('a','e')[::2]
# (assuming step size of 1)
        if isinstance(i, slice):
            # implement slicing here (left as an exercise)
            raise NotImplementedError
        else:
            return chr(ord(self.start) + i)
def __contains__(self, char):
# enable O(1) determination of whether a value is in your range
# (assume step size is 1)
        return ord(self.start) <= ord(char) <= ord(self.end)
def __len__(self):
# return length (assuming step size of 1)
        return ord(self.end) - ord(self.start) + 1
``` |
How do I extend, mimic, or emulate the range function? | 30,362,799 | 22 | 2015-05-21T00:55:40Z | 30,380,956 | 7 | 2015-05-21T17:50:32Z | [
"python"
] | I made a little generator function for character ranges:
```
>>> def crange(start, end):
... for i in range(ord(start), ord(end)+1):
... yield chr(i)
...
```
And then I can do this:
```
>>> print(*crange('a','e'))
a b c d e
```
Yay! But this doesn't work:
```
>>> crange('a','e')[::2]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'generator' object is not subscriptable
```
And this works, but is O(n), unlike `range`'s O(1):
```
>>> 'y' in crange('a','z')
True
```
That means it takes about 0.35 seconds to search for character number 109,999 out of the maximum of 110,000. `109999 in range(110000)` is, of course, fast.
At that point, my first thought was to simply subclass range. Unfortunately:
```
>>> class A(range):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: type 'range' is not an acceptable base type
```
So I guess I would have to mimic it in some way that allows me to pass characters as arguments, works like `range` internally, and produces characters. Unfortunately, I'm not sure how to proceed. I tried a `dir()`:
```
>>> print(*dir(range), sep='\n')
__class__
__contains__
__delattr__
__dir__
__doc__
__eq__
__format__
__ge__
__getattribute__
__getitem__
__gt__
__hash__
__init__
__iter__
__le__
__len__
__lt__
__ne__
__new__
__reduce__
__reduce_ex__
__repr__
__reversed__
__setattr__
__sizeof__
__str__
__subclasshook__
count
index
start
step
stop
```
which lets me see what functions are in there, but I'm not sure what they're doing, or how `range` uses them. I looked for the source for `range`, but it's in C, and I don't know where to find its Python wrapper (it does have one, right?).
Where do I go from here, and should I even go there? | To add to Martin Konecny's answer: you probably want to use an internal `range` for everything and convert between `chr` and `ord` at the boundaries.
```
class crange:
def __init__(self, *args, **kwargs):
args = [ord(arg) for arg in args]
kwargs = {key: ord(val) for key, val in kwargs.items()}
self.range = range(*args, **kwargs)
def __iter__(self):
for n in self.range:
yield chr(n)
def __contains__(self, c):
return ord(c) in self.range
def __getitem__(self, i):
if isinstance(i, slice):
ret = crange('\x00')
ret.range = self.range[i]
return ret
else:
return chr(self.range[i])
def __repr__(self):
return "crange({}, {})".format(
repr(chr(self.range.start)), repr(chr(self.range.stop)))
r = crange('a', 'f')
print(list(r))
print('b' in r)
print('f' in r)
print(r[:2])
```
In other words: if we can't subclass it we can use [object composition](https://en.wikipedia.org/wiki/Object_composition#Aggregation). |
Is it possible to use Angular with the Jinja2 template engine? | 30,362,950 | 30 | 2015-05-21T01:15:43Z | 30,362,956 | 46 | 2015-05-21T01:16:13Z | [
"python",
"angularjs",
"flask",
"jinja2"
] | I have a Flask site and I want to use the Angular JavaScript framework. Unfortunately, it seems as if the delimiters overlap.
How do I use Angular with Jinja2 if both rely on double curly braces (`{{ expr }}`)? Is it even possible? | You have some options.
1) Change the delimiter notation for Angular:
```
var app = angular.module('Application', []);
app.config(['$interpolateProvider', function($interpolateProvider) {
$interpolateProvider.startSymbol('{a');
$interpolateProvider.endSymbol('a}');
}]);
```
Whatever is chosen for the start and end symbols will act as the new delimiters. In this case, you would express a variable to Angular using `{a some_variable a}`.
This approach has the advantage of only needed to be set once and being explicit.
2) Change the delimiter notation for Jinja2.
Override `jinja_options` on the `Flask` object that you bind to your application (relevant keys: `block_start_string`, `block_end_string`, `variable_start_string`, `variable_end_string`, `comment_start_string`, `comment_end_string`):
```
jinja_options = app.jinja_options.copy()
jinja_options.update(dict(
block_start_string='<%',
block_end_string='%>',
variable_start_string='%%',
variable_end_string='%%',
comment_start_string='<#',
comment_end_string='#>'
))
app.jinja_options = jinja_options
```
As there's a higher risk of sensitive data coming un-expanded from the server side, I suggest instead changing the syntax on the front-end (i.e. Angular) on any project in which you're not the sole developer.
3) Output a [raw block](http://jinja.pocoo.org/docs/dev/templates/#escaping) in Jinja2 using `{% raw %}` or `{% verbatim %}`:
```
<ul>
{% raw %}
{% for item in seq %}
<li>{{ some_var }}</li>
{% endfor %}
{% endraw %}
</ul>
```
4) Use Jinja2 to write the curly braces in the template:
```
{{ '{{ some_var }}' }}
```
this will be output as `{{ some_var }}` in the HTML.
My preference for approach #1 is apparent, but any of the above will work. |
Open a new scratch file in PyCharm? | 30,363,062 | 7 | 2015-05-21T01:29:33Z | 30,363,097 | 7 | 2015-05-21T01:33:21Z | [
"python",
"pycharm",
"scratch-file"
] | In PyCharm 4.5, one of the new features is listed as "Temporary Python Scratch Files". How do I create these?
I've searched through the "New" menu and haven't been able to find it. When I select "New...", I am given the option to create permanent files. | You can create a scratch file by going to "Tools" -> "New Scratch File..."

This presents you with the "New Scratch" dialog

If you are a fan of keyboard shortcuts, the default for this is `Ctrl`+`Alt`+`Shift`+`Insert`.
This key combination can be modified by going to "File" -> "Settings" -> "Keymap" and searching for "Scratch"
 |
Optimizing a reed-solomon encoder (polynomial division) | 30,363,903 | 20 | 2015-05-21T03:11:39Z | 30,367,332 | 14 | 2015-05-21T07:33:18Z | [
"python",
"numpy",
"optimization",
"cython",
"pypy"
] | I am trying to optimize a Reed-Solomon encoder, which is in fact simply a polynomial division operation over Galois Fields 2^8 (which simply means that values wrap-around over 255). The code is in fact very very similar to what can be found here for Go: <http://research.swtch.com/field>
The algorithm for polynomial division used here is a [synthetic division](http://en.wikipedia.org/wiki/Synthetic_division) (also called Horner's method).
I tried everything: numpy, pypy, cython. The best performance I get is by using pypy with this simple nested loop:
```
def rsenc(msg_in, nsym, gen):
'''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
for i in xrange(len(msg_in)):
coef = msg_out[i]
# coef = gf_mul(msg_out[i], gf_inverse(gen[0])) // for general polynomial division (when polynomials are non-monic), we need to compute: coef = msg_out[i] / gen[0]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
lcoef = gf_log[coef] # precaching
for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] += msg_out[i] * gen[j]
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
```
Can a Python optimization wizard guide me to some clues on how to get a speedup? My goal is to get at least a speedup of 3x, but more would be awesome. Any approach or tool is accepted, as long as it is cross-platform (works at least on Linux and Windows).
Here is a small test script with some of the other alternatives I tried (the cython attempt is not included since it was slower than native python!):
```
import random
from operator import xor
numpy_enabled = False
try:
import numpy as np
numpy_enabled = True
except ImportError:
pass
# Exponent table for 3, a generator for GF(256)
gf_exp = bytearray([1, 3, 5, 15, 17, 51, 85, 255, 26, 46, 114, 150, 161, 248, 19,
53, 95, 225, 56, 72, 216, 115, 149, 164, 247, 2, 6, 10, 30, 34,
102, 170, 229, 52, 92, 228, 55, 89, 235, 38, 106, 190, 217, 112,
144, 171, 230, 49, 83, 245, 4, 12, 20, 60, 68, 204, 79, 209, 104,
184, 211, 110, 178, 205, 76, 212, 103, 169, 224, 59, 77, 215, 98,
166, 241, 8, 24, 40, 120, 136, 131, 158, 185, 208, 107, 189, 220,
127, 129, 152, 179, 206, 73, 219, 118, 154, 181, 196, 87, 249, 16,
48, 80, 240, 11, 29, 39, 105, 187, 214, 97, 163, 254, 25, 43, 125,
135, 146, 173, 236, 47, 113, 147, 174, 233, 32, 96, 160, 251, 22,
58, 78, 210, 109, 183, 194, 93, 231, 50, 86, 250, 21, 63, 65, 195,
94, 226, 61, 71, 201, 64, 192, 91, 237, 44, 116, 156, 191, 218,
117, 159, 186, 213, 100, 172, 239, 42, 126, 130, 157, 188, 223,
122, 142, 137, 128, 155, 182, 193, 88, 232, 35, 101, 175, 234, 37,
111, 177, 200, 67, 197, 84, 252, 31, 33, 99, 165, 244, 7, 9, 27,
45, 119, 153, 176, 203, 70, 202, 69, 207, 74, 222, 121, 139, 134,
145, 168, 227, 62, 66, 198, 81, 243, 14, 18, 54, 90, 238, 41, 123,
141, 140, 143, 138, 133, 148, 167, 242, 13, 23, 57, 75, 221, 124,
132, 151, 162, 253, 28, 36, 108, 180, 199, 82, 246] * 2 + [1])
# Logarithm table, base 3
gf_log = bytearray([0, 0, 25, 1, 50, 2, 26, 198, 75, 199, 27, 104, 51, 238, 223, # BEWARE: the first entry should be None instead of 0 because it's undefined, but for a bytearray we can't set such a value
3, 100, 4, 224, 14, 52, 141, 129, 239, 76, 113, 8, 200, 248, 105,
28, 193, 125, 194, 29, 181, 249, 185, 39, 106, 77, 228, 166, 114,
154, 201, 9, 120, 101, 47, 138, 5, 33, 15, 225, 36, 18, 240, 130,
69, 53, 147, 218, 142, 150, 143, 219, 189, 54, 208, 206, 148, 19,
92, 210, 241, 64, 70, 131, 56, 102, 221, 253, 48, 191, 6, 139, 98,
179, 37, 226, 152, 34, 136, 145, 16, 126, 110, 72, 195, 163, 182,
30, 66, 58, 107, 40, 84, 250, 133, 61, 186, 43, 121, 10, 21, 155,
159, 94, 202, 78, 212, 172, 229, 243, 115, 167, 87, 175, 88, 168,
80, 244, 234, 214, 116, 79, 174, 233, 213, 231, 230, 173, 232, 44,
215, 117, 122, 235, 22, 11, 245, 89, 203, 95, 176, 156, 169, 81,
160, 127, 12, 246, 111, 23, 196, 73, 236, 216, 67, 31, 45, 164,
118, 123, 183, 204, 187, 62, 90, 251, 96, 177, 134, 59, 82, 161,
108, 170, 85, 41, 157, 151, 178, 135, 144, 97, 190, 220, 252, 188,
149, 207, 205, 55, 63, 91, 209, 83, 57, 132, 60, 65, 162, 109, 71,
20, 42, 158, 93, 86, 242, 211, 171, 68, 17, 146, 217, 35, 32, 46,
137, 180, 124, 184, 38, 119, 153, 227, 165, 103, 74, 237, 222, 197,
49, 254, 24, 13, 99, 140, 128, 192, 247, 112, 7])
if numpy_enabled:
np_gf_exp = np.array(gf_exp)
np_gf_log = np.array(gf_log)
def gf_pow(x, power):
return gf_exp[(gf_log[x] * power) % 255]
def gf_poly_mul(p, q):
r = [0] * (len(p) + len(q) - 1)
lp = [gf_log[p[i]] for i in xrange(len(p))]
for j in range(len(q)):
lq = gf_log[q[j]]
for i in range(len(p)):
r[i + j] ^= gf_exp[lp[i] + lq]
return r
def rs_generator_poly_base3(nsize, fcr=0):
g_all = {}
g = [1]
g_all[0] = g_all[1] = g
for i in range(fcr+1, fcr+nsize+1):
g = gf_poly_mul(g, [1, gf_pow(3, i)])
g_all[nsize-i] = g
return g_all
# Fastest way with pypy
def rsenc(msg_in, nsym, gen):
'''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
for i in xrange(len(msg_in)):
coef = msg_out[i]
# coef = gf_mul(msg_out[i], gf_inverse(gen[0])) # for general polynomial division (when polynomials are non-monic), the usual way of using synthetic division is to divide the divisor g(x) with its leading coefficient (call it a). In this implementation, this means:we need to compute: coef = msg_out[i] / gen[0]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
lcoef = gf_log[coef] # precaching
for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] += msg_out[i] * gen[j]
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
# Alternative 1: the loops were completely changed, instead of fixing msg_out[i] and updating all subsequent i+j items, we now fixate msg_out[i+j] and compute it at once using all couples msg_out[i] * gen[j] - msg_out[i+1] * gen[j-1] - ... since when we fixate msg_out[i+j], all previous msg_out[k] with k < i+j are already fully computed.
def rsenc_alt1(msg_in, nsym, gen):
msg_in = bytearray(msg_in)
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
# Alternative 1
jlist = range(1, len(gen))
for k in xrange(1, len(msg_out)):
for x in xrange(max(k-len(msg_in),0), len(gen)-1):
if k-x-1 < 0: break
msg_out[k] ^= gf_exp[msg_out[k-x-1] + lgen[jlist[x]]]
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
# Alternative 2: a rewrite of alternative 1 with generators and reduce
def rsenc_alt2(msg_in, nsym, gen):
msg_in = bytearray(msg_in)
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
# Alternative 1
jlist = range(1, len(gen))
for k in xrange(1, len(msg_out)):
items_gen = ( gf_exp[msg_out[k-x-1] + lgen[jlist[x]]] if k-x-1 >= 0 else next(iter(())) for x in xrange(max(k-len(msg_in),0), len(gen)-1) )
msg_out[k] ^= reduce(xor, items_gen)
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
# Alternative with Numpy
def rsenc_numpy(msg_in, nsym, gen):
msg_in = np.array(bytearray(msg_in))
msg_out = np.pad(msg_in, (0, nsym), 'constant')
lgen = np_gf_log[gen]
for i in xrange(msg_in.size):
msg_out[i+1:i+lgen.size] ^= np_gf_exp[np.add(lgen[1:], msg_out[i])]
msg_out[:len(msg_in)] = msg_in
return msg_out
gf_mul_arr = [bytearray(256) for _ in xrange(256)]
gf_add_arr = [bytearray(256) for _ in xrange(256)]
# Precompute multiplication and addition tables
def gf_precomp_tables(gf_exp=gf_exp, gf_log=gf_log):
global gf_mul_arr, gf_add_arr
for i in xrange(256):
for j in xrange(256):
gf_mul_arr[i][j] = gf_exp[gf_log[i] + gf_log[j]]
gf_add_arr[i][j] = i ^ j
return gf_mul_arr, gf_add_arr
# Alternative with precomputation of multiplication and addition tables, inspired by zfec: https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c
def rsenc_precomp(msg_in, nsym, gen=None):
msg_in = bytearray(msg_in)
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
for i in xrange(len(msg_in)): # [i for i in xrange(len(msg_in)) if msg_in[i] != 0]
coef = msg_out[i]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
mula = gf_mul_arr[coef]
for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
#msg_out[i + j] = gf_add_arr[msg_out[i+j]][gf_mul_arr[coef][gen[j]]] # slower...
#msg_out[i + j] ^= gf_mul_arr[coef][gen[j]] # faster
msg_out[i + j] ^= mula[gen[j]] # fastest
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in # equivalent to c = mprime - b, where mprime is msg_in padded with [0]*nsym
return msg_out
def randstr(n, size):
'''Generate very fastly a random hexadecimal string. Kudos to jcdryer http://stackoverflow.com/users/131084/jcdyer'''
hexstr = '%0'+str(size)+'x'
for _ in xrange(n):
yield hexstr % random.randrange(16**size)
# Simple test case
if __name__ == "__main__":
# Setup functions to test
funcs = [rsenc, rsenc_precomp, rsenc_alt1, rsenc_alt2]
if numpy_enabled: funcs.append(rsenc_numpy)
gf_precomp_tables()
# Setup RS vars
n = 255
k = 213
import time
# Init the generator polynomial
g = rs_generator_poly_base3(n)
# Init the ground truth
mes = 'hello world'
mesecc_correct = rsenc(mes, n-11, g[k])
# Test the functions
for func in funcs:
# Sanity check
if func(mes, n-11, g[k]) != mesecc_correct: print func.__name__, ": output is incorrect!"
# Time the function
total_time = 0
for m in randstr(1000, n):
start = time.clock()
func(m, n-k, g[k])
total_time += time.clock() - start
print func.__name__, ": total time elapsed %f seconds." % total_time
```
And here is the result on my machine:
```
With PyPy:
rsenc : total time elapsed 0.108183 seconds.
rsenc_alt1 : output is incorrect!
rsenc_alt1 : total time elapsed 0.164084 seconds.
rsenc_alt2 : output is incorrect!
rsenc_alt2 : total time elapsed 0.557697 seconds.
Without PyPy:
rsenc : total time elapsed 3.518857 seconds.
rsenc_alt1 : output is incorrect!
rsenc_alt1 : total time elapsed 5.630897 seconds.
rsenc_alt2 : output is incorrect!
rsenc_alt2 : total time elapsed 6.100434 seconds.
rsenc_numpy : output is incorrect!
rsenc_numpy : total time elapsed 1.631373 seconds
```
(Note: the alternatives should be correct, some index must be a bit off, but since they are slower anyway I did not try to fix them)
/UPDATE and goal of the bounty: I found a very interesting optimization trick that promises to speed up computations a lot: to [precompute the multiplication table](https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c). I updated the code above with the new function rsenc\_precomp(). However, there's no gain at all in my implementation, it's even a bit slower:
```
rsenc : total time elapsed 0.107170 seconds.
rsenc_precomp : total time elapsed 0.108788 seconds.
```
**How can it be that array lookups cost more than operations like additions or xor? Why does it work in ZFEC and not in Python?**
I will attribute the bounty to whoever can show me how to make this multiplication/addition lookup-tables optimization work (faster than the xor and addition operations) or who can explain to me with references or analysis why this optimization cannot work here (using Python/PyPy/Cython/Numpy etc.. I tried them all). | The following is 3x faster than pypy on my machine (0.04s vs 0.15s). Using Cython:
```
ctypedef unsigned char uint8_t # does not work with Microsoft's C Compiler: from libc.stdint cimport uint8_t
cimport cpython.array as array
cdef uint8_t[::1] gf_exp = bytearray([1, 3, 5, 15, 17, 51, 85, 255, 26, 46, 114, 150, 161, 248, 19,
lots of numbers omitted for space reasons
...])
cdef uint8_t[::1] gf_log = bytearray([0, 0, 25, 1, 50, 2, 26, 198, 75, 199, 27, 104,
more numbers omitted for space reasons
...])
import cython
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.initializedcheck(False)
def rsenc(msg_in_r, nsym, gen_t):
'''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''
cdef uint8_t[::1] msg_in = bytearray(msg_in_r) # have to copy, unfortunately - can't make a memory view from a read only object
cdef int[::1] gen = array.array('i',gen_t) # convert list to array
cdef uint8_t[::1] msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
cdef int j
cdef uint8_t[::1] lgen = bytearray(gen.shape[0])
for j in xrange(gen.shape[0]):
lgen[j] = gf_log[gen[j]]
cdef uint8_t coef,lcoef
cdef int i
for i in xrange(msg_in.shape[0]):
coef = msg_out[i]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
lcoef = gf_log[coef] # precaching
for j in xrange(1, gen.shape[0]): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] -= msg_out[i] * gen[j]
# Recopy the original message bytes
msg_out[:msg_in.shape[0]] = msg_in
return msg_out
```
It is just your fastest version with static types (and checking the html from `cython -a` until the loops aren't highlighted in yellow).
A few brief notes:
* Cython prefers `x.shape[0]` to `len(x)`
* Defining the memoryviews as `[::1]` promises they are contiguous in memory, which helps
* `initializedcheck(False)` is necessary to avoid lots of existence checks on the globally defined `gf_exp` and `gf_log`. (You might find you can speed up your basic Python/PyPy code by creating a local variable reference for these and using that instead)
* I had to copy a couple of the input arguments. Cython can't make a memoryview from a read-only object (in this case `msg_in`, a string; I could probably have just made it a `char*` though). Also `gen` (a list) needs to go into something with fast element access.
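To make the last note about local variable references concrete, here is a minimal Python 3 before/after sketch (the table holds stand-in identity values, since only the name-lookup pattern matters): binding the module-level table to a local name via a default argument skips the repeated global lookup in the hot loop.

```python
import timeit

gf_exp = bytearray(range(256)) * 2  # stand-in values; only the lookup pattern matters

def translate_global(data):
    # resolves the global name gf_exp on every iteration
    return bytearray(gf_exp[b] for b in data)

def translate_local(data, gf_exp=gf_exp):
    # gf_exp is bound once, at definition time, to a fast local
    return bytearray(gf_exp[b] for b in data)

data = bytearray(range(256)) * 4
print(timeit.timeit(lambda: translate_global(data), number=1000))
print(timeit.timeit(lambda: translate_local(data), number=1000))
```

The gain is modest on CPython and usually smaller on PyPy (whose JIT often hoists the lookup itself), but the pattern costs nothing to apply.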
Other than that it's all fairly straightforward (I haven't tried any further variations, having got it faster). I'm really quite impressed at how well PyPy does. |
Optimizing a reed-solomon encoder (polynomial division) | 30,363,903 | 20 | 2015-05-21T03:11:39Z | 30,367,963 | 7 | 2015-05-21T08:05:37Z | [
"python",
"numpy",
"optimization",
"cython",
"pypy"
] | *(question body identical to the first occurrence above)* | Alternatively, if you know C, I would recommend to rewrite this Python function in plain C and call it (say with CFFI). At least you know you reach the top performance in the inner loops of your functions without needing to be aware of either PyPy or Cython tricks.
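As a sketch of what that could look like (an out-of-line CFFI build script; the module name `_rs_cffi`, the function `rs_encode`, and its exact signature are my own choices, not an established API, and `gf_exp` is assumed to be the doubled 512-entry table from the question):

```python
import cffi

ffi = cffi.FFI()

# Declaration of the C entry point we want to call from Python.
ffi.cdef("""
    void rs_encode(const uint8_t *msg_in, size_t msg_len,
                   const uint8_t *gen, size_t gen_len,
                   const uint8_t *gf_exp, const uint8_t *gf_log,
                   uint8_t *msg_out);
""")

# The inner synthetic-division loop rewritten in plain C. msg_out must be
# caller-allocated with msg_len + gen_len - 1 bytes; gf_exp is the 512-entry
# doubled table so gf_log[a] + gf_log[b] never overflows it.
ffi.set_source("_rs_cffi", r"""
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    void rs_encode(const uint8_t *msg_in, size_t msg_len,
                   const uint8_t *gen, size_t gen_len,
                   const uint8_t *gf_exp, const uint8_t *gf_log,
                   uint8_t *msg_out)
    {
        memcpy(msg_out, msg_in, msg_len);
        memset(msg_out + msg_len, 0, gen_len - 1);
        for (size_t i = 0; i < msg_len; i++) {
            uint8_t coef = msg_out[i];
            if (coef != 0) {
                uint8_t lcoef = gf_log[coef];
                for (size_t j = 1; j < gen_len; j++)
                    msg_out[i + j] ^= gf_exp[lcoef + gf_log[gen[j]]];
            }
        }
        memcpy(msg_out, msg_in, msg_len);  /* recopy the original message bytes */
    }
""")

# To build the extension (requires a C compiler):
#   ffi.compile(verbose=True)
```

After `ffi.compile()` produces the `_rs_cffi` module, `lib.rs_encode` can be called with `ffi`-allocated buffers; see the linked overview page for the calling conventions.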
See: <http://cffi.readthedocs.org/en/latest/overview.html#performance> |
Optimizing a reed-solomon encoder (polynomial division) | 30,363,903 | 20 | 2015-05-21T03:11:39Z | 30,460,874 | 8 | 2015-05-26T14:04:26Z | [
"python",
"numpy",
"optimization",
"cython",
"pypy"
] | *(question body identical to the first occurrence above)* | Building on DavidW's answer, here's the implementation I am currently using, which is about 20% faster by using nogil and parallel computation:
```
from cython.parallel import parallel, prange
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.initializedcheck(False)
cdef rsenc_cython(msg_in_r, nsym, gen_t):
'''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''
cdef uint8_t[::1] msg_in = bytearray(msg_in_r) # have to copy, unfortunately - can't make a memory view from a read only object
#cdef int[::1] gen = array.array('i',gen_t) # convert list to array
cdef uint8_t[::1] gen = gen_t
cdef uint8_t[::1] msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
cdef int i, j
cdef uint8_t[::1] lgen = bytearray(gen.shape[0])
for j in xrange(gen.shape[0]):
lgen[j] = gf_log_c[gen[j]]
cdef uint8_t coef,lcoef
with nogil:
for i in xrange(msg_in.shape[0]):
coef = msg_out[i]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
lcoef = gf_log_c[coef] # precaching
for j in prange(1, gen.shape[0]): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
msg_out[i + j] ^= gf_exp_c[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] -= msg_out[i] * gen[j]
# Recopy the original message bytes
msg_out[:msg_in.shape[0]] = msg_in
return msg_out
```
I would still like it to be faster (in a real implementation, data is encoded at about 6.4 MB/s with n = 255, n being the size of the message + codeword).
The main lead to a faster implementation that I have found is the LUT (lookup table) approach, precomputing the multiplication and addition arrays. However, in my Python and Cython implementations, the LUT approach is slower than computing the xor and addition operations directly.
There are other approaches to implement a faster RS encoder, but I have neither the ability nor the time to try them out, so I will leave them as references for other interested readers:
> * "Fast software implementation of finite field operations", Cheng Huang and Lihao Xu, Washington University in St. Louis, Tech. Rep (2003). [link](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.6595&rep=rep1&type=pdf) and a correct code implementation [here](http://catid.mechafetus.com/news/news.php?view=295).
> * Luo, Jianqiang, et al. "Efficient software implementations of large finite fields GF(2^n) for secure storage applications." ACM Transactions on Storage (TOS) 8.1 (2012): 2.
> * "A Performance Evaluation and Examination of Open-Source Erasure Coding Libraries for Storage.", Plank, J. S. and Luo, J. and Schuman, C. D. and Xu, L., and Wilcox-O'Hearn, Z, FAST. Vol. 9. 2009. [link](https://www.usenix.org/legacy/events/fast09/tech/full_papers/plank/plank_html/)
> See also the non-extended version: "A Performance Comparison of Open-Source Erasure Coding Libraries for Storage Applications", Plank and Schuman.
> * Sourcecode of the ZFEC library, with multiplication LUT optimization [link](https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c).
> * "Optimized Arithmetic for Reed-Solomon Encoders", Christof Paar (1997, June). In IEEE International Symposium on Information Theory (pp. 250-250). INSTITUTE OF ELECTRICAL ENGINEERS INC (IEEE). [link](https://www.emsec.rub.de/media/crypto/veroeffentlichungen/2011/01/19/cnst.ps)
> * "A Fast Algorithm for Encoding the (255,233) Reed-Solomon Code Over GF(2^8)", R.L. Miller and T.K. Truong, I.S. Reed. [link](http://ipnpr.jpl.nasa.gov/progress_report2/42-56/56P.PDF)
> * "Optimizing Galois Field arithmetic for diverse processor architectures and applications", Greenan, Kevin and M., Ethan and L. Miller and Thomas JE Schwarz, Modeling, Analysis and Simulation of Computers and Telecommunication Systems, 2008. MASCOTS 2008. IEEE International Symposium on. IEEE, 2008. [link](http://www.ssrc.ucsc.edu/Papers/greenan-mascots08.pdf)
> * Anvin, H. Peter. "The mathematics of RAID-6." (2007). [link](https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf) and [link](https://www.kernel.org/doc/Documentation/crc32.txt)
> * [Wirehair library](https://github.com/catid/wirehair/), one of the only few implementations of Cauchy Reed-Solomon, which is said to be very fast.
> * "A logarithmic Boolean time algorithm for parallel polynomial division", Bini, D. and Pan, V. Y. (1987), Information processing letters, 24(4), 233-237. See also Bini, D., and V. Pan. "Fast parallel algorithms for polynomial division over an arbitrary field of constants." Computers & Mathematics with Applications 12.11 (1986): 1105-1118. [link](https://www.researchgate.net/profile/Dario_Bini/publication/250727103_Fast_parallel_algorithms_for_polynomial_division_over_an_arbitrary_field_of_constants/links/53f1ad9d0cf23733e815c755.pdf)
> * Kung, H.T. "Fast evaluation and interpolation." (1973). [link](http://www.eecs.harvard.edu/~htk/publication/1973-cmu-cs-technical-report-kung.pdf)
> * Cao, Zhengjun, and Hanyue Cao. "Note on fast division algorithm for polynomials using Newton iteration." arXiv preprint arXiv:1112.4014 (2011). [link](http://arxiv.org/pdf/1112.4014)
> * "An Introduction to Galois Fields and Reed-Solomon Coding", James Westall and James Martin, 2010. [link](http://people.cs.clemson.edu/~westall/851/rs-code.pdf)
> * Mamidi, Suman, et al. "Instruction set extensions for Reed-Solomon encoding and decoding." Application-Specific Systems, Architecture Processors, 2005. ASAP 2005. 16th IEEE International Conference on. IEEE, 2005. [link](http://glossner.org/john/papers/2005_07_asap_reed_solomon.pdf)
> * Dumas, Jean-Guillaume, Laurent Fousse, and Bruno Salvy. "Simultaneous modular reduction and Kronecker substitution for small finite fields." Journal of Symbolic Computation 46.7 (2011): 823-840.
> * Greenan, Kevin M., Ethan L. Miller, and Thomas Schwarz. Analysis and construction of galois fields for efficient storage reliability. Vol. 9. Technical Report UCSC-SSRC-07, 2007. [link](http://www.ssrc.ucsc.edu/Papers/ssrctr-07-09.pdf)
However, I think the best lead is to use an efficient **polynomial modular reduction** instead of polynomial division:
> * "Modular Reduction in GF(2^n) without Pre-computational Phase". Kneževic, M., et al. Arithmetic of Finite Fields. Springer Berlin Heidelberg, 2008. 77-87.
> * "On computation of polynomial modular reduction". Wu, Huapeng. Technical report, Univ. of Waterloo, The Centre for applied cryptographic research, 2000.
> * "A fast software implementation for arithmetic operations in GF(2^n)". De Win, E., Bosselaers, A., Vandenberghe, S., De Gersem, P., & Vandewalle, J. (1996, January). In Advances in Cryptology - ASIACRYPT '96 (pp. 65-76). Springer Berlin Heidelberg. [link](https://www.cosic.esat.kuleuven.be/publications/article-300.pdf)
> * [Barrett reduction](http://en.wikipedia.org/wiki/Barrett_reduction)
/EDIT: in fact it seems "On computation of polynomial modular reduction" just uses the same approach as I did with the variants rsenc\_alt1() and rsenc\_alt2() (the main idea being that we precompute the couples of coefficients we will need, and reduce them all at once), and unluckily it's not faster (it's actually slower because the precomputation cannot be done once for all since it depends on the message input).
/EDIT: I found a library with really interesting optimizations, lots that are not even found in any academic papers (which the author stated he has read btw), and which is probably the fastest software implementation of Reed-Solomon: the [wirehair project](https://github.com/catid/wirehair/blob/master/wirehair-mobile/wirehair_codec_8.cpp) and the [related blog](http://catid.mechafetus.com/news/news.php) for more details. Worth of noting, the author also made a [Cauchy-Reed-Solomon codec called longhair](https://github.com/catid/longhair) with similar optimization tricks.
/FINAL EDIT: it seems the fastest implementation available is based on this paper:
> Plank, James S., Kevin M. Greenan, and Ethan L. Miller. "Screaming
> fast Galois field arithmetic using intel SIMD instructions." FAST.
> 2013. [link](http://www.kaymgee.com/Kevin_Greenan/Publications_files/plank-fast2013.pdf)
The [implementation, in pure Go, is available here and is authored by Klaus Post](https://github.com/klauspost/reedsolomon). It's the fastest implementation I have ever read about, both in single thread and parallelized (it supports both). It claims over 1GB/s in single thread and over 4 GB/s with 8 threads. However, it relies on optimized SIMD instructions and various low-level optimizations on matrix operations (because here the RS codec is matrix oriented instead of the polynomial approach I have in my question).
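As background for the LUT-based Galois-field arithmetic these libraries rely on, here is a minimal pure-Python sketch of GF(2^8) multiplication via log/antilog tables. It is an illustration under assumptions, not code taken from any of the cited libraries: it uses the AES/Rijndael primitive polynomial 0x11b with generator 0x03, whereas many RS codecs use 0x11d with generator 0x02 instead.

```python
# Build log/antilog tables for GF(2^8) over the primitive polynomial 0x11b
# (assumption: AES field; swap in 0x11d / generator 0x02 for typical RS codecs).
exp = [0] * 512   # antilog table, doubled so lookups skip the "mod 255"
log = [0] * 256
x = 1
for i in range(255):
    exp[i] = x
    log[x] = i
    x ^= x << 1           # multiply by the generator 0x03
    if x & 0x100:
        x ^= 0x11b        # reduce modulo the primitive polynomial
for i in range(255, 512):
    exp[i] = exp[i - 255]

def gf_mul(a, b):
    """Multiply two GF(2^8) elements via the precomputed tables."""
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]
```

This is exactly the "multiplication LUT optimization" trick mentioned for ZFEC above: two table lookups and one addition replace a carry-less multiply plus polynomial reduction.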
So, if you are an interested reader and want to find the fastest Reed-Solomon codec available, that's the one. |
Alternative to `any` that returns the last evaluated object? | 30,364,222 | 12 | 2015-05-21T03:47:39Z | 30,364,425 | 12 | 2015-05-21T04:09:50Z | [
"python"
] | I just wrote a bit of code where I wanted to do:
```
def foo(container):
return any((some_obj.attr <= 0 for some_obj in container))
```
where `foo` would return the first `some_obj` where `some_obj.attr` is zero or less. The alternative, I suppose, would be
```
def foo(container):
return next((some_obj for some_obj in container if some_obj.attr <= 0), False)
```
but that feels very hacky.
I ended up writing it out, but I don't like how deeply nested it got.
```
def foo(container):
for some_obj in container:
if some_obj.attr <= 0:
return some_obj
return False
```
To clarify: `container` in this case is likely no more than 6 objects (and often 2 objects), though a general case is more useful. I'm also trying to optimize for ease of reading, not for performance.
Is there some better construct than this? | The docs for [`any`](https://docs.python.org/3/library/functions.html#any) explain that it's equivalent to:
```
def any(iterable):
for element in iterable:
if element:
return True
return False
```
So, I don't think your code is too deeply nested if it has exactly the same structure as code that's used to illustrate the functionality of `any` itself.
Still, I would probably wrap this up in a function:
```
def first(iterable, predicate):
for element in iterable:
if predicate(element):
return element
return False
```
So now you can do this:
```
def foo(container):
return first(container, lambda obj: obj.attr <= 0)
```
Or, alternatively, just use a generator expression, and pass it to the two-argument [`next`](https://docs.python.org/3/library/functions.html#next), as you're already doing:
```
def foo(container):
return next((obj for obj in container if obj.attr <= 0), False)
```
That has the same "depth", it's just all horizontal instead of vertical.
Or, maybe, pull out the genexpr and name it:
```
def foo(container):
nonpositives = (obj for obj in container if obj.attr <= 0)
return next(nonpositives, False)
```
How would you choose between them? I think if the predicate is too complicated to read as a `lambda` but not complicated enough to be worth abstracting out into an out-of-line function, I'd go with the genexpr. Otherwise, the wrapper function. But it's really a matter of taste. |
Remove spurious small islands of noise in an image - Python OpenCV | 30,369,031 | 12 | 2015-05-21T08:59:11Z | 30,380,543 | 20 | 2015-05-21T17:27:31Z | [
"python",
"image",
"opencv",
"image-processing",
"filtering"
] | I am trying to get rid of background noise from some of my images. This is the unfiltered image.

To filter, I used this code to generate a mask of what should remain in the image:
```
element = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
mask = cv2.erode(mask, element, iterations = 1)
mask = cv2.dilate(mask, element, iterations = 1)
mask = cv2.erode(mask, element)
```
With this code and when I mask out the unwanted pixels from the original image, what I get is:

As you can see, all the tiny dots in the middle area are gone, but a lot of those coming from the denser area are also gone. To reduce the filtering, I tried changing the second parameter of `getStructuringElement()` to be (1,1) but doing this gives me the first image as if nothing has been filtered.
Is there any way where I can apply a filter level that is between these 2 levels?
In addition, can anyone explain to me what exactly does `getStructuringElement()` do? What is a "structuring element"? What does it do and how does its size (the second parameter) affect the level of filtering? | A lot of your questions stem from the fact that you're not sure how morphological image processing works, but we can put your doubts to rest. You can interpret the structuring element as the "base shape" to compare to. 1 in the structuring element corresponds to a pixel that you want to look at in this shape and 0 is one you want to ignore. There are different shapes, such as rectangular (as you have figured out with `MORPH_RECT`), ellipse, circular, etc.
As such, `cv2.getStructuringElement` returns a structuring element for you. The first parameter specifies the type you want and the second parameter specifies the size you want. In your case, you want a 2 x 2 "rectangle"... which is really a square, but that's fine.
In a more bastardized sense, you use the structuring element and scan from left to right and top to bottom of your image and you grab pixel neighbourhoods. Each pixel neighbourhood has its centre exactly at the pixel of interest that you're looking at. The size of each pixel neighbourhood is the same size as the structuring element.
# Erosion
For an erosion, you examine all of the pixels in a pixel neighbourhood that are touching the structuring element. If **every non-zero pixel** is touching a structuring element pixel that is 1, then the output pixel in the corresponding centre position with respect to the input is 1. If there is at least one non-zero pixel that **does not** touch a structuring pixel that is 1, then the output is 0.
In terms of the rectangular structuring element, you need to make sure that every pixel in the structuring element is touching a non-zero pixel in your image for a pixel neighbourhood. If it isn't, then the output is 0, else 1. This effectively eliminates small spurious areas of noise and also decreases the area of objects slightly.
The size factors in where the larger the rectangle, the more shrinking is performed. The size of the structuring element is a baseline where any objects that are smaller than this rectangular structuring element, you can consider them as being filtered and not appearing in the output. Basically, choosing a 1 x 1 rectangular structuring element is the same as the input image itself because that structuring element fits all pixels inside it as the pixel is the smallest representation of information possible in an image.
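To make the mechanics concrete, here is a minimal pure-Python sketch of binary erosion with a k x k rectangular element. This is a toy illustration, not how OpenCV implements it: OpenCV anchors the element at its centre and handles image borders, while this sketch anchors at the top-left for brevity.

```python
def erode(img, k=2):
    """Binary erosion of a 2D list of 0/1 with a k x k all-ones element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            # output is 1 only if every pixel under the element is non-zero
            if all(img[y + dy][x + dx] for dy in range(k) for dx in range(k)):
                out[y][x] = 1
    return out

img = [[1, 1, 1, 0],
       [1, 1, 1, 0],
       [1, 1, 1, 0],
       [0, 0, 0, 0]]
eroded = erode(img)   # the 3x3 block of ones shrinks to a 2x2 block
```

Note how the 3 x 3 island survives (shrunken) while anything smaller than the 2 x 2 element would vanish entirely - that is the filtering effect described above.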
# Dilation
Dilation is the opposite of erosion. If there is at least one non-zero pixel that touches a pixel in the structuring element that is 1, then the output is 1, else the output is 0. You can think of this as slightly enlarging object areas and making small islands bigger.
The implications with size here is that the larger the structuring element, the larger the areas of the objects will be and the larger the isolated islands become.
---
What you're doing is an erosion first followed by a dilation. This is what is known as an **opening** operation. The purpose of this operation is to remove small islands of noise while (trying to) maintain the areas of the larger objects in your image. The erosion removes those islands while the dilation grows back the larger objects to their original sizes.
You follow this with an erosion again for some reason, which I can't quite understand, but that's ok.
---
What I would personally do is perform a **closing** operation first which is a dilation followed by an erosion. Closing helps group areas that are close together into a single object. As such, you see that there are some larger areas that are close to each other that should probably be joined before we do anything else. Because of that, I would do a closing first, then do an **opening** after so that we can remove the isolated noisy areas. Take note that I'm going to make the closing structuring element size **larger** as I want to make sure I get nearby pixels and the opening structuring element size **smaller** so that I don't mistakenly remove any of the larger areas.
Once you do this, I would mask out any extra information with the original image so that you leave the larger areas intact while the small islands go away.
Instead of chaining an erosion followed by a dilation, or a dilation followed by an erosion, use [`cv2.morphologyEx`](http://docs.opencv.org/modules/imgproc/doc/filtering.html#morphologyex), where you can specify `MORPH_OPEN` and `MORPH_CLOSE` as the flags.
As such, I would personally do this, assuming your image is called `spots.png`:
```
import cv2
import numpy as np
img = cv2.imread('spots.png')
img_bw = 255*(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) > 5).astype('uint8')
se1 = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
se2 = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
mask = cv2.morphologyEx(img_bw, cv2.MORPH_CLOSE, se1)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se2)
mask = np.dstack([mask, mask, mask]) / 255
out = img * mask
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('output.png', out)
```
The above code is pretty self-explanatory. First, I read in the image and then I convert the image to grayscale and threshold with an intensity of 5 to create a mask of what is considered object pixels. This is a rather clean image and so anything larger than 5 seems to have worked. For the morphology routines, I need to convert the image to `uint8` and scale the mask to 255. Next, we create two structuring elements - one that is a 5 x 5 rectangle for the closing operation and another that is 2 x 2 for the opening operation. I run `cv2.morphologyEx` twice for the opening and closing operations respectively on the thresholded image.
Once I do that, I stack the mask so that it becomes a 3D matrix and divide by 255 so that it becomes a mask of `[0,1]` and then we multiply this mask with the original image so that we can grab the original pixels of the image back and maintaining what is considered a true object from the mask output.
The rest is just for illustration. I show the image in a window, and I also save the image to a file called `output.png`, and its purpose is to show you what the image looks like in this post.
I get this:

Bear in mind that it isn't perfect, but it's much better than how you had it before. You'll have to play around with the structuring element sizes to get something that you consider as a good output, but this is certainly enough to get you started. Good luck! |
Get previous object without len(list) | 30,370,935 | 8 | 2015-05-21T10:20:09Z | 30,370,987 | 12 | 2015-05-21T10:22:31Z | [
"python"
] | How to get previous or next object with this format of code?
```
alignment = [[a,b,c],[2,3,4],[q,w,e]]
for obj in alignment:
some code here to get previous object
```
I know how to do that with:
```
for i in range(0,len(alignment)):
alignment[i-1][objIndex]
``` | You can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate) :
```
alignment = [[a,b,c],[2,3,4],[q,w,e]]
for index,obj in enumerate(alignment):
    alignment[index-1] # previous (note: wraps around to the last item when index == 0)
    alignment[index+1] # next (note: raises IndexError on the last item)
```
*Note* that as a more efficient way for accessing to `next` items and refuse of multiple indexing you can use [`iter()`](https://docs.python.org/2/library/functions.html#iter) function to create an iterator object from your list (from second element to end) and access to next elements in each iteration with `next` :
```
>>> l=[1,2,3,4]
>>> it=iter(l[1:])
>>> for i in l :
... print i,next(it,None)
...
1 2
2 3
3 4
4 None
```
*Note* that if you don't pass the `None` as the second argument to `next()` function it will raise a `StopIteration` error.You can also handle it with a `try-except` statement.
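Combining the ideas above, here is a hedged sketch that walks a list once while tracking both the previous and the next item, using the `next()` default so the ends are handled without an `IndexError`:

```python
l = [1, 2, 3, 4]
nexts = iter(l[1:])    # iterator over the "next" elements
prev = None
triples = []
for cur in l:
    nxt = next(nexts, None)   # None once we run past the end
    triples.append((prev, cur, nxt))
    prev = cur
# triples == [(None, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, None)]
```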
Also for short lists you can use `zip` function and for long lists [`itertools.izip()` function](https://docs.python.org/2/library/itertools.html#itertools.izip) (`zip` in python 3):
```
>>> for i,j in zip(l,l[1:]):
... print i,j
...
1 2
2 3
3 4
```
`zip(l,l[1:])` will give you the following pairs of items :
```
[(1, 2), (2, 3), (3, 4)]
```
and in the loop you can use `i` as the current item then `j` will be the next item or use `j` as the current then `i` will be the previous!:) |
Conditionally enumerating items in python | 30,374,941 | 6 | 2015-05-21T13:18:46Z | 30,375,011 | 13 | 2015-05-21T13:21:43Z | [
"python",
"iterable"
] | I'd like to enumerate those items in an iterable that satisfy a certain condition. I've tried something like
```
[(i,j) for i,j in enumerate(range(10)) if (3 < j) and (j < 8)]
```
(that tries to enumerate the numbers between 4 and 7 just for the sake of an example). From this, I get the result
```
[(4, 4), (5, 5), (6, 6), (7, 7)]
```
What I'd like to get is
```
[(0, 4), (1, 5), (2, 6), (3, 7)]
```
Is there a pythonic way to achieve the desired result?
Note that in the actual problem I'm working on, I don't know in advance how many items satisfy the condition. | Do the enumerate last so the indexes start from 0.
```
enumerate(j for j in range(10) if (3 < j) and (j < 8))
```
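The result above is an `enumerate` object; materializing it shows the renumbered pairs the question asks for:

```python
# renumber only the items that satisfy the condition (chained comparison used)
result = list(enumerate(j for j in range(10) if 3 < j < 8))
# result == [(0, 4), (1, 5), (2, 6), (3, 7)]
```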
If you need the list rather than enumerate object, just wrap this all in `list()` |
Calculate curl of a vector field in Python and plot it with matplotlib | 30,378,676 | 6 | 2015-05-21T15:52:42Z | 31,121,616 | 7 | 2015-06-29T17:11:00Z | [
"python",
"matplotlib",
"sympy"
] | I need to calculate the curl of a vector field and plot it with matplotlib. A simple example of what I am looking for could be be put like that:
How can I calculate and plot the curl of the vector field in the [quiver3d\_demo.py](http://matplotlib.org/examples/mplot3d/quiver3d_demo.html) in the matplotlib gallery?
Please let me know if I can be more specific than that. | You can use [`sympy.curl()`](http://docs.sympy.org/dev/modules/physics/vector/api/fieldfunctions.html#curl) to calculate the curl of a vector field.
**Example**:
Suppose you have:
**F** = (y2z,-xy,z2) = y2z**x** - xy**y** + z2**z**, then `y` would be `R[1]`, `x` is `R[0]` and `z` is `R[2]` while the vectors of the 3 axes would be `R.x`, `R.y`, `R.z` and the code **to calculate the vector field curl** is:
```
from sympy.physics.vector import ReferenceFrame
from sympy.physics.vector import curl
R = ReferenceFrame('R')
F = R[1]**2 * R[2] * R.x - R[0]*R[1] * R.y + R[2]**2 * R.z
G = curl(F, R)
```
In that case G would be equal to `R_y**2*R.y + (-2*R_y*R_z - R_y)*R.z` or, in other words,
G = (0, y², -2yz - y).
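As a sanity check, the symbolic result can be verified numerically with central finite differences. This is a dependency-free sketch (the field `F` and the test point are arbitrary choices, not part of the sympy workflow):

```python
def F(x, y, z):
    # the example field F = (y^2 z, -x y, z^2)
    return (y * y * z, -x * y, z * z)

def partial(f, comp, var, p, h=1e-5):
    """Central-difference d f[comp] / d p[var] at point p."""
    q = list(p)
    q[var] += h
    hi = f(*q)[comp]
    q[var] -= 2 * h
    lo = f(*q)[comp]
    return (hi - lo) / (2 * h)

def curl_numeric(p):
    return (partial(F, 2, 1, p) - partial(F, 1, 2, p),   # dFz/dy - dFy/dz
            partial(F, 0, 2, p) - partial(F, 2, 0, p),   # dFx/dz - dFz/dx
            partial(F, 1, 0, p) - partial(F, 0, 1, p))   # dFy/dx - dFx/dy

c = curl_numeric((1.0, 2.0, 3.0))
# at (1, 2, 3), (0, y^2, -2yz - y) evaluates to (0, 4, -14)
```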
**To plot it** you need to convert the above result into 3 separate functions; u,v,w.
(example below adapted from the [matplotlib example on this link](http://matplotlib.org/examples/mplot3d/quiver3d_demo.html)):
```
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
x, y, z = np.meshgrid(np.arange(-0.8, 1, 0.2),
np.arange(-0.8, 1, 0.2),
np.arange(-0.8, 1, 0.8))
u = 0
v = y**2
w = -2*y*z - y
ax.quiver(x, y, z, u, v, w, length=0.1)
plt.show()
```
And the final result is this:
 |
Does Python really create all bound method for every new instance? | 30,378,912 | 5 | 2015-05-21T16:02:44Z | 30,379,001 | 14 | 2015-05-21T16:07:31Z | [
"java",
"python",
"oop",
"methods"
] | I am reading about classes in Python (3.4) and from what I understand it seems that every new object has its own bound methods instances.
```
class A:
def __init__(self, name):
self.name = name
def foo(self):
print(self.name)
a = A('One')
b = A('Two')
print(a.foo == b.foo)
```
The output of this is `False`.
This seems to me as a waste of memory. I thought that internally `a.foo` and `b.foo` would point somehow internally to one function in memory: `A.foo` where `self` as the class instance will be passed.
I assume this maybe cannot be implemented easily in the language.
Does each new instance contain also new instances of its bound methods?
If so, doesn't this hurt performance or make a case for creating new objects more cautiously than in other languages where methods are "shared" among objects, like in `Java`? | Methods are bound *on demand*, each time you access one.
Accessing the name of a function invokes the [descriptor protocol](https://docs.python.org/3/howto/descriptor.html), which on function objects returns a bound method.
A bound method is a thin wrapper around a function object; it stores a reference to the original function and to the instance. When calling a method object, it in turn passes the call to the function, with instance inserted as a first argument.
Methods are not created when the instance is created, so there is no extra memory required a-priori.
You can re-create the steps manually:
```
>>> class A:
... def __init__(self, name):
... self.name = name
... def foo(self):
... print(self.name)
...
>>> a = A('One')
>>> a.foo
<bound method A.foo of <__main__.A object at 0x100a27978>>
>>> a.foo.__self__
<__main__.A object at 0x100a27978>
>>> a.foo.__func__
<function A.foo at 0x100a22598>
>>> A.__dict__['foo']
<function A.foo at 0x100a22598>
>>> A.__dict__['foo'].__get__(a, A)
<bound method A.foo of <__main__.A object at 0x100a27978>>
>>> A.__dict__['foo'].__get__(a, A)()
One
```
It is only the method object that is recreated each time; the underlying function remains stable:
```
>>> a.foo is a.foo
False
>>> b = A('Two')
>>> b.foo is a.foo
False
>>> b.foo.__func__ is a.foo.__func__
True
```
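One practical consequence (a hedged sketch, not from the question): because a fresh wrapper object is created on every attribute lookup, hot loops sometimes cache the bound method in a local variable once, which skips the repeated binding entirely:

```python
class A:
    def __init__(self, name):
        self.name = name
    def foo(self):
        return self.name

a, b = A('One'), A('Two')

assert a.foo is not a.foo                 # a new wrapper per lookup...
assert a.foo.__func__ is b.foo.__func__   # ...around one shared function object

foo = a.foo                               # bind once, reuse many times
results = [foo() for _ in range(3)]       # no re-binding inside the loop
```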
This architecture also makes [`classmethod`](https://docs.python.org/3/library/functions.html#classmethod), [`staticmethod`](https://docs.python.org/3/library/functions.html#staticmethod), and [`property`](https://docs.python.org/3/library/functions.html#property) objects work. You can create your own descriptors, creating a whole host of interesting binding behaviours. |
Python 2.7 concurrent.futures.ThreadPoolExecutor does not parallelize | 30,378,971 | 6 | 2015-05-21T16:05:19Z | 30,379,186 | 8 | 2015-05-21T16:16:40Z | [
"python",
"linux",
"ubuntu",
"parallel-processing"
] | I am running the following code on a Intel i3-based machine with 4 virtual cores (2 hyperthreads/physical core, 64bit) and Ubuntu 14.04 installed:
```
n = multiprocessing.cpu_count()
executor = ThreadPoolExecutor(n)
tuple_mapper = lambda i: (i, func(i))
results = dict(executor.map(tuple_mapper, range(10)))
```
The code does not seem to be executed in a parallel fashion, since the CPU is utilized only 25% constantly. On the utilization graph only one of the 4 virtual cores is used 100% at a time. The utilized cores are alternating every 10 seconds or so.
But the parallelization works well on a server machine with the same software setting. I don't know the exact number of cores nor the exact processor type, but I know for sure that it has several cores and the utilization is at 100% and that the calculations have a rapid speedup (10 times faster after using parallelization, made some experiments with it).
I would expect, that parallelization would work on my machine too, not only on the server.
Why does it not work? Does it have something to do with my operating system settings? Do I have to change them?
Thanks in advance!
**Update:**
For the background information see the correct answer below. For the sake of completeness, I want to give a sample code which solved the problem:
```
tuple_mapper = lambda i: (i, func(i))
n = multiprocessing.cpu_count()
with concurrent.futures.ProcessPoolExecutor(n) as executor:
results = dict(executor.map(tuple_mapper, range(10)))
```
Before you reuse this take care that all functions you are using are defined at the top-level of a module as described here:
[Python multiprocessing pickling error](http://stackoverflow.com/questions/8804830/python-multiprocessing-pickling-error) | It sounds like you're seeing the results of Python's [Global Interpreter Lock](https://wiki.python.org/moin/GlobalInterpreterLock) (a.k.a GIL).
> In CPython, the global interpreter lock, or GIL, is a mutex that
> **prevents multiple native threads** from executing Python bytecodes at
> once.
As all your threads are running pure Python code, only one of them can actually run in parallel. That should cause only one CPU to be active and matches your description of the problem.
You can get around it by using multiple processes with `ProcessPoolExecutor` from the same module. Other solutions include switching to Jython or IronPython, which don't have a GIL.
> The ProcessPoolExecutor class is an Executor subclass that uses a pool
> of processes to execute calls asynchronously. ProcessPoolExecutor uses
> the multiprocessing module, **which allows it to side-step the Global
> Interpreter Lock** but also means that only picklable objects can be
> executed and returned. |
What can I do with a closed file object? | 30,379,488 | 7 | 2015-05-21T16:32:42Z | 30,379,517 | 12 | 2015-05-21T16:34:02Z | [
"python",
"file"
] | When you open a file, it's stored in an open file object which gives you access to various methods on it such as reading or writing.
```
>>> f = open("file0")
>>> f
<open file 'file0', mode 'r' at 0x0000000002E51660>
```
Of course when you're done you should close your file to prevent it taking up memory space.
```
>>> f.close()
>>> f
<closed file 'file0', mode 'r' at 0x0000000002E51660>
```
This leaves a closed file, so that the object still exists though it's no longer using space for the sake of being readable. But is there any practical application of this? It can't be read, or written. It can't be used to reopen the file again.
```
>>> f.open()
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
f.open()
AttributeError: 'file' object has no attribute 'open'
>>> open(f)
Traceback (most recent call last):
File "<pyshell#10>", line 1, in <module>
open(f)
TypeError: coercing to Unicode: need string or buffer, file found
```
Is there a practical use for this closed file object aside from identifying that a file object is being referenced but is closed? | One use is using the name to reopen the file:
```
open(f.name).read()
```
I use the name attribute when changing a file content using a [NamedTemporaryFile](https://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) to write the updated content to then replace the original file with shutil.move:
```
with open("foo.txt") as f, NamedTemporaryFile("w", dir=".", delete=False) as temp:
    for line in f:
        if stuff:  # placeholder for whatever condition decides which lines to keep
            temp.write("stuff")
shutil.move(temp.name, "foo.txt")
```
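A self-contained sketch of the reopen-by-name idea (Python 3 syntax here; the question is Python 2, but the `name` and `closed` attributes behave the same):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('hello')

# the closed file object still carries useful metadata
path = f.name

with open(path) as g:        # reopen through the remembered name
    data = g.read()
os.remove(path)              # clean up the temporary file
```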
Also as commented you can use the `f.closed` to see if the file is *really* closed. |
Plotting Pandas groupby groups using subplots and loop | 30,379,645 | 5 | 2015-05-21T16:41:11Z | 30,380,431 | 7 | 2015-05-21T17:21:54Z | [
"python",
"pandas",
"plot",
"group",
"subplot"
] | I am trying to generate a grid of subplots based off of a Pandas groupby object. I would like each plot to be based off of two columns of data for one group of the groupby object. Fake data set:
```
C1,C2,C3,C4
1,12,125,25
2,13,25,25
3,15,98,25
4,12,77,25
5,15,889,25
6,13,56,25
7,12,256,25
8,12,158,25
9,13,158,25
10,15,1366,25
```
I have tried the following code:
```
import pandas as pd
import csv
import matplotlib as mpl
import matplotlib.pyplot as plt
import math
#Path to CSV File
path = "..\\fake_data.csv"
#Read CSV into pandas DataFrame
df = pd.read_csv(path)
#GroupBy C2
grouped = df.groupby('C2')
#Figure out number of rows needed for 2 column grid plot
#Also accounts for odd number of plots
nrows = int(math.ceil(len(grouped)/2.))
#Setup Subplots
fig, axs = plt.subplots(nrows,2)
for ax in axs.flatten():
for i,j in grouped:
j.plot(x='C1',y='C3', ax=ax)
plt.savefig("plot.png")
```
But it generates 4 identical subplots with all of the data plotted on each (see example output below):

I would like to do something like the following to fix this:
```
for i,j in grouped:
j.plot(x='C1',y='C3',ax=axs)
next(axs)
```
but i get an AttributeError: 'numpy.ndarray' object has no attribute 'get\_figure'
I will have a dynamic number of groups in the groupby object I want to plot, and many more elements than the fake data I have provided. This is why I need an elegant, dynamic solution and each group data set plotted on a separate subplot.
Any tips or ideas for a solution are appreciated!
Thanks | Sounds like you want to iterate over the groups and the axes *in parallel*, so rather than having nested `for` loops (which iterates over all groups for *each* axis), you want something like this:
```
for (i,j), ax in zip(grouped, axs.flat):
j.plot(x='C1',y='C3', ax=ax)
```

You have the right idea in your second code snippet, but you're getting an error because `axs` is an array of axes, but `plot` expects just a single axis. (Note that a NumPy array is not itself an iterator, so `next(axs)` fails.) It should also work to make an explicit iterator with `axs_iter = iter(axs.flat)` before the loop, call `ax = next(axs_iter)` inside it, and change the argument of `plot` to `ax=ax`.
Why does Python crash while returning a C string? | 30,381,189 | 5 | 2015-05-21T18:03:22Z | 30,381,257 | 9 | 2015-05-21T18:08:03Z | [
"python",
"c",
"ctypes"
] | Here's my C code, I made a shared lib out of it. When I load the shared lib in Python and execute the Python code below, I crash. Why?
```
extern "C" {
PyObject* foo2(char* b)
{
return Py_BuildValue("s", b);
}
}
```
And here's what I am doing in Python:
```
from ctypes import *
d = cdll.LoadLibrary('./foo.so')
d.foo2.restype = c_char_p
v = d.foo2('hello world')
print pythonapi.PyString_Size(v)
```
If it helps, I am at Python2.6. | Your problem is that you're lying about the return type:
```
d.foo2.restype = c_char_p
```
The actual return type is `PyObject *`. But `ctypes` is going to see that `c_char_p`, cast the `PyObject *` to a `char *`, and then try to convert that `char *` to a string with `PyString_FromString`, which is going to read who knows what arbitrary bytes until it hits a NUL character.
The way to specify a `PyObject *` is with [`py_object`](https://docs.python.org/2/library/ctypes.html#ctypes.py_object).
Also, you probably want to set the `argtype`s. And this time, it really *is* a `c_char_p`:
```
d = cdll.LoadLibrary('./foo.so')
d.foo2.argtypes = [c_char_p]
d.foo2.restype = py_object
```
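A self-contained way to see `py_object` in action without compiling a C library is to call CPython's own C API through `ctypes.pythonapi` (Python 3 shown here; the question's `PyString_*` functions are the Python 2 equivalents):

```python
import ctypes

fn = ctypes.pythonapi.PyUnicode_FromString
fn.restype = ctypes.py_object      # the C function returns PyObject *
fn.argtypes = [ctypes.c_char_p]

s = fn(b'hello world')
# s == 'hello world'
```

With `restype = py_object`, ctypes hands the returned `PyObject *` back as a normal Python object instead of misinterpreting the pointer bytes.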
But, as interjay points out in a comment, it's a bit silly to be building a C library that uses the Python C API and then calling it via `ctypes`. This is *occasionally* useful, but normally, the solution is to just finish building a C extension module instead of doing 80% of the work for none of the benefit… |
SSLError: Can't connect to HTTPS URL because the SSL module is not available on google app engine | 30,381,581 | 11 | 2015-05-21T18:26:23Z | 30,624,304 | 9 | 2015-06-03T15:22:17Z | [
"python",
"google-app-engine",
"ssl",
"wechat"
] | Want to use [wechat sdk](http://wechat-python-sdk.readthedocs.org/zh_CN/master/basic.html) to create menu
```
WeChat.create_menu({
"button":[
{
"type":"click",
"name":"Daily Song",
"key":"V1001_TODAY_MUSIC"
},
{
"type":"click",
"name":" Artist Profile",
"key":"V1001_TODAY_SINGER"
},
{
"name":"Menu",
"sub_button":[
{
"type":"view",
"name":"Search",
"url":"http://www.soso.com/"
},
{
"type":"view",
"name":"Video",
"url":"http://v.qq.com/"
},
{
"type":"click",
"name":"Like us",
"key":"V1001_GOOD"
}]
}]
})
```
Currently not work because of this error:
```
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1519, in __call__
response = self._internal_error(e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechatAPIHandler.py", line 72, in post
"key":"V1001_GOOD"
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 355, in create_menu
data=menu_data
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 949, in _post
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 907, in _request
"access_token": self.access_token,
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 849, in access_token
self.grant_token()
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 273, in grant_token
"secret": self.__appsecret,
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 935, in _get
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 917, in _request
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/adapters.py", line 431, in send
raise SSLError(e, request=request)
SSLError: Can't connect to HTTPS URL because the SSL module is not available.
```
The Python `requests` module is included in the App Engine project. I am using Python 2.7. I have been looking for ways to solve this problem but have not found a clear solution yet. | [This blog post details a solution](http://bekt.github.io/p/gae-ssl/#sthash.bnL4iKEq.dpbs). From the blog post:
> The problem is GAE has a "whitelist" of select standard libraries. SSL
> (\_ssl, \_socket) is not one of them. So, we need to tweak the sandbox
> environment (dangerous) carefully. The below code uses the standard
> Python socket library instead of the GAE-provided one in the development
> environment. Modify [or create] appengine\_config.py:
```
import os
# Workaround the dev-environment SSL
# http://stackoverflow.com/q/16192916/893652
if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
import imp
import os.path
from google.appengine.tools.devappserver2.python import sandbox
sandbox._WHITE_LIST_C_MODULES += ['_ssl', '_socket']
# Use the system socket.
psocket = os.path.join(os.path.dirname(os.__file__), 'socket.py')
imp.load_source('socket', psocket)
``` |
SSLError: Can't connect to HTTPS URL because the SSL module is not available on google app engine | 30,381,581 | 11 | 2015-05-21T18:26:23Z | 34,135,758 | 16 | 2015-12-07T14:19:14Z | [
"python",
"google-app-engine",
"ssl",
"wechat"
] | Want to use [wechat sdk](http://wechat-python-sdk.readthedocs.org/zh_CN/master/basic.html) to create menu
```
WeChat.create_menu({
"button":[
{
"type":"click",
"name":"Daily Song",
"key":"V1001_TODAY_MUSIC"
},
{
"type":"click",
"name":" Artist Profile",
"key":"V1001_TODAY_SINGER"
},
{
"name":"Menu",
"sub_button":[
{
"type":"view",
"name":"Search",
"url":"http://www.soso.com/"
},
{
"type":"view",
"name":"Video",
"url":"http://v.qq.com/"
},
{
"type":"click",
"name":"Like us",
"key":"V1001_GOOD"
}]
}]
})
```
Currently this does not work because of this error:
```
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1519, in __call__
response = self._internal_error(e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechatAPIHandler.py", line 72, in post
"key":"V1001_GOOD"
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 355, in create_menu
data=menu_data
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 949, in _post
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 907, in _request
"access_token": self.access_token,
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 849, in access_token
self.grant_token()
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 273, in grant_token
"secret": self.__appsecret,
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 935, in _get
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 917, in _request
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/adapters.py", line 431, in send
raise SSLError(e, request=request)
SSLError: Can't connect to HTTPS URL because the SSL module is not available.
```
The Python `requests` module is included in the App Engine project. I am using Python 2.7. I have been looking for ways to solve this problem but have not found a clear solution yet. | If you're using GAE's Sockets, you can get SSL support without any hacks by simply loading the SSL library.
Simply add this to your app.yaml file:
```
libraries:
- name: ssl
version: latest
```
This is documented on [Google Cloud's OpenSSL Support documentation.](https://cloud.google.com/appengine/docs/python/sockets/ssl_support) |
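As a quick sanity check (an illustrative addition), once the library is enabled the `ssl` module should import cleanly in the GAE runtime; the same check also works on any stock Python install:

```python
# Minimal check that the ssl module is importable and backed by OpenSSL.
# (On GAE this only succeeds after enabling the library in app.yaml above.)
import ssl

print(ssl.OPENSSL_VERSION)
```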
Python Class Name as Class Variable | 30,382,005 | 2 | 2015-05-21T18:49:37Z | 30,382,043 | 7 | 2015-05-21T18:51:52Z | [
"python",
"python-3.x",
"class-variables"
] | I'm working as an application with classes and subclasses. For each class, both super and sub, there is a class variable called `label`. I would like the `label` variable for the super class to default to the class name. For example:
```
class Super():
label = 'Super'
class Sub(Super):
label = 'Sub'
```
Rather than manually typing out the variable for each class, is it possible to derive the variable from the class name in the super class and have it automatically populated for the subclasses?
```
class Super():
label = # Code to get class name
class Sub(Super):
pass
# When inherited Sub.label == 'Sub'.
```
The reason for this is that this will be the default behavior. I'm also hoping that if I can get the default behavior, I can override it later by specifying an alternate `label`.
```
class SecondSub(Super):
label = 'Pie' # Override the default of SecondSub.label == 'SecondSub'
```
I've tried using `__name__`, but that's not working and just gives me `'__main__'`.
I would like to use the class variable `label` in `@classmethod` methods. So I would like to be able to reference the value without having to actually create a Super() or Sub() object, like below:
```
class Super():
label = # Magic
@classmethod
def do_something_with_label(cls):
print(cls.label)
``` | you can return `self.__class__.__name__` in label as a property
```
class Super:
@property
def label(self):
return self.__class__.__name__
class Sub(Super):
pass
print Sub().label
```
alternatively you could set it in the `__init__` method
```
def __init__(self):
self.label = self.__class__.__name__
```
This will obviously only work on instances of the class.
To access the class name inside a class method, you can call `__name__` on `cls`:
```
class XYZ:
@classmethod
def my_label(cls):
return cls.__name__
print XYZ.my_label()
```
this solution might work too (snagged from <http://stackoverflow.com/a/13624858/541038>)
```
class classproperty(object):
def __init__(self, fget):
self.fget = fget
def __get__(self, owner_self, owner_cls):
return self.fget(owner_cls)
class Super(object):
@classproperty
def label(cls):
return cls.__name__
class Sub(Super):
pass
print Sub.label #works on class
print Sub().label #also works on an instance
class Sub2(Sub):
@classmethod
def some_classmethod(cls):
print cls.label
Sub2.some_classmethod()
``` |
How can I register a single view (not a viewset) on my router? | 30,389,248 | 17 | 2015-05-22T05:51:24Z | 30,441,337 | 25 | 2015-05-25T15:12:57Z | [
"python",
"django",
"django-rest-framework"
] | I am using Django REST framework and have been trying to create a view that returns a small bit of information, as well as register it on my router.
I have four models which store information, and all of them have a `created_time` field. I am trying to make a view that returns the most recent objects (based on the `created_time`) in a single view, where only the four creation times are returned.
So, a possible JSON output from the view would look like
```
{
"publish_updatetime": "2015.05.20 11:53",
"meeting_updatetime": "2015.05.20 11:32",
"training_updatetime": "2015.05.20 15:25",
"exhibiting_updatetime": "2015.05.19 16:23"
}
```
I am also hoping to register this view on my router, so it appears with the rest of my endpoints when the API root is loaded.
```
router.register(r'updatetime', views.UpdateTimeView)
```
Here are the four models that I am trying to work with
```
class Publish(models.Model):
user = models.ForeignKey(MyUser)
name = models.CharField(max_length=50)
created_time = models.DateTimeField( default=datetime.now)
class Meeting(models.Model):
user = models.ForeignKey(MyUser)
name = models.CharField(max_length=50)
file_addr = models.FileField(upload_to=get_file_path)
created_time = models.DateTimeField(default=datetime.now)
class Training(models.Model):
user = models.ForeignKey(MyUser)
name = models.CharField(max_length=50)
image = models.ImageField(upload_to=get_file_path, max_length=255)
created_time = models.DateTimeField(default=datetime.now)
class Exhibiting(models.Model):
user = models.ForeignKey(MyUser)
name = models.CharField(max_length=50)
file_addr = models.FileField(upload_to=get_file_path)
created_time = models.DateTimeField(default=datetime.now)
```
Is it possible to do this? And how would it be done? | Routers work [with a `ViewSet`](http://www.django-rest-framework.org/api-guide/viewsets/) and aren't designed for normal views, but that doesn't mean that you cannot use them with a normal view. Normally they are used with models (and a `ModelViewSet`), but they can be used without them using the `GenericViewSet` (if you would normally use a `GenericAPIView`) and `ViewSet` (if you would just use an `APIView`).
For a list view, the request methods are mapped to `ViewSet` methods like this
* `GET` -> `list(self, request, format=None)`
* `POST`- > `create(self, request, format=None)`
For detail views (with a primary key in the url), the request methods use the following map
* `GET` -> `retrieve(self, request, pk, format=None)`
* `PUT` -> `update(self, request, pk, format=None)`
* `PATCH` -> `partial_update(self, request, pk, format=None)`
* `DELETE` -> `destroy(self, request, pk, format=None)`
So if you want to use any of these request methods with your view on your router, you need to override the correct view method (so `list()` instead of `get()`).
---
Now, specifically in your case you would have normally use an `APIView` that looked like
```
class UpdateTimeView(APIView):
def get(self, request, format=None):
latest_publish = Publish.objects.latest('created_time')
latest_meeting = Meeting.objects.latest('created_time')
latest_training = Training.objects.latest('created_time')
latest_exhibiting = Exhibiting.objects.latest('created_time')
return Response({
"publish_updatetime": latest_publish.created_time,
"meeting_updatetime": latest_meeting.created_time,
"training_updatetime": latest_training.created_time,
"exhibiting_updatetime": latest_exhibiting.created_time,
})
```
The comparable `ViewSet` would be
```
class UpdateTimeViewSet(ViewSet):
def list(self, request, format=None):
latest_publish = Publish.objects.latest('created_time')
latest_meeting = Meeting.objects.latest('created_time')
latest_training = Training.objects.latest('created_time')
latest_exhibiting = Exhibiting.objects.latest('created_time')
return Response({
"publish_updatetime": latest_publish.created_time,
"meeting_updatetime": latest_meeting.created_time,
"training_updatetime": latest_training.created_time,
"exhibiting_updatetime": latest_exhibiting.created_time,
})
```
Notice the two required changes: `APIView` -> `ViewSet` and `get` -> `list`. I also updated the name to indicate that it was more than just a normal view (as a `ViewSet` cannot be initialized the same way), but that's not required.
So with this new view, you can just register it in the router the same way as any other. You need a `base_name` here so the url names can be generated (normally this would pull from the queryset).
```
router.register(r'updatetime', views.UpdateTimeViewSet, base_name='updatetime')
```
So now the `updatetime` endpoint will be made available in the API root and you can get the latest times by just calling the endpoint (a simple GET request). |
ImportError: No module named django.core.management when using manage.py | 30,389,771 | 14 | 2015-05-22T06:28:57Z | 30,390,103 | 17 | 2015-05-22T06:50:13Z | [
"python",
"django",
"python-2.7",
"importerror",
"django-manage.py"
] | I'm trying to run `python manage.py runserver` on a Django application I have and I get this error:
```
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
```
Here is the output of `pip freeze | grep -i django` to show I do in fact have Django installed:
```
Django==1.6.5
django-cached-authentication-middleware==0.2.0
django-cors-headers==1.1.0
django-htmlmin==0.7.0
django-static-precompiler==0.9
djangorestframework==2.3.14
```
Also, trying to run `/usr/local/bin/python2.7 manage.py runserver` yields the same error. | Possible issues that may cause your problem:
1. PYTHONPATH is not well configured; to configure it you should do:
```
export PYTHONPATH=/usr/local/lib/python2.7/site-packages
```
2. You forgot the line `#!/usr/bin/env python` at the beginning of manage.py
3. If you're working on virtualenv you forgot to activate the virtual env to execute manage.py commands (You may have installed Django on your system but not on your virtualenv)
```
source path/to/your/virtualenv/bin/activate
```
or
```
workon env_name
```
4. You have Python 2.7 and Python 3.4 messing with the package
5. You're using a very old Python 2.4 and you should tell the system to use your Python 2.7 with:
```
alias python=python2.7
```
Some times reinstalling/upgrading Django fix some of those issues.
You may want to execute
```
python -c "import django; print(django.get_version())"
```
to check if Django is installed on your PC or your virtualenv if you're using one
You can find some other solutions in other similar questions:
* [Django import error](http://stackoverflow.com/questions/6049933/django-import-error-no-module-named-core-management)
* [Django uwsgi error](http://stackoverflow.com/questions/30389771/importerror-no-module-named-django-core-management-when-using-manage-py)
* [Django module error](http://stackoverflow.com/questions/14013728/django-no-module-named-django-core-management) |
Why Flask-migrate cannot upgrade when drop column | 30,394,222 | 9 | 2015-05-22T10:23:16Z | 31,140,916 | 12 | 2015-06-30T14:21:22Z | [
"python",
"sqlalchemy",
"flask-migrate"
] | I am using SQLAlchemy and Flask-Migrate for DB migration. I successfully ran `init` on the DB and `upgrade` once, but when I deleted one of my table columns, I managed to `migrate`; however, `upgrade` gave me the following error:
```
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "DROP": syntax error [SQL: u'ALTER TABLE posts DROP COLUMN tags']
```
There is part of my models.py
```
class Post(db.Model):
    __tablename__ = 'posts'
id = db.Column(db.Integer, primary_key=True)
body = db.Column(db.UnicodeText)
# tags = db.Column(db.Unicode(32))
# I deleted this field, upgrade give me error
....
```
When I ran **python manage.py db upgrade** again, the error changed:
```
(venv)ncp@ubuntu:~/manualscore$ python manage.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade 555b78ffd5f -> 2e063b1b3164, add tag table
Traceback (most recent call last):
File "manage.py", line 79, in <module>
manager.run()
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/flask_script/__init__.py", line 405, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/flask_script/__init__.py", line 384, in handle
return handle(app, *positional_args, **kwargs)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/flask_script/commands.py", line 145, in handle
return self.run(*args, **kwargs)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/flask_migrate/__init__.py", line 177, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/command.py", line 165, in upgrade
script.run_env()
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/script.py", line 390, in run_env
util.load_python_file(self.dir, 'env.py')
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/util.py", line 243, in load_python_file
module = load_module_py(module_id, path)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "migrations/env.py", line 72, in <module>
run_migrations_online()
File "migrations/env.py", line 65, in run_migrations_online
context.run_migrations()
File "<string>", line 7, in run_migrations
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/environment.py", line 738, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/migration.py", line 309, in run_migrations
step.migration_fn(**kw)
File "/home/ncp/manualscore/migrations/versions/2e063b1b3164_add_tag_table.py", line 24, in upgrade
sa.PrimaryKeyConstraint('id')
File "<string>", line 7, in create_table
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/operations.py", line 944, in create_table
self.impl.create_table(table)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 198, in create_table
self._exec(schema.CreateTable(table))
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 122, in _exec
return conn.execute(construct, *multiparams, **params)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
return meth(self, multiparams, params)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 968, in _execute_ddl
compiled
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
context)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1339, in _handle_dbapi_exception
exc_info
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
context)
File "/home/ncp/manualscore/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 442, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table tags already exists [SQL: u'\nCREATE TABLE tags (\n\tid INTEGER NOT NULL, \n\tname VARCHAR(32), \n\tpost_id INTEGER, \n\tPRIMARY KEY (id), \n\tFOREIGN KEY(post_id) REFERENCES posts (id)\n)\n\n']
``` | SQLite does not support dropping or altering columns. However, there is a way to work around this by making changes at the table level: <https://www.sqlite.org/lang_altertable.html>
And more usefully for Alembic/Flask-Migrate users, Alembic's batch\_alter\_table context manager lets you specify the changes in a natural way, and does a little "make new table - copy data - drop old table - rename new table" dance behind the scenes when using SQLite. See: <https://alembic.readthedocs.org/en/latest/ops.html#alembic.operations.Operations.batch_alter_table>
So the upgrade() function in your migration file should contain something like:
```
with op.batch_alter_table('posts') as batch_op:
batch_op.drop_column('tags')
```
I'm afraid I don't know why the error changed the second time you tried the upgrade.
As tkisme points out, you can also configure the `EnvironmentContext.configure.render_as_batch` flag in `env.py` so that autogenerated migration scripts will use `batch_alter_table` by default. See: <https://alembic.readthedocs.org/en/latest/batch.html#batch-mode-with-autogenerate> |
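A sketch of the `env.py` change that flag implies (an illustrative fragment, not from the original answer; the `connection` and `target_metadata` names follow Alembic's generated template):

```python
# Inside run_migrations_online() in migrations/env.py: enabling render_as_batch
# makes autogenerated migrations emit batch_alter_table blocks, which is what
# SQLite needs for DROP COLUMN / ALTER COLUMN operations.
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    render_as_batch=True,
)
```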
python all combinations of subsets of a string | 30,397,788 | 5 | 2015-05-22T13:19:01Z | 30,398,896 | 7 | 2015-05-22T14:07:48Z | [
"python",
"subset"
] | I need all combinations of subsets of a string. In addition, a subset of length 1 can only be followed by a subset with length > 1. E.g. for string `4824` the result should be:
```
[ [4, 824], [4, 82, 4], [48, 24], [482, 4], [4824] ]
```
So far I managed to retrieve all possible subsets with:
```
length = len(number)
ss = []
for i in xrange(length):
for j in xrange(i,length):
ss.append(number[i:j + 1])
```
which gives me:
```
['4', '48', '482', '4824', '8', '82', '824', '2', '24', '4']
```
But I don't know how to combine those now. | First, write a function for generating *all* the partitions of the string:
```
def partitions(s):
if s:
for i in range(1, len(s) + 1):
for p in partitions(s[i:]):
yield [s[:i]] + p
else:
yield []
```
This iterates all the possible first segments (one character, two characters, etc.) and combines those with all the partitions for the respective remainder of the string.
```
>>> list(partitions("4824"))
[['4', '8', '2', '4'], ['4', '8', '24'], ['4', '82', '4'], ['4', '824'], ['48', '2', '4'], ['48', '24'], ['482', '4'], ['4824']]
```
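As a quick sanity check on the demo above (an added observation): a string of length n has exactly `2**(n-1)` partitions, one for each subset of the n-1 possible cut points, which matches the 8 results listed:

```python
# Repeating the function above so the check is self-contained.
def partitions(s):
    if s:
        for i in range(1, len(s) + 1):
            for p in partitions(s[i:]):
                yield [s[:i]] + p
    else:
        yield []

# A string of length n can be cut (or not) at each of its n-1 internal gaps.
assert len(list(partitions("4824"))) == 2 ** (len("4824") - 1)  # 8 partitions
```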
Now, you can just filter those that match your condition, i.e. those that have no two consecutive substrings of length one.
```
>>> [p for p in partitions("4824") if not any(len(x) == len(y) == 1 for x, y in zip(p, p[1:]))]
[['4', '82', '4'], ['4', '824'], ['48', '24'], ['482', '4'], ['4824']]
```
Here, `zip(p, p[1:])` is a common recipe for iterating over all pairs of consecutive items.
---
Update: Actually, incorporating your constraint directly into the `partition` function is not that hard, either. Just keep track of the last segment and set the minimum length accordingly.
```
def partitions(s, minLength=1):
if len(s) >= minLength:
for i in range(minLength, len(s) + 1):
for p in partitions(s[i:], 1 if i > 1 else 2):
yield [s[:i]] + p
elif not s:
yield []
```
Demo:
```
>>> print list(partitions("4824"))
[['4', '82', '4'], ['4', '824'], ['48', '24'], ['482', '4'], ['4824']]
``` |
Shift elements in a numpy array | 30,399,534 | 17 | 2015-05-22T14:36:42Z | 30,534,478 | 12 | 2015-05-29T16:22:31Z | [
"python",
"numpy"
] | Following-up from [this question](http://stackoverflow.com/q/5859144/478288) years ago, is there a canonical "shift" function in numpy? I don't see anything from [the documentation](http://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html).
Here's a simple version of what I'm looking for:
```
def shift(xs, n):
if n >= 0:
return np.r_[np.full(n, np.nan), xs[:-n]]
else:
return np.r_[xs[-n:], np.full(-n, np.nan)]
```
Using this is like:
```
In [76]: xs
Out[76]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
In [77]: shift(xs, 3)
Out[77]: array([ nan, nan, nan, 0., 1., 2., 3., 4., 5., 6.])
In [78]: shift(xs, -3)
Out[78]: array([ 3., 4., 5., 6., 7., 8., 9., nan, nan, nan])
```
This question came from my attempt to [write a fast rolling\_product](http://stackoverflow.com/a/30386409/478288) yesterday. I needed a way to "shift" a cumulative product and all I could think of was to replicate the logic in `np.roll()`.
---
So `np.concatenate()` is much faster than `np.r_[]`. This version of the function performs a lot better:
```
def shift(xs, n):
if n >= 0:
return np.concatenate((np.full(n, np.nan), xs[:-n]))
else:
return np.concatenate((xs[-n:], np.full(-n, np.nan)))
```
---
An even faster version simply pre-allocates the array:
```
def shift(xs, n):
e = np.empty_like(xs)
if n >= 0:
e[:n] = np.nan
e[n:] = xs[:-n]
else:
e[n:] = np.nan
e[:n] = xs[-n:]
return e
``` | Not numpy but scipy provides exactly the shift functionality you want,
```
import numpy as np
from scipy.ndimage.interpolation import shift
xs = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
shift(xs, 3, cval=np.NaN)
```
where default is to bring in a constant value from outside the array with value `cval`, set here to `nan`. This gives the desired output,
```
array([ nan, nan, nan, 0., 1., 2., 3., 4., 5., 6.])
```
and the negative shift works similarly,
```
shift(xs, -3, cval=np.NaN)
```
Provides output
```
array([ 3., 4., 5., 6., 7., 8., 9., nan, nan, nan])
``` |
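For contrast (an illustrative addition): `numpy.roll`, which the question mentions, wraps values around instead of filling the vacated positions, which is why it cannot replace `shift` directly:

```python
import numpy as np

xs = np.array([0., 1., 2., 3., 4.])
# roll wraps the trailing elements back to the front instead of inserting NaN
print(np.roll(xs, 2))  # [3. 4. 0. 1. 2.]
```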
Python Class Attributes and Subclassing | 30,399,794 | 2 | 2015-05-22T14:48:45Z | 30,399,848 | 7 | 2015-05-22T14:51:40Z | [
"python",
"class",
"subclassing"
] | I've googled and experimented for a while, but with no results, trying to do something like this:
```
class A(object):
cl_att = 'I am an A class attribute'
class B(A):
cl_att += ' modified for B type'
class C(A):
cl_att += ' modified for C type'
instA = A()
print instA.cl_att
>>> I am an A class attribute
instB = B()
print instB.cl_att
>>> I am an A class attribute modified for B type
instC = C()
print instC.cl_att
>>> I am an A class attribute modified for C type
print instA.cl_att
>>> I am an A class attribute
```
In short, I want to be able to "use and then override" a class attribute from my parent class. | Reference the parent class attribute and concatenate to it:
```
class A(object):
cl_att = 'I am an A class attribute'
class B(A):
cl_att = A.cl_att + ' modified for B type'
class C(A):
cl_att = A.cl_att + ' modified for C type'
```
Class bodies are executed much like functions, with the local names forming the class attributes. `cl_att` doesn't exist in the new 'function' to create the bodies for `B` and `C`, so you need to reference the attribute on the base class directly instead.
Demo:
```
>>> class A(object):
... cl_att = 'I am an A class attribute'
...
>>> class B(A):
... cl_att = A.cl_att + ' modified for B type'
...
>>> class C(A):
... cl_att = A.cl_att + ' modified for C type'
...
>>> A.cl_att
'I am an A class attribute'
>>> B.cl_att
'I am an A class attribute modified for B type'
>>> C.cl_att
'I am an A class attribute modified for C type'
``` |
Why is for _ in range(n) slower than for _ in [""]*n? | 30,399,987 | 6 | 2015-05-22T14:57:55Z | 30,401,443 | 10 | 2015-05-22T16:14:00Z | [
"python",
"python-internals"
] | Testing alternatives to `for _ in range(n)` (to execute some action `n` times, even if the action does not depend on the value of `n`) I noticed that there is another formulation of this pattern that is faster, `for _ in [""] * n`.
For example:
```
timeit('for _ in range(10^1000): pass', number=1000000)
```
returns 16.4 seconds;
whereas,
```
timeit('for _ in [""]*(10^1000): pass', number=1000000)
```
takes 10.7 seconds.
Why is `[""] * 10^1000` so much faster than `range(10^1000)` in Python 3?
All testing done using Python 3.3 | Your problem is that you are incorrectly feeding `timeit`.
You need to give `timeit` strings containing Python statements. If you do
```
stmt = 'for _ in ['']*100: pass'
```
Look at the value of `stmt`. The quote characters inside the square brackets match the string delimiters, so they are interpreted as string delimiters by Python. Since Python concatenates adjacent string literals, you'll see that what you really have is the same as `'for _ in [' + ']*100: pass'`, which gives you `'for _ in []*100: pass'`.
So your "super-fast" loop is just looping over the empty list, not a list of 100 elements. Try your test with, for example,
```
stmt = 'for _ in [""]*100: pass'
``` |
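To see the concatenation at work (an illustrative addition):

```python
# Adjacent string literals are implicitly concatenated, so the two inner quotes
# in the 'broken' version simply end one literal and start the next.
broken = 'for _ in ['']*100: pass'   # parsed as 'for _ in [' + ']*100: pass'
fixed = 'for _ in [""]*100: pass'    # one literal containing [""]

print(broken)  # for _ in []*100: pass  -- iterates zero times
print(fixed)   # for _ in [""]*100: pass -- iterates 100 times
```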
Why is for _ in range(n) slower than for _ in [""]*n? | 30,399,987 | 6 | 2015-05-22T14:57:55Z | 30,401,721 | 10 | 2015-05-22T16:30:30Z | [
"python",
"python-internals"
] | Testing alternatives to `for _ in range(n)` (to execute some action `n` times, even if the action does not depend on the value of `n`) I noticed that there is another formulation of this pattern that is faster, `for _ in [""] * n`.
For example:
```
timeit('for _ in range(10^1000): pass', number=1000000)
```
returns 16.4 seconds;
whereas,
```
timeit('for _ in [""]*(10^1000): pass', number=1000000)
```
takes 10.7 seconds.
Why is `[""] * 10^1000` so much faster than `range(10^1000)` in Python 3?
All testing done using Python 3.3 | When iterating over `range()`, objects for all integers between 0 and `n` are produced; this takes a (small) amount of time, even with [small integers having been cached](http://stackoverflow.com/questions/306313/pythons-is-operator-behaves-unexpectedly-with-integers).
The loop over `[None] * n` on the other hand produces `n` references to 1 object, and creating that list is a little faster.
However, the `range()` object uses *far* less memory, *and* is more readable to boot, which is why people prefer using that. Most code doesn't have to squeeze every last drop from the performance.
If you need to have that speed, you can use a custom iterable that takes no memory, using [`itertools.repeat()`](https://docs.python.org/3/library/itertools.html#itertools.repeat) with a second argument:
```
from itertools import repeat
for _ in repeat(None, n):
```
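The memory difference is easy to see with `sys.getsizeof` (an illustrative addition; exact numbers vary by platform and Python build):

```python
import sys
from itertools import repeat

n = 10 ** 6
# The list materializes n references up front; the repeat object is a tiny iterator.
print(sys.getsizeof([None] * n))       # typically ~8 MB on a 64-bit build
print(sys.getsizeof(repeat(None, n)))  # a few dozen bytes
```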
As for your timing tests, there are some problems with those.
First of all, you made an error in your `['']*n` timing loop; you did not embed two quotes, you concatenated two strings and produced an *empty list*:
```
>>> '['']*n'
'[]*n'
>>> []*100
[]
```
That's going to be unbeatable in an iteration, as you iterated 0 times.
You also didn't use large numbers; `^` is the binary XOR operator, not the power operator:
```
>>> 10^1000
994
```
which means your test missed out on how long it'll take to create a *large* list of empty values.
Using better numbers and `None` gives you:
```
>>> from timeit import timeit
>>> 10 ** 6
1000000
>>> timeit("for _ in range(10 ** 6): pass", number=100)
3.0651066239806823
>>> timeit("for _ in [None] * (10 ** 6): pass", number=100)
1.9346517859958112
>>> timeit("for _ in repeat(None, 10 ** 6): pass", 'from itertools import repeat', number=100)
1.4315521717071533
``` |
python pandas extract year from datetime --- df['year'] = df['date'].year is not working | 30,405,413 | 2 | 2015-05-22T20:30:49Z | 30,405,634 | 8 | 2015-05-22T20:47:48Z | [
"python",
"datetime",
"pandas",
"extract",
"dataframe"
] | Sorry for this question that seems repetitive - I expect the answer will make me feel like a bonehead... but I have not had any luck using answers to the similar questions on SO.
I am importing data in through `read_csv`, but for some reason which I cannot figure out, I am not able to extract the year or month from the dataframe series `df['date']`.
```
date Count
6/30/2010 525
7/30/2010 136
8/31/2010 125
9/30/2010 84
10/29/2010 4469
df = pd.read_csv('sample_data.csv',parse_dates=True)
df['date'] = pd.to_datetime(df['date'])
df['year'] = df['date'].year
df['month'] = df['date'].month
```
But this returns:
> AttributeError: 'Series' object has no attribute 'year'
Thanks in advance.
UPDATE:
```
df = pd.read_csv('sample_data.csv',parse_dates=True)
df['date'] = pd.to_datetime(df['date'])
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
```
this generates the same "AttributeError: 'Series' object has no attribute 'dt' "
FOLLOW UP:
I am using Spyder 2.3.1 with Python 3.4.1 64bit, but cannot update pandas to a newer release (currently on 0.14.1). Each of the following generates an invalid syntax error:
```
conda update pandas
conda install pandas==0.15.2
conda install -f pandas
```
Any ideas? | If you're running a recent-ish version of pandas then you can use the datetime attribute [`dt`](http://pandas.pydata.org/pandas-docs/stable/api.html#datetimelike-properties) to access the datetime components:
```
In [6]:
df['date'] = pd.to_datetime(df['date'])
df['year'], df['month'] = df['date'].dt.year, df['date'].dt.month
df
Out[6]:
date Count year month
0 2010-06-30 525 2010 6
1 2010-07-30 136 2010 7
2 2010-08-31 125 2010 8
3 2010-09-30 84 2010 9
4 2010-10-29 4469 2010 10
```
**EDIT**
It looks like you're running an older version of pandas in which case the following would work:
```
In [18]:
df['date'] = pd.to_datetime(df['date'])
df['year'], df['month'] = df['date'].apply(lambda x: x.year), df['date'].apply(lambda x: x.month)
df
Out[18]:
date Count year month
0 2010-06-30 525 2010 6
1 2010-07-30 136 2010 7
2 2010-08-31 125 2010 8
3 2010-09-30 84 2010 9
4 2010-10-29 4469 2010 10
```
Regarding why it didn't parse this into a datetime in `read_csv` you need to pass the ordinal position of your column (`[0]`) because when `True` it tries to parse columns `[1,2,3]` see the [docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv)
```
In [20]:
t="""date Count
6/30/2010 525
7/30/2010 136
8/31/2010 125
9/30/2010 84
10/29/2010 4469"""
df = pd.read_csv(io.StringIO(t), sep='\s+', parse_dates=[0])
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 5 entries, 0 to 4
Data columns (total 2 columns):
date 5 non-null datetime64[ns]
Count 5 non-null int64
dtypes: datetime64[ns](1), int64(1)
memory usage: 120.0 bytes
```
So if you pass param `parse_dates=[0]` to `read_csv` there shouldn't be any need to call `to_datetime` on the 'date' column after loading. |
How to get Python Requests to Trust a Self Signed SSL Certificate? | 30,405,867 | 8 | 2015-05-22T21:06:03Z | 30,405,947 | 10 | 2015-05-22T21:12:24Z | [
"python",
"ssl",
"python-requests"
] | ```
import requests
data = {'foo':'bar'}
url = 'https://foo.com/bar'
r = requests.post(url, data=data)
```
If the URL uses a self signed certificate, this fails with
```
requests.exceptions.SSLError: [Errno 1] _ssl.c:507: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
```
I know that I can pass `False` to the `verify` parameter, like this:
```
r = requests.post(url, data=data, verify=False)
```
However, what I would like to do is point requests to a copy of the public key on disk and tell it to trust that certificate. | try:
```
r = requests.post(url, data=data, verify='/path/to/public_key.pem')
``` |
How to get Python Requests to Trust a Self Signed SSL Certificate? | 30,405,867 | 8 | 2015-05-22T21:06:03Z | 30,405,972 | 14 | 2015-05-22T21:14:12Z | [
"python",
"ssl",
"python-requests"
] | ```
import requests
data = {'foo':'bar'}
url = 'https://foo.com/bar'
r = requests.post(url, data=data)
```
If the URL uses a self signed certificate, this fails with
```
requests.exceptions.SSLError: [Errno 1] _ssl.c:507: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
```
I know that I can pass `False` to the `verify` parameter, like this:
```
r = requests.post(url, data=data, verify=False)
```
However, what I would like to do is point requests to a copy of the public key on disk and tell it to trust that certificate. | With the `verify` parameter you can provide a custom certificate authority bundle (<http://docs.python-requests.org/en/latest/user/advanced/>):
```
requests.get(url, verify=path_to_bundle)
```
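Requests also honors the `REQUESTS_CA_BUNDLE` environment variable, so the same trust setting can be applied process-wide instead of per call; the bundle path below is hypothetical:

```python
import os

# Hypothetical path: point this at the PEM file containing the server's
# self-signed certificate (or a CA bundle that includes it).
os.environ['REQUESTS_CA_BUNDLE'] = '/path/to/public_key.pem'

# Requests calls made after this point in the process verify TLS
# connections against that bundle, with no verify=... argument needed.
```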
> You can pass verify the path to a CA\_BUNDLE file with certificates of
> trusted CAs. This list of trusted CAs can also be specified through
> the REQUESTS\_CA\_BUNDLE environment variable.
>
> You can also specify a local cert to use as client side certificate,
> as a single file (containing the private key and the certificate) or
> as a tuple of both files' paths:
```
>>> requests.get('https://kennethreitz.com', cert=('/path/server.crt', '/path/key'))
<Response [200]>
``` |
Python - Long string on multiple line | 30,408,450 | 5 | 2015-05-23T03:00:10Z | 30,408,474 | 8 | 2015-05-23T03:04:34Z | [
"python",
"file",
"directory"
] | Is there a correct way to show file paths (hard coded) longer than 79 characters (based on pep8) on multiply lines or is it best to keep the file path on a single line?
Eg
```
photos = "D:\MyWork\FieldWork\Year2015\January\MountainPass\Area1\Site1\Campsite2\Inspections\photos1"
```
Would the above example work best on multiple lines or a single line? | I personally use this method, and have seen it used in the PEP8 materials:
```
long_string = ('this is a really long string I want '
'to wrap over multiple lines')
```
You can also do:
```
long_string = 'this is a really long string I want '\
'to wrap over multiple lines'
```
According to PEP8 you should try to keep the maximum width of code to 79 characters, and generally docstrings and comments to 72.
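As an aside, very long paths can often be built from shorter pieces instead of being hard-coded as one giant literal, which sidesteps the line-length problem entirely. This sketch uses `ntpath.join` (the Windows-style counterpart of `os.path.join`, usable on any platform) on the example path from the question:

```python
import ntpath  # like os.path, but always uses Windows separators

parts = ['D:\\MyWork', 'FieldWork', 'Year2015', 'January', 'MountainPass',
         'Area1', 'Site1', 'Campsite2', 'Inspections', 'photos1']
photos = ntpath.join(*parts)
print(photos)
```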
I also recommend taking a look at `os.path`. |
How do you use python-daemon the way that it's documentation dictates? | 30,408,589 | 3 | 2015-05-23T03:23:33Z | 30,408,822 | 7 | 2015-05-23T04:13:00Z | [
"python",
"daemon",
"python-daemon"
] | I'm trying to make a daemon in python and I've come across the python-daemon package. The interesting thing about it is that the most common way I've seen it used isn't even what the [documentation](https://www.python.org/dev/peps/pep-3143/), which is very sparse, tells you to do
```
import os
import grp
import signal
import daemon
import lockfile
from spam import (
initial_program_setup,
do_main_program,
program_cleanup,
reload_program_config,
)
context = daemon.DaemonContext(
working_directory='/var/lib/foo',
umask=0o002,
pidfile=lockfile.FileLock('/var/run/spam.pid'),
)
context.signal_map = {
signal.SIGTERM: program_cleanup,
signal.SIGHUP: 'terminate',
signal.SIGUSR1: reload_program_config,
}
mail_gid = grp.getgrnam('mail').gr_gid
context.gid = mail_gid
important_file = open('spam.data', 'w')
interesting_file = open('eggs.data', 'w')
context.files_preserve = [important_file, interesting_file]
initial_program_setup()
with context:
do_main_program()
```
Instead, people use it like this:
```
#!/usr/bin/python
import time
from daemon import runner
class App():
def __init__(self):
self.stdin_path = '/dev/null'
self.stdout_path = '/dev/tty'
self.stderr_path = '/dev/tty'
self.pidfile_path = '/tmp/foo.pid'
self.pidfile_timeout = 5
def run(self):
while True:
print("Howdy! Gig'em! Whoop!")
time.sleep(10)
app = App()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()
```
Examples [here](http://www.gavinj.net/2012/06/building-python-daemon-process.html) and in this thread [How do you create a daemon in Python?](http://stackoverflow.com/questions/473620/how-do-you-create-a-daemon-in-python)
So can anyone tell me how the package is supposed to be used as intended? There are 0 examples to be found that use it the way the documentation specifies. | First, the reason you can't find good documentation is that, to the best of my knowledge, nobody ever wrote it. When Ben Finney proposed the PEP, there was plenty of interest, but then when he asked for someone else to take over the project and champion it, nobody did, so⦠beyond the PEP, and the sparse documentation in the docs directory of the project, there's really nothing but the source to explain things.
A `DaemonContext` is the way you're meant to create a daemon. Its API was bikeshedded extensively, and was the only part that was proposed to be part of the public interface in the stdlib. People from the Debian, Ubuntu, and RedHat/Fedora projects were involved in the initial discussion, and changes have been incorporated based on their experiences moving their distros to `systemd`.
A `DaemonRunner` wraps up a `DaemonContext`-based daemon and a control tool (ala [`apachectl`](https://httpd.apache.org/docs/2.2/programs/apachectl.html)). This implements a "service", which is only one way of running a daemon among many others.
Often, you don't want thisâif you want to build a "service", you usually want to only implement the daemon using a `daemon.DaemonContext`, and let `systemd` or `launchd` or their older predecessors manage the service by invoking that daemon. So, the PEP, to keep things simple, explicitly said that a service is outside the scope of what the `daemon` module should attempt.
But there is code for services in the `python-daemon` distribution. It isn't fully documented, because it is only an example of one way to use a daemon.
It does appear to work, and it's definitely been maintained and updated over the years. So, if you want an `apachectl`-type tool, I think it makes sense to use a `DaemonRunner`; just make sure you read the docstrings and write some tests to make sure it's doing what you wanted. |
When are create and update called in djangorestframework serializer? | 30,409,076 | 6 | 2015-05-23T05:05:24Z | 30,412,336 | 16 | 2015-05-23T11:53:19Z | [
"python",
"django",
"django-rest-framework"
] | I'm currently implementing djangorestframework for my app's RESTful API. After playing around with it, I still do not clearly understand what `.create(self, validated_data)` and `.update(self, validated_data)` are used for in the serializer. As I understand it, **CRUD** only calls the 4 main methods in `viewsets.ModelViewSet`: `create()`, `retrieve()`, `update()`, and `destroy()`.
I also have already tried to debug and print out stuff to see when the `.create()` and `.update()` methods are called in both `ModelViewSet` and `ModelSerializer`. Apparently, only the methods in `ModelViewSet` are called when I do the HTTP verbs. However, for `ModelSerializer`, I don't see any calls in those 2 methods. I just want to know what are those methods used for in `ModelSerializer` since I see that people override those methods a lot in the serializer.
P/S: I'm a newbie in djangorestframework + sorry for my English since I'm not native.
Thanks :) | You really must split things between the views and the serializer.
# Serializers
The `Serializer` is a standalone object. It is used for converting a Django model (or any kind of python datastructure, actually) into a serialized form, and the other way around.
You may use it as such, wherever you want. It does not even need an actual HTTP request as long as you don't need URIs in your output.
The `ModelSerializer` subclass is a specialized kind of `Serializer` that adds "load-from-model" and "save-to-model" functionality.
The "save-to-model" entry point is the `save()` method. For easier overriding, its default implementation will delegate its work to either the `create()` or Ì`update()` method of the serializer, depending on whether it is creating a new model instance, or updating one.
The purpose of that is customization: it gives you, the developer, the option to override just the create method, just the update method, or common behavior.
For instance, it allows you to do this kind of things:
```
def save(self, **kwargs):
# Will be done on every save
kwargs['last_changed'] = timezone.now()
return super().save(**kwargs)
def create(self, validated_data):
    # Will only be done if a new object is being created
    validated_data['initial_creation'] = timezone.now()
    return super().create(validated_data)
```
That's a basic example. There, the `last_changed` field will be set every time an object is saved, be it a creation or an update.
As a sidenote, you probably do not want to do that. Things such as setting "last\_changed" fields should live in the view, not in the serializer.
# Viewsets
In a completely different place, Django REST framework supplies `Viewsets`. Those are an organized collection of views, revolving around implementing a CRUD API for a model.
As such, it structures it functionality into a set of methods, namely `create()`, `retrieve()`/`list()`, `update()` and `delete()`.
The main point being: **there is no connection whatsoever between the viewset's `create()` method and the serializer's `create()` method.**
It just happens that the default implementation of the viewset's methods uses a `ModelSerializer` and that the default implementation of that serializer's `save()` method delegates the job to methods that have the same name.
By the way, about the `last_changed` example, here is how you would do it in the view:
```
def perform_create(self, serializer):
now = timezone.now()
serializer.save(initial_creation=now, last_changed=now)
def perform_update(self, serializer):
serializer.save(last_changed=timezone.now())
```
That's functionally equivalent to the example above, but lives in the viewset.
# Conclusion
So back to your question, the specific thing you should override depends on which object is responsible for the task you want to add.
* If your custom behavior is part of the serialization process, that is, the process of converting raw data back into a proper Django model and saving it, then you should override the `Serializer`'s methods.
* If, on the other hand, your custom behavior is specific to your viewset, then you should override the `Viewset`'s methods.
As a hint, you may ask yourself the following question: if I use the same serializer in another place (maybe another viewset), should it always display that behavior? |
python install lxml on mac os 10.10.1 | 30,410,030 | 3 | 2015-05-23T07:14:38Z | 30,428,349 | 15 | 2015-05-24T21:09:30Z | [
"python",
"osx",
"python-2.7",
"scrapy",
"lxml"
] | I bought a new MacBook and I am new to Mac OS. However, I read a lot on the internet about how to install scrapy.
I did everything, but I have a problem with installing lxml.
I tried this on terminal
```
pip install lxml
```
and a lot of stuff started to be downloading and many text was written on the terminal, but i got this error message on red in the terminal
```
1 error generated.
error: command '/usr/bin/clang' failed with exit status 1
----------------------------------------
Cleaning up...
Command /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -c "import setuptools, tokenize;__file__='/private/var/folders/rd/fxgsy46j3l77f6l9h_hv2fjm0000gn/T/pip_build_mycomputername/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/rd/fxgsy46j3l77f6l9h_hv2fjm0000gn/T/pip-WvDTPQ-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/rd/fxgsy46j3l77f6l9h_hv2fjm0000gn/T/pip_build_mycomputername/lxml
Storing debug log for failure in /Users/mycomputername/.pip/pip.log
```
and now when I call scrapy spider that words with lxml, i got this error message
```
ImportError: No module named lxml.html
```
what should I do please?
any suggestion is appreciated | I had that problem, and what I did is:
installed Xcode (2.8 GB) from the App Store.
To make sure the installation finished successfully, open a terminal and type:
```
xcode-select -p
```
you should get something like this:
```
/Applications/Xcode.app/Contents/Developer
```
Now you need to install the command line tools. Try typing `gcc` in the terminal; there are two possibilities: either you get a window asking to install the tools, in which case click `install`, or you will get an error message.
If you get an error message, don't worry; just install the tools using the command `xcode-select --install`.
After that, restart the Mac and then reinstall lxml like this:
```
pip install lxml
```
Then try to run scrapy again.
If you encounter an error with any other library, just reinstall scrapy using **easy\_install** and **NOT** pip |
How to read a text file and output the words in the reversed order? Python | 30,416,049 | 3 | 2015-05-23T18:12:21Z | 30,416,066 | 7 | 2015-05-23T18:14:43Z | [
"python"
] | So I'm working on code that reads a text file and outputs the words on the screen in reversed order, meaning if the original text was
```
hello world
how are you
```
to:
```
you are how
world hello
```
I get it to partially work, the problem is that it outputs it in a single column, but I want it to be in lines.
the code is
```
for a in reversed(list(open("text.txt"))):
for i in a:
a = i.split()
b = a[::-1]
final_string = ''
for i in b:
final_string += i + ' '
print(final_string)
``` | You have one loop too many:
```
for a in reversed(list(open("text.txt"))):
for i in a:
```
The first loop produces the lines in the file in reverse order, so `a` is bound to each line. The second `for` then loops over *each individual character* in that line. You then proceed to 'reverse' that character (or an empty list when that character is a space or newline).
You are already using `reversed` for the file, you can use it for the lines too; combine it with [`str.join()`](https://docs.python.org/2/library/stdtypes.html#str.join):
```
for line in reversed(list(open("text.txt"))):
words = line.split()
reversed_words = ' '.join(reversed(words))
print(reversed_words)
```
Or more concisely still:
```
print(*(' '.join(l.split()[::-1]) for l in reversed(list(open('text.txt')))), sep='\n')
```
Demo:
```
>>> with open('text.txt', 'w') as fo:
... fo.write('''\
... hello world
... how are you
... ''')
...
24
>>> for line in reversed(list(open("text.txt"))):
... words = line.split()
... reversed_words = ' '.join(reversed(words))
... print(reversed_words)
...
you are how
world hello
>>> print(*(' '.join(l.split()[::-1]) for l in reversed(list(open('text.txt')))), sep='\n')
you are how
world hello
``` |