title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Merging Key-Value Pairings in Dictionary | 35,619,544 | 20 | 2016-02-25T06:01:23Z | 35,619,965 | 7 | 2016-02-25T06:30:27Z | [
"python",
"algorithm",
"dictionary"
] | I have a dictionary that consists of employee-manager as key-value pairs:
```
{'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
```
I want to show the relations between employee-manager at all levels (employee's boss, his boss's boss, his boss's boss's boss etc.) using a dictionary. The desired output is:
```
{'a': ['b', 'd', 'f'], 'b': ['d', 'f'], 'c': ['d', 'f'], 'd': ['f']}
```
Here is my attempt which only shows the first level:
```
for key, value in data.items():
if (value in data.keys()):
data[key] = [value]
data[key].append(data[value])
```
I can do another conditional statement to add the next level, but this would be the wrong way to go about it. I'm not very familiar with dictionaries, so what would be a better approach? | You can use recursion:
```
def get_linked_list(element, hierarchy, lst):
if element:
lst.append(element)
return get_linked_list(hierarchy.get(element, ""), hierarchy, lst)
else:
return lst
```
And then access the hierarchy as:
```
>>> d = {'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
>>> print {elem:get_linked_list(elem, d, [])[1:] for elem in d.keys()}
{'a': ['b', 'd', 'f'], 'c': ['d', 'f'], 'b': ['d', 'f'], 'd': ['f']}
```
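If the hierarchy may be deep (or you simply prefer iteration), here is an iterative sketch; `get_chain` is a hypothetical helper, and its `seen` set guards against cyclic entries:

```python
def get_chain(element, hierarchy):
    # walk the manager links iteratively; stop on a repeat to avoid cycles
    chain, seen = [], set()
    current = hierarchy.get(element)
    while current is not None and current not in seen:
        seen.add(current)
        chain.append(current)
        current = hierarchy.get(current)
    return chain

d = {'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
print({e: get_chain(e, d) for e in d})
```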
However, care must be taken with this recursive approach: it will recurse until the interpreter's recursion limit is hit if the dictionary contains a cyclic entry such as `"a": "a"` |
Give me highest element | 35,622,052 | 3 | 2016-02-25T08:30:50Z | 35,622,314 | 11 | 2016-02-25T08:45:37Z | [
"python",
"list"
] | I have a list like this:
```
List1: [1, 5, 9, 1, 5, 9, 15, 21, 29, 1, 5, 9, 15]
```
I want a new list that contains the highest number from each run, before the sequence starts again at 1.
```
List_new: [9, 29, 15]
```
I tried this:
```
List_new = []
for i in range(len(List1)):
j = List1[i]
if j + 1 == '1':
List_new += [j]
else:
continue
print(j)
```
But I got an empty list back. | Simply, with built-ins only:
```
from itertools import groupby
# group consecutive runs by "value == 1"; keep the max of each non-1 run
result = [max(group) for r, group in groupby(your_list, lambda x: x == 1) if not r]
# e.g. [1, 5, 9, 1, 5, 9, 15, 21, 29, 1, 5, 9, 15] -> [9, 29, 15]
``` |
__add__ method and negative numbers in Python | 35,624,449 | 3 | 2016-02-25T10:20:05Z | 35,624,515 | 7 | 2016-02-25T10:22:27Z | [
"python",
"integer",
"add"
] | I always thought that using the "+" operator in Python (3.5) calls the \_\_add\_\_ method under the hood and returns the sum. However, I noticed some quirky behavior when negative numbers are involved.
Naturally,
```
>>>-3 + 7
```
returns 4
But(!)
```
>>>-3 .__add__(7)
```
returns -10 and
```
>>>-3 .__add__(-7)
4
>>>3 .__add__(7)
10
```
Is there a reason why \_\_add\_\_ signs the arguments if the object is signed? Also, what changes in the method so that when I use "+", the "correct" value comes out? | `-` is an operator too, a unary one. You called `__add__` on `3`, not on the result of `-` applied to `3`, because [attribute access binds more tightly than the `-` operator](https://docs.python.org/2/reference/expressions.html#operator-precedence).
Use parentheses:
```
>>> (-3).__add__(7)
4
```
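To make the precedence explicit (a quick sketch; the space before the dot just keeps `3.` from being parsed as a float literal):

```python
print(-3 .__add__(7))    # parsed as -(3 .__add__(7)), i.e. -10
print((-3).__add__(7))   # the negation is applied first, i.e. 4
```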
Your code applies the `-` unary operator to the result of `3 + 7` instead. |
"InvalidRequirement: Invalid requirement, parse error" error after updating a python package | 35,625,488 | 10 | 2016-02-25T11:05:11Z | 35,625,489 | 12 | 2016-02-25T11:05:11Z | [
"python",
"pip",
"setuptools"
] | After updating a package (IPython in my case) using `pip install -U ipython` running any Python script that uses entry points fails with this error:
```
Traceback (most recent call last):
File "/home/adrian/dev/indico/env/bin/indico", line 5, in <module>
from pkg_resources import load_entry_point
...
File "/home/adrian/dev/indico/env/lib/python2.7/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 94, in __init__
requirement_string[e.loc:e.loc + 8], requirement_string))
pkg_resources._vendor.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'< 2.0'"
```
Nothing else changed, I did not update any other libraries. | This is caused by an issue in `setuptools==20.2.1` which is pulled in by IPython (`setuptools>..`), so a `pip install -U` updated it.
Until a fixed version is released or the broken version is pulled from PyPI there is a simple workaround (but note that it will break again if something updates setuptools):
* `pip install -U pip`
* `pip uninstall setuptools`
* `pip install 'setuptools<20.2'`
The `pip` update is needed since older versions of `pip` will not work without `setuptools` being installed
---
See these IRC logs and BitBucket issue for details:
* <http://chat-logs.dcpython.org/day/pypa/2016-02-25>
* <https://bitbucket.org/pypa/setuptools/issues/502/packaging-164-does-not-allow-whitepace> |
"InvalidRequirement: Invalid requirement, parse error" error after updating a python package | 35,625,488 | 10 | 2016-02-25T11:05:11Z | 37,314,629 | 8 | 2016-05-19T05:34:27Z | [
"python",
"pip",
"setuptools"
] | After updating a package (IPython in my case) using `pip install -U ipython` running any Python script that uses entry points fails with this error:
```
Traceback (most recent call last):
File "/home/adrian/dev/indico/env/bin/indico", line 5, in <module>
from pkg_resources import load_entry_point
...
File "/home/adrian/dev/indico/env/lib/python2.7/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 94, in __init__
requirement_string[e.loc:e.loc + 8], requirement_string))
pkg_resources._vendor.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'< 2.0'"
```
Nothing else changed, I did not update any other libraries. | Try downgrading your pip to `8.1.1`:
```
pip install pip==8.1.1
```
That solved it for me. |
An Object is created twice in Python | 35,626,923 | 11 | 2016-02-25T12:10:51Z | 35,627,114 | 8 | 2016-02-25T12:18:57Z | [
"python",
"oop",
"inheritance"
] | I have read *Expert Python Programming*, which has an example of multiple inheritance. The author explained it, but I did not understand it, so I would like another view.
The example shows that object `B` is initialized two times!
Could you please give me an intuitive explanation?
```
In [1]: class A(object):
...: def __init__(self):
...: print "A"
...: super(A, self).__init__()
In [2]: class B(object):
...: def __init__(self):
...: print "B"
...: super(B, self).__init__()
In [3]: class C(A,B):
...: def __init__(self):
...: print "C"
...: A.__init__(self)
...: B.__init__(self)
In [4]: print "MRO:", [x.__name__ for x in C.__mro__]
MRO: ['C', 'A', 'B', 'object']
In [5]: C()
C
A
B
B
Out[5]: <__main__.C at 0x3efceb8>
```
The book author said:
> This happens due to the `A.__init__(self)` call, which is made with the
> C instance, thus making `super(A, self).__init__()` call `B`'s constructor
The point from which I didn't get its idea is how `A.__init__(self)` call will make `super(A, self).__init__()` call `B`'s constructor | The `super()` just means "next in line", where the line is the [mro](http://stackoverflow.com/questions/2010692/what-does-mro-do-in-python) `['C', 'A', 'B', 'object']`. So next in line for `A` is `B`.
The mro is calculated according to an algorithm called [C3 linearization](https://en.wikipedia.org/wiki/C3_linearization).
When you use `super()`, Python just goes along this order. When you write your class `A` you don't know yet which class will be next in line. Only after you create your class `C` with multiple inheritance and run your program you will get the mro and "know" what will be next for `A`.
For your example it means:
`C()` calls the `__init__()` of `C`, in which it calls the `__init__()` of `A`. Now, `A` uses `super()` and finds `B` in the mro, hence it calls the `__init__()` of `B`. Next, the `__init__()` of `C` calls the `__init__()` of `B` again.
Calling `super()` in every `__init__()` walks the mro only once and avoids the double call to the `__init__()` of `B`.
```
from __future__ import print_function
class A(object):
def __init__(self):
print("A")
super(A, self).__init__()
class B(object):
def __init__(self):
print("B")
super(B, self).__init__()
class C(A,B):
def __init__(self):
print("C")
super(C, self).__init__()
```
Use:
```
>>> C.mro()
[__main__.C, __main__.A, __main__.B, object]
>>> C()
C
A
B
``` |
How do I remove words from a List in a case-insensitive manner? | 35,630,691 | 3 | 2016-02-25T14:53:47Z | 35,630,736 | 8 | 2016-02-25T14:55:13Z | [
"python",
"string",
"list"
] | I have a list called `words` containing words which may be in upper or lower case, or some combination of them. Then I have another list called `stopwords` which contains only lowercase words. Now I want to go through each word in `stopwords` and remove all instances of that word from `words` in a case-insensitive manner, but I don't know how to do this. Suggestions?
Example:
```
words = ['This', 'is', 'a', 'test', 'string']
stopwords = ['this', 'test']
for stopword in stopwords:
if stopword in words:
words.remove(stopword);
print words
```
The result shown is this: `['This', 'is', 'a', 'string']`
The correct return should have been this: `['is', 'a', 'string']` | Lowercase each word so you don't need to worry about casing:
```
words = ['This', 'is', 'a', 'test', 'string']
stopwords = set(['this', 'test'])
print([i for i in words if i.lower() not in stopwords])
# -> ['is', 'a', 'string']
```
As an additional note, per @cricket\_007's comment (and thanks to @chepner for the correction), making `stopwords` a set makes the membership tests faster. Notice the change above making it a set instead of a list. |
AttributeError: 'module' object has no attribute 'SignedJwtAssertionCredentials' | 35,634,085 | 3 | 2016-02-25T17:22:52Z | 35,666,374 | 8 | 2016-02-27T06:07:32Z | [
"android",
"python",
"attributeerror",
"oauth2client",
"google-play-developer-api"
] | **Problem**: I've been using [Python Script Samples by Google](https://github.com/googlesamples/android-play-publisher-api) to upload the apk to Play Store and to get the list of apps published via my account ([list\_apks.py](https://github.com/googlesamples/android-play-publisher-api/blob/master/v2/python/basic_list_apks_service_account.py) and `upload_apk.py`). However, recently it started breaking. I've tried to update packages like `google-api-python-client`, `oauth2client`, etc. by doing `pip install --upgrade packagename`, but it didn't help.
**Logs**:
This if while listing apk's:
```
Determining latest version for my.package.name...
error 25-Feb-2016 06:30:52 Traceback (most recent call last):
error 25-Feb-2016 06:30:52 File "list_apks.py", line 80, in <module>
error 25-Feb-2016 06:30:52 main()
error 25-Feb-2016 06:30:52 File "list_apks.py", line 46, in main
error 25-Feb-2016 06:30:52 credentials = client.SignedJwtAssertionCredentials(
error 25-Feb-2016 06:30:52 AttributeError: 'module' object has no attribute 'SignedJwtAssertionCredentials'
build 25-Feb-2016 06:30:52 Found latest APK version:
build 25-Feb-2016 06:30:52 Generated new APK version: 1
```
This is when uploading apk:
```
25-Feb-2016 06:33:30 Uploading APK...
25-Feb-2016 06:33:30 Traceback (most recent call last):
25-Feb-2016 06:33:30 File "upload_apk.py", line 115, in <module>
25-Feb-2016 06:33:30 main(sys.argv)
25-Feb-2016 06:33:30 File "upload_apk.py", line 62, in main
25-Feb-2016 06:33:30 credentials = client.SignedJwtAssertionCredentials(
25-Feb-2016 06:33:30 AttributeError: 'module' object has no attribute 'SignedJwtAssertionCredentials'
```
**Code sniplet**:
```
import argparse
from apiclient.discovery import build
import httplib2
from oauth2client import client
SERVICE_ACCOUNT_EMAIL = (
'myaccountemail.com')
# Declare command-line flags.
argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('package_name',
help='The package name. Example: com.android.sample')
def main():
# Load the key in PKCS 12 format that you downloaded from the Google APIs
# Console when you created your Service account.
f = file('mykeyname.p12', 'rb')
key = f.read()
f.close()
# HERE IS THE EXCEPTION
credentials = client.SignedJwtAssertionCredentials(
SERVICE_ACCOUNT_EMAIL,
key,
scope='https://www.googleapis.com/auth/androidpublisher')
http = httplib2.Http()
http = credentials.authorize(http)
...
```
What else can I try? I would appreciate your help. | Finally, after many days, I was able to find the answer. It turns out that the class `SignedJwtAssertionCredentials` was removed from the `oauth2client` Python package in the `2.0.0` update. It is no longer under `oauth2client.client`; the behaviour has moved to `oauth2client.service_account.ServiceAccountCredentials`.
Following worked for me:
```
import argparse
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
import httplib2
from oauth2client import client
SERVICE_ACCOUNT_EMAIL = ('myaccountemail.com')
# Declare command-line flags.
argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('package_name',
help='The package name. Example: com.android.sample')
def main():
key='mykeyname.p12'
scope = 'https://www.googleapis.com/auth/androidpublisher'
credentials = ServiceAccountCredentials.from_p12_keyfile(
SERVICE_ACCOUNT_EMAIL,
key,
scopes=[scope]
)
http = httplib2.Http()
http = credentials.authorize(http)
....
```
Source:
* [SignedJwtAssertionCredentials has been removed: Why?](https://github.com/google/oauth2client/issues/401)
* [oauth2client-Release 2.0.0](https://github.com/google/oauth2client/pull/404/files) |
Python itertools: Best way to unpack product of product of list of lists | 35,637,189 | 4 | 2016-02-25T20:02:58Z | 35,637,254 | 9 | 2016-02-25T20:06:37Z | [
"python",
"itertools",
"flatten",
"iterable-unpacking"
] | I have a list of lists over which I need to iterate 3 times (3 nested loops)
```
rangeList = [[-0.18,0.18],[0.14,0.52],[0.48,0.85]]
```
I can achieve this using product of product as follows
```
from itertools import product
for val in product(product(rangeList,rangeList),rangeList):
print val
```
The output looks as follows
```
(([-0.18, 0.18], [-0.18, 0.18]), [-0.18, 0.18])
(([-0.18, 0.18], [-0.18, 0.18]), [0.14, 0.52])
(([-0.18, 0.18], [-0.18, 0.18]), [0.48, 0.85])
(([-0.18, 0.18], [0.14, 0.52]), [-0.18, 0.18])
```
It's a tuple of tuples. My questions are:
1. Is this a good approach?
2. If so, what is the best way to unpack the
output of the product `val` into 3 separate variables, say `xRange`, `yRange` and
`zRange`, where each holds a list value of say `[-0.18, 0.18]` or `[0.14, 0.52]` etc. | This is probably the most elegant way to do what you want:
```
for xrange, yrange, zrange in product(rangeList, repeat=3):
print xrange, yrange, zrange
```
But just to demonstrate how you can do the "deep" tuple unpacking you were trying:
```
for (xrange, yrange), zrange in product(product(rangeList,rangeList),rangeList):
print xrange, yrange, zrange
``` |
Python: get all months in range? | 35,650,793 | 8 | 2016-02-26T11:42:04Z | 35,651,063 | 9 | 2016-02-26T11:55:47Z | [
"python",
"datetime"
] | I want to get all months between now and August 2010, as a list formatted like this:
```
['2010-08-01', '2010-09-01', .... , '2016-02-01']
```
Right now this is what I have:
```
months = []
for y in range(2010, 2016):
for m in range(1, 13):
if (y == 2010) and m < 8:
continue
if (y == 2016) and m > 2:
continue
month = '%s-%s-01' % (y, ('0%s' % (m)) if m < 10 else m)
months.append(month)
```
What would be a better way to do this? | [`dateutil.relativedelta`](http://dateutil.readthedocs.org/en/latest/relativedelta.html) is handy here.
I've left the formatting out as an exercise.
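If you'd rather avoid the `dateutil` dependency, here is a stdlib-only sketch (`month_strings` is a hypothetical helper) that also produces the requested strings directly:

```python
import datetime

def month_strings(start, end):
    # yield 'YYYY-MM-01' for every month from start to end, inclusive
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        yield '%04d-%02d-01' % (y, m)
        m += 1
        if m > 12:
            y, m = y + 1, 1

months = list(month_strings(datetime.date(2010, 8, 1), datetime.date(2016, 2, 1)))
print(months[0])   # '2010-08-01'
print(months[-1])  # '2016-02-01'
```

The `dateutil` version below yields `date` objects instead, which you can format with `strftime('%Y-%m-%d')`.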
```
from dateutil.relativedelta import relativedelta
import datetime
result = []
today = datetime.date.today()
current = datetime.date(2010, 8, 1)
while current <= today:
result.append(current)
current += relativedelta(months=1)
``` |
Make function have ellipsis for arguments in help() function | 35,665,130 | 9 | 2016-02-27T02:53:32Z | 35,970,257 | 7 | 2016-03-13T12:29:41Z | [
"python",
"docstring"
] | If you type `help(vars)`, the following is produced:
```
vars(...)
vars([object]) -> dictionary
Without arguments, equivalent to locals().
With an argument, equivalent to object.__dict__.
```
When I do the following:
```
def func(x, y): pass
help(func)
```
it displays this:
```
func(x, y)
```
How can I change it so that it shows up with `...` between the parentheses like the built-in function `vars()`? (That is, `func(...)`)
**Edit**: It has been suggested to use a docstring, but that won't do what I want. Here is an example:
```
def func(x, y):
"""func(...) -> None"""
help(func)
```
result:
```
func(x, y)
func(...) -> None
```
You see, `x, y` is still being displayed instead of `...` | You have (at least) two alternatives to achieve what you want.
The best alternative would be to override the `__str__` method of the `inspect.Signature` class. However, as it is written in C, it is read only.
So to do that you need to extend the class as following:
```
class MySignature(inspect.Signature):
def __str__(self):
return '(...)'
```
then after defining your function you execute:
```
func_signature = inspect.signature(func)
func.__signature__ = MySignature(func_signature.parameters.values(),
return_annotation=func_signature.return_annotation)
```
which would then return the following for `help(func)`:
```
Help on function func in module __main__:
func(...)
(END)
```
With this approach inspect.signature still works:
```
In [1]: inspect.signature(func)
Out[1]: <MySignature (...)>
```
Alternatively if you don't really care about being able to properly introspect your function (and probably some other use cases), then you can define the value of your function's `__signature__` to an object which is not a `Signature` instance:
```
def func(x, y):
pass
func.__signature__ = object()
help(func)
```
generates the result:
```
Help on function func in module __main__:
func(...)
(END)
```
But now `inspect.signature(func)` will raise `TypeError: unexpected object <object object at 0x10182e200> in __signature__ attribute`.
Note: this last version is quite hacky and I would not recommend it.
For more info on these two techniques and how the signature works see [PEP 0362](https://www.python.org/dev/peps/pep-0362/).
**Update:**
For python 2.7 you can do the following (probably better using a mock framework):
```
In [1]: import inspect
In [2]: def myformatargspec(*args, **kwargs):
...: return '(...)'
...:
In [3]: def func(x, y):
...: pass
...:
In [6]: inspect.formatargspec = myformatargspec
In [7]: help(func)
Help on function func in module __main__:
func(...)
(END)
``` |
How to transform a pair of values into a sorted unique array? | 35,667,931 | 5 | 2016-02-27T09:18:12Z | 35,667,956 | 9 | 2016-02-27T09:20:08Z | [
"python"
] | I have a result like this:
```
[(196, 128), (196, 128), (196, 128), (128, 196),
(196, 128), (128, 196), (128, 196), (196, 128),
(128, 196), (128, 196)]
```
And I'd like to convert it to unique values like this, in sorted order:
```
[128, 196]
```
And I'm pretty sure there's something like a one-liner trick in Python (batteries included) but I can't find one. | Create the set union of all tuples, then sort the result:
```
sorted(set().union(*input_list))
```
Demo:
```
>>> input_list = [(196, 128), (196, 128), (196, 128), (128, 196),
... (196, 128), (128, 196), (128, 196), (196, 128),
... (128, 196), (128, 196)]
>>> sorted(set().union(*input_list))
[128, 196]
``` |
Bash: Sort text file by bytewise case sensitive sort command or using python sort command | 35,669,702 | 4 | 2016-02-27T12:20:43Z | 35,669,891 | 7 | 2016-02-27T12:38:58Z | [
"python",
"linux",
"bash",
"sorting",
"case"
] | Text File
**using sort -s**
(case sensitive)
```
Act
Bad
Bag
Card
East
about
across
back
ball
camera
canvas
danger
dark
early
edge
```
**using sort -f** (Not case sensitive)
```
about
across
Act
back
Bad
Bag
ball
camera
canvas
Card
danger
dark
early
East
edge
```
The words starting with an uppercase letter are sorted alphabetically among the lowercase words.
What I want is for the uppercase words to come first within each letter group (uppercase sorted alphabetically):
**Expected output**:
```
Act
about
across
Bad
Bag
back
ball
Card
camera
canvas
danger
dark
East
early
edge
```
How can I realize this using bash or python sort command? | This command will do it:
```
LC_ALL=C sort -k 1.1f,1.1 PATH
```
where `PATH` is your file path.
Explanation:
* `sort` collation order is affected by the current locale, so `LC_ALL=C` is used to set the locale to a known value (the POSIX locale, collation order based on ASCII character code values)
* `-k 1.1f,1.1` tells `sort` to use the first character as the primary sort key in a case-insensitive manner
* Equal comparisons of the primary key will be resolved by comparing all characters again (this time, in a case-sensitive manner).
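In Python, a sketch of the same ordering uses a two-part sort key: the first character lowered for the primary comparison, then the full word for case-sensitive tie-breaking:

```python
words = ["Act", "Bad", "Bag", "Card", "East", "about", "across",
         "back", "ball", "camera", "canvas", "danger", "dark", "early", "edge"]

# primary key: first character, case-insensitive; ties broken by the
# full word compared case-sensitively (uppercase sorts first, as in ASCII)
result = sorted(words, key=lambda w: (w[:1].lower(), w))
print(result)
```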
The output is exactly as requested in the question. |
What's the pythonic way to cut a string at multiple positions? | 35,670,229 | 2 | 2016-02-27T13:14:00Z | 35,670,288 | 11 | 2016-02-27T13:18:45Z | [
"python",
"string"
] | If I have a string, say, `"The quick brown fox jumps over the lazy dog"`, and a list `[1, 8, 14, 18, 27]` that indicates where to cut the string.
What I expect to get is a list that contains parts of the cut string. For this example, the output should be:
```
['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']
```
My intuitive and naive way is to simply write a for loop, remember the previous index, slice the string and append the slice to output.
```
_str="The quick brown fox jumps over the lazy dog"
cut=[1, 8, 14, 18, 27]
prev=0
out=[]
for i in cut:
out.append(_str[prev:i])
prev=i
out.append(_str[prev:])
```
Is there any better way? | Here's how I would do it:
```
>>> s = "The quick brown fox jumps over the lazy dog"
>>> l = [1, 8, 14, 18, 27]
>>> l = [0] + l + [len(s)]
>>> [s[x:y] for x,y in zip(l, l[1:])]
['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']
```
Some explanation:
I'm adding 0 to the front and `len(s)` to the end of the list, such that
```
>>> zip(l, l[1:])
[(0, 1), (1, 8), (8, 14), (14, 18), (18, 27), (27, 43)]
```
gives me a sequence of tuples of slice indices. All that's left to do is unpack those indices in a list comprehension and generate the slices you want.
edit:
If you *really* care about the memory footprint of this operation, because you often deal with very large strings and lists, use generators all the way and build your list `l` such that it includes the 0 and `len(s)` in the first place.
For Python 2:
```
>>> from itertools import izip, tee
>>> s = "The quick brown fox jumps over the lazy dog"
>>> l = [0, 1, 8, 14, 18, 27, 43]
>>>
>>> def get_slices(s, l):
... it1, it2 = tee(l)
... next(it2)
... for start, end in izip(it1, it2):
... yield s[start:end]
...
>>> list(get_slices(s,l))
['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']
```
For Python 3:
`zip` does what `izip` did in Python 2 (see Python 3.3 version)
For Python 3.3+ with the `yield from` syntax:
```
>>> from itertools import tee
>>> s = "The quick brown fox jumps over the lazy dog"
>>> l = [0, 1, 8, 14, 18, 27, 43]
>>>
>>> def get_slices(s, l):
... it1, it2 = tee(l)
... next(it2)
... yield from (s[start:end] for start, end in zip(it1, it2))
...
>>> list(get_slices(s,l))
['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']
``` |
Using abc.ABCMeta in a way it is compatible both with Python 2.7 and Python 3.5 | 35,673,474 | 7 | 2016-02-27T18:10:30Z | 35,673,504 | 11 | 2016-02-27T18:13:28Z | [
"python",
"python-2.7",
"metaclass",
"python-3.5",
"abc"
] | I'd like to create a class which has `abc.ABCMeta` as a metaclass and is compatible both with Python 2.7 and Python 3.5. Until now, I only succeeded doing this either on 2.7 or on 3.5 - but never on both versions simultaneously. Could someone give me a hand?
Python 2.7:
```
import abc
class SomeAbstractClass(object):
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def do_something(self):
pass
```
Python 3.5:
```
import abc
class SomeAbstractClass(metaclass=abc.ABCMeta):
@abc.abstractmethod
def do_something(self):
pass
```
## Testing
If we run the following test using the suitable version of the Python interpreter (Python 2.7 -> Example 1, Python 3.5 -> Example 2), it succeeds in both scenarios:
```
import unittest
class SomeAbstractClassTestCase(unittest.TestCase):
def test_do_something_raises_exception(self):
with self.assertRaises(TypeError) as error:
processor = SomeAbstractClass()
msg = str(error.exception)
expected_msg = "Can't instantiate abstract class SomeAbstractClass with abstract methods do_something"
self.assertEqual(msg, expected_msg)
```
## Problem
While running the test using Python 3.5, the expected behavior doesn't happen (`TypeError` is not raised while instantiating `SomeAbstractClass`):
```
======================================================================
FAIL: test_do_something_raises_exception (__main__.SomeAbstractClassTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/tati/sample_abc.py", line 22, in test_do_something_raises_exception
processor = SomeAbstractClass()
AssertionError: TypeError not raised
----------------------------------------------------------------------
```
Whereas running the test using Python 2.7 raises a `SyntaxError`:
```
Python 2.7 incompatible
Raises exception:
File "/home/tati/sample_abc.py", line 24
class SomeAbstractClass(metaclass=abc.ABCMeta):
^
SyntaxError: invalid syntax
``` | You could use [`six.add_metaclass`](https://pythonhosted.org/six/#six.add_metaclass) or [`six.with_metaclass`](https://pythonhosted.org/six/#six.with_metaclass):
```
import abc, six
@six.add_metaclass(abc.ABCMeta)
class SomeAbstractClass():
@abc.abstractmethod
def do_something(self):
pass
```
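If you would rather not depend on `six`, a common alternative is to call the metaclass directly to build a compatible base class (the name `AbstractBase` here is just for illustration); this sketch works under both 2.7 and 3.5:

```python
import abc

# Calling the metaclass produces a new base class, so the same line is
# valid syntax on both Python 2 and Python 3 -- no six required.
AbstractBase = abc.ABCMeta('AbstractBase', (object,), {})

class SomeAbstractClass(AbstractBase):
    @abc.abstractmethod
    def do_something(self):
        pass
```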
`six` is a [*Python 2 and 3 compatibility library*](https://bitbucket.org/gutworth/six). You can install it by running `pip install six` or by downloading the latest version of `six.py` to your project directory. |
TensorFlow, why was python the chosen language? | 35,677,724 | 6 | 2016-02-28T01:40:08Z | 35,678,837 | 23 | 2016-02-28T04:52:37Z | [
"python",
"c++",
"machine-learning",
"tensorflow"
] | I recently started studying deep learning and other ML techniques, and I started searching for frameworks that simplify the process of building a net and training it; that is how I found TensorFlow. Having a little experience in the field, it seems to me that speed is a big factor for a large ML system, even more so when working with deep learning. So why was Python chosen by Google to make TensorFlow? Wouldn't it be better to use a language that can be compiled rather than interpreted?
What are the advantages of using Python over a language like C++ for machine learning?
Thanks. | The most important thing to realize about TensorFlow is that, for the most part, *the core is not written in Python*: It's written in a combination of highly-optimized C++ and CUDA (Nvidia's language for programming GPUs). Much of that happens, in turn, by using [Eigen](http://eigen.tuxfamily.org/index.php?title=Main_Page) (a high-performance C++ and CUDA numerical library) and [NVidia's cuDNN](https://developer.nvidia.com/cudnn) (a very optimized DNN library for [NVidia GPUs](https://developer.nvidia.com/cuda-gpus), for functions such as [convolutions](https://en.wikipedia.org/wiki/Convolutional_neural_network)).
The model for TensorFlow is that the programmer uses "some language" (most likely Python!) to express the model. This model, written in the TensorFlow constructs such as:
```
h1 = tf.nn.relu(tf.matmul(l1, W1) + b1)
h2 = ...
```
is not actually executed when the Python is run. Instead, what's actually created is a [dataflow graph](https://www.tensorflow.org/versions/master/how_tos/graph_viz/index.html) that says to take particular inputs, apply particular operations, supply the results as the inputs to other operations, and so on. *This model is executed by fast C++ code, and for the most part, the data going between operations is never copied back to the Python code*.
Then the programmer "drives" the execution of this model by pulling on nodes -- for training, usually in Python, and for serving, sometimes in Python and sometimes in raw C++:
```
sess.run(eval_results)
```
This one Python (or C++ function call) uses either an in-process call to C++ or an [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call) for the distributed version to call into the C++ TensorFlow server to tell it to execute, and then copies back the results.
**So, with that said, let's re-phrase the question: Why did TensorFlow choose Python as the first well-supported language for expressing and controlling the training of models?**
The answer to that is simple: Python is probably *the* most comfortable language for a large range of data scientists and machine learning experts that's also that easy to integrate and have control a C++ backend, while also being general, widely-used both inside and outside of Google, and open source. Given that with the basic model of TensorFlow, the performance of Python isn't that important, it was a natural fit. It's also a huge plus that [NumPy](http://www.numpy.org/) makes it easy to do pre-processing in Python -- also with high performance -- before feeding it in to TensorFlow for the truly CPU-heavy things.
There's also a bunch of complexity in expressing the model that isn't used when executing it -- shape inference (e.g., if you do matmul(A, B), what is the shape of the resulting data?) and automatic [gradient](https://en.wikipedia.org/wiki/Gradient) computation. It turns out to have been nice to be able to express those in Python, though I think in the long term they'll probably move to the C++ backend to make adding other languages easier.
(The hope, of course, is to support other languages in the future for creating and expressing models. It's already quite straightforward to run inference using several other languages -- C++ works now, someone from Facebook contributed [Go](https://golang.org/) bindings that we're reviewing now, etc.) |
difference between two regular expressions: [abc]+ and ([abc])+ | 35,677,905 | 6 | 2016-02-28T02:11:45Z | 35,678,152 | 7 | 2016-02-28T02:55:03Z | [
"python",
"regex"
] | ```
In [29]: re.findall("([abc])+","abc")
Out[29]: ['c']
In [30]: re.findall("[abc]+","abc")
Out[30]: ['abc']
```
Confused by the grouped one. How does it make difference? | There are two things that need to be explained here: the behavior of quantified groups, and the design of the `findall()` method.
In your first example, `[abc]` matches the `a`, which is captured in group #1. Then it matches `b` and captures it in group #1, overwriting the `a`. Then again with the `c`, and that's what's left in group #1 at the end of the match.
But it *does* match the whole string. If you were using `search()` or `finditer()`, you would be able to look at the MatchObject and see that `group(0)` contains `abc` and `group(1)` contains `c`. But `findall()` returns strings, not MatchObjects. If there are no groups, it returns a list of the overall matches; if there are groups, the list contains all the captures, but *not* the overall match.
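A quick sketch with `search()` makes both values visible:

```python
import re

m = re.search(r"([abc])+", "abc")
print(m.group(0))  # 'abc' -- the overall match
print(m.group(1))  # 'c'   -- group 1 keeps only the last repetition
```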
So both of your regexes are matching the whole string, but the first one is also capturing and discarding each character individually (which is kinda pointless). It's only the unexpected behavior of `findall()` that makes it look like you're getting different results. |
How does the "all" function in Python work? | 35,685,768 | 5 | 2016-02-28T17:26:29Z | 35,685,800 | 9 | 2016-02-28T17:29:31Z | [
"python",
"python-3.x"
] | I searched for understanding about the [`all`](https://docs.python.org/2/library/functions.html#all) function in Python, and I found [this](http://stackoverflow.com/questions/19389490/how-pythons-any-and-all-functions-work), according to here:
> [`all`](https://docs.python.org/2/library/functions.html#all) will return `True` only when all the elements are Truthy.
But when I work with this function it's acting differently:
```
'?' == True # False
'!' == True # False
all(['?','!']) # True
```
Why is it that when all elements in input are `False` it returns `True`? Did I misunderstand its functionality or is there an explanation? | > only when all the elements are *Truthy*.
Truthy != `True`.
`all` essentially checks whether `bool(something)` is `True` (for all `something`s in the iterable).
```
>>> "?" == True
False
>>> "?" == False # it's not False either
False
>>> bool("?")
True
``` |
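To make the distinction concrete, here is a small sketch (mine) contrasting truthiness with equality to `True`:

```python
values = ['?', '!']

# Non-empty strings are truthy, so all() is satisfied...
assert all(values)

# ...even though neither element compares equal to True
assert not any(v == True for v in values)

# An empty string is falsy, so it breaks all()
assert not all(['?', ''])
```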
Using a pre-trained word embedding (word2vec or Glove) in TensorFlow | 35,687,678 | 21 | 2016-02-28T20:11:43Z | 35,688,187 | 23 | 2016-02-28T20:59:12Z | [
"python",
"numpy",
"tensorflow",
"deep-learning"
] | I've recently reviewed an interesting implementation for [convolutional text classification](http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/). However all TensorFlow code I've reviewed uses a random (not pre-trained) embedding vectors like the following:
```
with tf.device('/cpu:0'), tf.name_scope("embedding"):
W = tf.Variable(
tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
name="W")
self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x)
self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)
```
Does anybody know how to use the results of Word2vec or a GloVe pre-trained word embedding instead of a random one? | There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called `embedding`, with `vocab_size` rows and `embedding_dim` columns and you want to create a tensor `W` that can be used in a call to [`tf.nn.embedding_lookup()`](https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#embedding_lookup).
1. Simply create `W` as a [`tf.constant()`](https://www.tensorflow.org/versions/r0.7/api_docs/python/constant_op.html#constant) that takes `embedding` as its value:
```
W = tf.constant(embedding, name="W")
```
This is the easiest approach, but it is not memory efficient because the value of a `tf.constant()` is stored multiple times in memory. Since `embedding` can be very large, you should only use this approach for toy examples.
2. Create `W` as a `tf.Variable` and initialize it from the NumPy array via a [`tf.placeholder()`](https://www.tensorflow.org/versions/r0.7/api_docs/python/io_ops.html#placeholder):
```
W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),
trainable=False, name="W")
embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])
embedding_init = W.assign(embedding_placeholder)
# ...
sess = tf.Session()
sess.run(embedding_init, feed_dict={embedding_placeholder: embedding})
```
This avoids storing a copy of `embedding` in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the `tf.Variable`). Note that I've assumed that you want to hold the embedding matrix constant during training, so `W` is created with `trainable=False`.
3. If the embedding was trained as part of another TensorFlow model, you can use a [`tf.train.Saver`](https://www.tensorflow.org/versions/r0.7/api_docs/python/state_ops.html#Saver) to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create `W` as in option 2, then do the following:
```
W = tf.Variable(...)
embedding_saver = tf.train.Saver({"name_of_variable_in_other_model": W})
# ...
sess = tf.Session()
embedding_saver.restore(sess, "checkpoint_filename.ckpt")
``` |
How to do a column sum in Tensorflow? | 35,689,671 | 5 | 2016-02-28T23:34:15Z | 35,689,756 | 7 | 2016-02-28T23:42:13Z | [
"python",
"numpy",
"tensorflow"
] | What is the equivalent of the following in Tensorflow?
```
np.sum(A, axis=1)
There is `tf.reduce_sum`, which is a bit more powerful tool for doing so.
<https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/math_ops.md#tfreduce_suminput_tensor-reduction_indicesnone-keep_dimsfalse-namenone-reduce_sum>
```
# 'x' is [[1, 1, 1]
# [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
``` |
Find first item with alphabetical precedence in list with numbers | 35,691,994 | 9 | 2016-02-29T04:29:34Z | 35,692,044 | 8 | 2016-02-29T04:34:59Z | [
"python",
"list",
"python-3.x",
"for-loop"
] | Say I have a list object occupied with both numbers and strings. If I want to retrieve the first string item with the highest alphabetical precedence, how would I do so?
Here is an example attempt which is clearly incorrect, but corrections as to what needs to be changed in order for it to achieve the desired result would be greatly appreciated:
```
lst = [12, 4, 2, 15, 3, 'ALLIGATOR', 'BEAR', 'ANTEATER', 'DOG', 'CAT']
lst.sort()
for i in lst:
if i[0] == "A":
answer = i
print(answer)
``` | IIUC you could use [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance) to get sublist of your original list with only strings, then with `sorted` get first element by alphabetical sorting:
```
sub_lst = [i for i in lst if isinstance(i, str)]
result = sorted(sub_lst)[0]
print(sub_lst)
['ALLIGATOR', 'BEAR', 'ANTEATER', 'DOG', 'CAT']
print(result)
'ALLIGATOR'
```
Or you could use `min` as @TigerhawkT3 suggested in the comment:
```
print(min(sub_lst))
'ALLIGATOR'
``` |
Find first item with alphabetical precedence in list with numbers | 35,691,994 | 9 | 2016-02-29T04:29:34Z | 35,692,075 | 15 | 2016-02-29T04:38:12Z | [
"python",
"list",
"python-3.x",
"for-loop"
] | Say I have a list object occupied with both numbers and strings. If I want to retrieve the first string item with the highest alphabetical precedence, how would I do so?
Here is an example attempt which is clearly incorrect, but corrections as to what needs to be changed in order for it to achieve the desired result would be greatly appreciated:
```
lst = [12, 4, 2, 15, 3, 'ALLIGATOR', 'BEAR', 'ANTEATER', 'DOG', 'CAT']
lst.sort()
for i in lst:
if i[0] == "A":
answer = i
print(answer)
``` | First use a [generator expression](https://docs.python.org/3/glossary.html#term-generator-expression) to filter out non-strings, and then use [`min()`](https://docs.python.org/3/library/functions.html#min) to select the string with the highest alphabetical presence:
```
>>> min(x for x in lst if isinstance(x, str))
'ALLIGATOR'
``` |
Check if an object (with certain properties values) not in list | 35,694,596 | 6 | 2016-02-29T08:09:37Z | 35,694,700 | 7 | 2016-02-29T08:16:21Z | [
"python",
"python-2.7"
] | I am new in Python. I am using Python v2.7.
I have defined a simple class `Product`:
```
class Product:
def __init__(self, price, height, width):
self.price = price
self.height = height
self.width = width
```
Then, I created a list, which is then appended with a `Product` object:
```
# empty list
prod_list = []
# append a product to the list, all properties have value 3
prod1 = Product(3,3,3)
prod_list.append(prod1)
```
Then, I created another `Product` object which is set the same initialize values (all 3):
```
prod2 = Product(3,3,3)
```
Then, I want to check if `prod_list` **doesn't** contain a `Product` object that has price=3, width=3 & height=3, by:
```
if prod2 not in prod_list:
print("no product in list has price=3, width=3 & height=3")
```
I expect there is no print out message, but it is printed out. In Python, how can I check if a list doesn't have an object with certain property values then? | You need to add an `equality` attribute to your object. For getting the objects attributes you can pass the attribute names to `operator.attrgetter` which returns a tuple of gotten attributes then you can compare the tuples. Also you can use `__dict__` attribute which will gives you the moduleâs namespace as a dictionary object. Then you can get the attributes names which you want to compare the objects based on them.
```
from operator import attrgetter
class Product:
def __init__(self, price, height, width):
self.price = price
self.height = height
self.width = width
def __eq__(self, val):
attrs = ('width', 'price', 'height')
return attrgetter(*attrs)(self) == attrgetter(*attrs)(val)
def __ne__(self, val):
attrs = ('width', 'price', 'height')
return attrgetter(*attrs)(self) != attrgetter(*attrs)(val)
```
Edit:
As @Ashwini mentioned in a comment, based on the Python docs:
> There are no implied relationships among the comparison operators. The
> truth of `x==y` does not imply that `x!=y` is false. Accordingly, when
> defining `__eq__()`, one should also define `__ne__()` so that the
> operators will behave as expected.
So, to be more comprehensive, I also added the `__ne__` method to the object, which returns True if any of the attributes is not equal to its counterpart in the other object.
Demo:
```
prod_list = []
prod1 = Product(3, 3, 3)
prod_list.append(prod1)
prod2 = Product(3, 3, 2)
prod_list.append(prod2)
prod3 = Product(3, 3, 3)
print prod3 in prod_list
True
prod3 = Product(3, 3, 5)
print prod3 in prod_list
False
``` |
How to quickly find first multiple of 2 of list element in list of large integers? | 35,700,373 | 6 | 2016-02-29T13:06:54Z | 35,700,525 | 7 | 2016-02-29T13:14:21Z | [
"python",
"python-3.x",
"math"
] | I have a list of approximately 100 000 sorted even integers in the range 10^12 to 10^14. My goal is to find the first integer `x` in the list such that `x*2` is also a member of the list. As my list is rather long speed is very important.
My first thought was to just iterate over the list and check whether each element when multiplied by 2 was also in the list, but upon implementing this it is clear this is too slow for my purposes.
My next thought was to decompose each element in the list into its prime decomposition with SymPy's `factorint`, and then search my decomposed list for the same decomposition except with an extra 2. This didn't turn out to be any quicker obviously, but I feel like there must be a way using prime decomposition, if not something else. | You can iterate on your list with two iterators: one pointing to the current element and one pointing to the first one greater or equal to it's double. This will be O(N).
Here is a draft of the idea:
```
l = [1, 3, 5, 7, 10, 12, 15]
# ...
j = 0
for i in range(0, len(l)):
while l[j] < 2*l[i]:
j += 1
if j == len(l):
return -1
if l[j] == 2*l[i]:
return i
```
**Edit:** Following comments in another answer, a few performance tests show that this version will be much faster (3 times in my tests) by eliminating multiplications, calls to `len`, and reducing the number of item retrievals from the list:
```
j = 0
s = len(l)
for i in range(0, s):
l_i = l[i]
l_i2 = l_i<<1
while l[j] < l_i2:
j += 1
if j == s:
return -1
if l[j] == l_i2:
return i
``` |
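The drafts above use bare `return` statements, so they assume an enclosing function; a self-contained version of the same idea might look like this (`first_doubled_index` is my name for it):

```python
def first_doubled_index(l):
    """Return the index of the first element of sorted list l whose double
    is also in l, or -1 if there is none."""
    j = 0
    for i in range(len(l)):
        target = l[i] << 1          # 2 * l[i] without a multiplication
        # advance j until l[j] >= target; j never moves backwards,
        # which is valid because l is sorted, so targets only increase
        while l[j] < target:
            j += 1
            if j == len(l):
                return -1
        if l[j] == target:
            return i
    return -1

assert first_doubled_index([1, 3, 5, 7, 10, 12, 15]) == 2   # 5 * 2 == 10
assert first_doubled_index([2, 3, 7]) == -1
```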
Printing File Names | 35,702,996 | 6 | 2016-02-29T15:14:20Z | 35,703,151 | 9 | 2016-02-29T15:22:09Z | [
"python",
"python-3.x"
] | I am very new to `python` and just installed `Eric6` I am wanting to search a folder (and all sub dirs) to print the filename of any file that has the extension of `.pdf` I have this as my syntax, but it errors saying
> The debugged program raised the exception unhandled FileNotFoundError
> "[WinError 3] The system can not find the path specified 'C:'"
> File: C:\Users\pcuser\EricDocs\Test.py, Line: 6
And this is the syntax I want to execute:
```
import os
results = []
testdir = "C:\Test"
for folder in testdir:
for f in os.listdir(folder):
if f.endswith('.pdf'):
results.append(f)
print (results)
``` | Use the [`glob`](https://docs.python.org/3/library/glob.html) module.
> The glob module finds all the pathnames matching a specified pattern
```
import glob, os
parent_dir = 'path/to/dir'
for pdf_file in glob.glob(os.path.join(parent_dir, '*.pdf')):
print (pdf_file)
```
This will work on Windows and \*nix platforms.
---
Just make sure that your path is fully escaped on windows, could be useful to use a raw string.
In your case, that would be:
```
import glob, os
parent_dir = r"C:\Test"
for pdf_file in glob.glob(os.path.join(parent_dir, '*.pdf')):
print (pdf_file)
```
---
For only a list of filenames (not full paths, as per your comment) you can do this one-liner:
```
results = [os.path.basename(f) for f in glob.glob(os.path.join(parent_dir, '*.pdf'))]
``` |
Why is there a performance difference between the order of a nested loop? | 35,710,346 | 11 | 2016-02-29T21:47:22Z | 35,710,710 | 12 | 2016-02-29T22:11:10Z | [
"python",
"loops",
"python-2.x"
] | I have a process that loops through two lists, one being relatively large while the other being significantly smaller.
Example:
```
larger_list = list(range(15000))
smaller_list = list(range(2500))
for ll in larger_list:
for sl in smaller_list:
pass
```
I scaled the sized down of the lists to test performance, and I noticed there is a decent difference between which list is looped through first.
```
import timeit
larger_list = list(range(150))
smaller_list = list(range(25))
def large_then_small():
for ll in larger_list:
for sl in smaller_list:
pass
def small_then_large():
for sl in smaller_list:
for ll in larger_list:
pass
print('Larger -> Smaller: {}'.format(timeit.timeit(large_then_small)))
print('Smaller -> Larger: {}'.format(timeit.timeit(small_then_large)))
>>> Larger -> Smaller: 114.884992572
>>> Smaller -> Larger: 98.7751009799
```
At first glance, they look identical - however there is 16 second difference between the two functions.
Why is that? | When you disassemble one of your functions you get:
```
>>> dis.dis(small_then_large)
2 0 SETUP_LOOP 31 (to 34)
3 LOAD_GLOBAL 0 (smaller_list)
6 GET_ITER
>> 7 FOR_ITER 23 (to 33)
10 STORE_FAST 0 (sl)
3 13 SETUP_LOOP 14 (to 30)
16 LOAD_GLOBAL 1 (larger_list)
19 GET_ITER
>> 20 FOR_ITER 6 (to 29)
23 STORE_FAST 1 (ll)
4 26 JUMP_ABSOLUTE 20
>> 29 POP_BLOCK
>> 30 JUMP_ABSOLUTE 7
>> 33 POP_BLOCK
>> 34 LOAD_CONST 0 (None)
37 RETURN_VALUE
>>>
```
Looking at address 29 & 30, it looks like these will execute every time the inner loop ends. The two loops look basically the same, but these two instructions are executed each time the inner loop exits. Having the smaller number on the inside would cause these to be executed more often, hence increasing the time (vs the larger number on the inner loop). |
Celery tasks immediately autodiscover | 35,711,235 | 4 | 2016-02-29T22:43:14Z | 35,735,471 | 7 | 2016-03-01T23:10:55Z | [
"python",
"celery"
] | I am trying to build an application on top of Celery framework.
I have a module `settings/celery_settings.py` with the code that initializes Celery application like this (I expand some variables):
```
from __future__ import absolute_import
from celery import Celery
pfiles = ['other_tasks.test123', 'balance_log.balance_log']
app = Celery('myapp')
# here I just have some parameters defined like broker, result backend, etc
# app.config_from_object(settings)
# TRYING to discover tasks
app.autodiscover_tasks(pfiles)
```
Files `other_tasks/test123.py` and `balance_log/balance_log.py` contain task definitions like these:
```
# file other_tasks/test123.py
from celery import shared_task, Task
@shared_task()
def mytask():
print("Test 1234!")
class TestTask01(Task):
def run(self, client_id=None):
logger.debug("TestTask01: run")
return client_id
```
I run celery worker:
```
python3 /usr/local/bin/celery -A settings.celery_settings worker
```
And this way it **can discover** tasks. I can call these tasks.
But then I try to use IPython:
```
In [1]: from settings.celery_settings import app
In [2]: app.tasks
Out[2]:
{'celery.backend_cleanup': <@task: celery.backend_cleanup of XExchange:0x7f9f50267ac8>,
'celery.chain': <@task: celery.chain of XExchange:0x7f9f50267ac8>,
'celery.chord': <@task: celery.chord of XExchange:0x7f9f50267ac8>,
'celery.chord_unlock': <@task: celery.chord_unlock of XExchange:0x7f9f50267ac8>,
'celery.chunks': <@task: celery.chunks of XExchange:0x7f9f50267ac8>,
'celery.group': <@task: celery.group of XExchange:0x7f9f50267ac8>,
'celery.map': <@task: celery.map of XExchange:0x7f9f50267ac8>,
'celery.starmap': <@task: celery.starmap of XExchange:0x7f9f50267ac8>}
```
And apparently it **does not** discover the tasks.
It seems that when I call tasks explicitly, I first import them and specify exact path for the celery while call. That's why it works.
Question: how do I make them discovered to have a list of known tasks? | Finally I figured out that there is an additional parameter for `autodiscover_tasks` function:
```
def autodiscover_tasks(self, packages, related_name='tasks', force=False):
...
```
So, after setting `force=True` it started to work!
```
app.autodiscover_tasks(pfiles, force=True)
``` |
Zeppelin: Scala Dataframe to python | 35,719,142 | 3 | 2016-03-01T09:32:52Z | 35,720,597 | 8 | 2016-03-01T10:35:45Z | [
"python",
"apache-spark",
"pyspark",
"apache-zeppelin"
] | If I have a Scala paragraph with a DataFrame, can I share and use that with python. (As I understand it pyspark uses [py4j](https://www.py4j.org/advanced_topics.html#array))
I tried this:
Scala paragraph:
```
x.printSchema
z.put("xtable", x )
```
Python paragraph:
```
%pyspark
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
the_data = z.get("xtable")
print the_data
sns.set()
g = sns.PairGrid(data=the_data,
x_vars=dependent_var,
y_vars=sensor_measure_columns_names + operational_settings_columns_names,
hue="UnitNumber", size=3, aspect=2.5)
g = g.map(plt.plot, alpha=0.5)
g = g.set(xlim=(300,0))
g = g.add_legend()
```
Error :
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark.py", line 222, in <module>
eval(compiledCode)
File "<string>", line 15, in <module>
File "/usr/local/lib/python2.7/dist-packages/seaborn/axisgrid.py", line 1223, in __init__
hue_names = utils.categorical_order(data[hue], hue_order)
TypeError: 'JavaObject' object has no attribute '__getitem__'
```
Solution:
```
%pyspark
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import StringIO
def show(p):
img = StringIO.StringIO()
p.savefig(img, format='svg')
img.seek(0)
print "%html <div style='width:600px'>" + img.buf + "</div>"
df = sqlContext.table("fd").select()
df.printSchema
pdf = df.toPandas()
g = sns.pairplot(data=pdf,
x_vars=["setting1","setting2"],
y_vars=["s4", "s3",
"s9", "s8",
"s13", "s6"],
hue="id", aspect=2)
show(g)
```
[](http://i.stack.imgur.com/fjMyv.png) | You can register `DataFrame` as a temporary table in Scala:
```
df.registerTempTable("df")
```
and read it in Python with `SQLContext.table`:
```
df = sqlContext.table("df")
```
If you really want to use `put` / `get` you'll have to build a Python `DataFrame` from scratch:
```
z.put("df", df: org.apache.spark.sql.DataFrame)
```
```
from pyspark.sql import DataFrame
df = DataFrame(z.get("df"), sqlContext)
```
To plot with `matplotlib` you'll have to convert the `DataFrame` to a local Python object with either `collect` or `toPandas`:
```
pdf = df.toPandas()
```
Please note that it will fetch data to the driver.
See also [moving Spark DataFrame from Python to Scala within Zeppelin](http://stackoverflow.com/q/37263300/1560062)
Find tuple structure containing an unknown value inside a list | 35,719,560 | 34 | 2016-03-01T09:49:39Z | 35,719,663 | 71 | 2016-03-01T09:53:41Z | [
"python",
"list",
"tuples"
] | Say I have list of tuples:
```
list = [(1,5), (1,7), (2,3)]
```
Is there a way in Python to write something like
```
if (1, *) in list: do things
```
where `*` means "*I donât care about this value*"? So we are checking if there is a tuple with `1` at the first position and with whatever value on the second one.
As far as I know there are special mechanisms in other languages, but I just donât know the name of this particular problem. So is there similar behavior in Python?
*P.S.:* I know that I can use list comprehensions here. I am just interested in this particular mechanism. | You can use the [`any()` function](https://docs.python.org/2/library/functions.html#any):
```
if any(t[0] == 1 for t in yourlist):
```
This efficiently tests and exits early if `1` is found in the first position of a tuple. |
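For example (my own quick check):

```python
yourlist = [(1, 5), (1, 7), (2, 3)]

assert any(t[0] == 1 for t in yourlist)       # a tuple starting with 1 exists
assert not any(t[0] == 9 for t in yourlist)   # none start with 9
```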
Find tuple structure containing an unknown value inside a list | 35,719,560 | 34 | 2016-03-01T09:49:39Z | 35,720,104 | 10 | 2016-03-01T10:13:21Z | [
"python",
"list",
"tuples"
] | Say I have list of tuples:
```
list = [(1,5), (1,7), (2,3)]
```
Is there a way in Python to write something like
```
if (1, *) in list: do things
```
where `*` means "*I donât care about this value*"? So we are checking if there is a tuple with `1` at the first position and with whatever value on the second one.
As far as I know there are special mechanisms in other languages, but I just donât know the name of this particular problem. So is there similar behavior in Python?
*P.S.:* I know that I can use list comprehensions here. I am just interested in this particular mechanism. | *Not all of my solution methods provided below will be necessarily efficient. My goal is to demonstrate every possible solution method I can think of - at the end of my answer I provide "benchmark" results to show why or why not you should use one certain method over another. I believe that is a good way of learning, and I will shamelessly encourage such learning in my answers.*
---
### Subset + hash `set`s
```
>>> a_list = [(1,5), (1,7), (2,3)]
>>>
>>> set([l[0] for l in a_list])
{1, 2}
>>>
>>> 1 in set([l[0] for l in a_list])
True
```
---
### `map()`, and anonymous functions
```
>>> a_list = [(1,5), (1,7), (2,3)]
>>>
>>> map(lambda x: x[0] == 1, a_list)
[True, True, False]
>>>
>>> True in set(map(lambda x: x[0] == 1, a_list))
True
```
---
### `filter` and anonymous functions
```
>>> a_list = [(1,5), (1,7), (2,3)]
>>>
>>> filter(lambda x: x[0] == 1, a_list)
[(1,5), (1,7)]
>>>
>>> len(filter(lambda x: x[0] == 1, a_list)) > 0 # non-empty list
True
```
---
## MICROBENCHMARKS
### Conditions
* 1000 items
* 100K repetition
* 0-100 random range
* Python 2.7.10, IPython 2.3.0
### Script
```
from pprint import pprint
from random import randint
from timeit import timeit
N_ITEMS = 1000
N_SIM = 1 * (10 ** 5) # 100K = 100000
a_list = [(randint(0, 100), randint(0, 100)) for _ in range(N_ITEMS)]
set_membership_list_comprehension_time = timeit(
"1 in set([l[0] for l in a_list])",
number = N_SIM,
setup="from __main__ import a_list"
)
bool_membership_map_time = timeit(
"True in set(map(lambda x: x[0] == 1, a_list))",
number = N_SIM,
setup="from __main__ import a_list"
)
nonzero_length_filter_time = timeit(
"len(filter(lambda x: x[0] == 1, a_list)) > 0",
number = N_SIM,
setup="from __main__ import a_list"
)
any_list_comprehension_time = timeit(
"any(t[0] == 1 for t in a_list)",
number = N_SIM,
setup="from __main__ import a_list"
)
results = {
"any(t[0] == 1 for t in a_list)": any_list_comprehension_time,
"len(filter(lambda x: x[0] == 1, a_list)) > 0": nonzero_length_filter_time,
"True in set(map(lambda x: x[0] == 1, a_list))": bool_membership_map_time,
"1 in set([l[0] for l in a_list])": set_membership_list_comprehension_time
}
pprint(
sorted(results.items(), key = lambda x: x[1])
)
```
### Results (in seconds)
```
[('any(t[0] == 1 for t in a_list)', 2.6685791015625), # winner - Martijn
('1 in set([l[0] for l in a_list])', 4.85234808921814),
('len(filter(lambda x: x[0] == 1, a_list)) > 0', 7.11224889755249),
('True in set(map(lambda x: x[0] == 1, a_list))', 10.343087911605835)]
```
*Who's got the last laugh now? ... Martijn (at least I tried)*
*MORAL OF THE STORY: Don't spend more than 10 minutes "proving" your inferior solution is faster and more efficient on a small test data, when another user's answer is the de-facto correct one* |
Find tuple structure containing an unknown value inside a list | 35,719,560 | 34 | 2016-03-01T09:49:39Z | 35,720,213 | 33 | 2016-03-01T10:18:59Z | [
"python",
"list",
"tuples"
] | Say I have list of tuples:
```
list = [(1,5), (1,7), (2,3)]
```
Is there a way in Python to write something like
```
if (1, *) in list: do things
```
where `*` means "*I donât care about this value*"? So we are checking if there is a tuple with `1` at the first position and with whatever value on the second one.
As far as I know there are special mechanisms in other languages, but I just donât know the name of this particular problem. So is there similar behavior in Python?
*P.S.:* I know that I can use list comprehensions here. I am just interested in this particular mechanism. | A placeholder object like you're asking for isn't supported natively, but you can make something like that yourself:
```
class Any(object):
def __eq__(self, other):
return True
ANYTHING = Any()
lst = [(1,5), (1,7), (2,3)]
```
The `__eq__` method defines how two objects test for equality. (See <https://docs.python.org/3/reference/datamodel.html> for details.) Here, `ANYTHING` will always test positive for equality with any object. (Unless that object also overrode `__eq__` in a way to return False.)
The `in` operator merely calls `__eq__` for each element in your list. I.e. `a in b` does something like:
```
for elem in b:
if elem == a:
return True
```
This means that, if you say `(1, ANYTHING) in lst`, Python will first compare `(1, ANYTHING)` to the first element in `lst`. (Tuples, in turn, define `__eq__` to return True if all its elements' `__eq__` return True. I.e. `(x, y) == (a, b)` is equivalent to `x==a and y==b`, or `x.__eq__(a) and y.__eq__(b)`.)
Hence, `(1, ANYTHING) in lst` will return True, while `(3, ANYTHING) in lst` will return False.
Also, note that I renamed your list `lst` instead of `list` to prevent name clashes with the Python built-in `list`. |
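Exercising the `Any` class defined above (the assertions are mine):

```python
class Any(object):
    def __eq__(self, other):
        return True

ANYTHING = Any()
lst = [(1, 5), (1, 7), (2, 3)]

# Tuple equality compares element-wise, and ANYTHING == x is always True
assert (1, ANYTHING) in lst
assert (3, ANYTHING) not in lst   # no tuple has 3 in the first position
```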
Import Error: Google Analytics API Authorization | 35,733,897 | 4 | 2016-03-01T21:28:46Z | 35,734,122 | 7 | 2016-03-01T21:42:57Z | [
"python",
"oauth-2.0",
"google-api",
"google-oauth",
"google-analytics-api"
] | I'm trying to run the sample provided here <https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/service-py> for authorization.
I've noticed from other questions in SO that ([ImportError: cannot import name SignedJwtAssertionCredentials](http://stackoverflow.com/questions/14063124/importerror-cannot-import-name-signedjwtassertioncredentials)) SignedJwtAssertionCredentials has been removed and therefore could not be imported.
So, I started to follow the solutions provided both on the GitHub page (<https://github.com/google/oauth2client/issues/401>) and StackOverflow. So far, nothing worked, I'm still seeing the same error. Following is my code.
```
import argparse
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
import httplib2
from oauth2client import client
from oauth2client import file
from oauth2client import tools
```
And, this is the error I'm receiving on running the above code.
```
ImportError: cannot import name ServiceAccountCredentials
```
As I'm a complete newbie in this space, I tried to do this for both versions of `OAuth` (2.0.0 and 1.5.2). I also tried it after installing `pyopenssl`, but still failed. | It seems oauth2client installation is unsuccessful. Try
> pip install --upgrade google-api-python-client |
String with 'f' prefix in python-3.6 | 35,745,050 | 9 | 2016-03-02T10:49:29Z | 35,745,079 | 11 | 2016-03-02T10:50:52Z | [
"python",
"python-3.x",
"scope",
"string-formatting",
"python-3.6"
] | I'm trying out Python 3.6. Going through new code, I stumbled upon this new syntax:
```
f"My formatting string!"
```
It seems we can do things like this:
```
>>> name = "George"
>>> print(f"My cool string is called {name}.")
My cool string is called George.
```
Can anyone shed some light on the inner workings of this? In particular what is the scope of the variables that an f-prefixed string can take? | See [PEP 498 *Literal String Interpolation*](https://www.python.org/dev/peps/pep-0498/):
> The expressions that are extracted from the string are evaluated in the context where the f-string appeared. This means the expression has full access to local and global variables. Any valid Python expression can be used, including function and method calls.
So the expressions are evaluated as if they appear in the same scope; locals, closures, and globals all work the same as in other code in the same context.
You'll find more details in the [reference documentation](https://docs.python.org/3.6/reference/lexical_analysis.html#f-strings):
> Expressions in formatted string literals are treated like regular Python expressions surrounded by parentheses, with a few exceptions. An empty expression is not allowed, and a `lambda` expression must be surrounded by explicit parentheses. Replacement expressions can contain line breaks (e.g. in triple-quoted strings), but they cannot contain comments. Each expression is evaluated in the context where the formatted string literal appears, in order from left to right.
Since you are trying out a 3.6 alpha build, please do read the [*What's New In Python 3.6* documentation](https://docs.python.org/3.6/whatsnew/3.6.html). It summarises all changes, including links to the relevant documentation and PEPs.
And just to be clear: 3.6 isn't released *yet*; the first alpha is not expected to be released until May 2016. See the [3.6 release schedule](https://www.python.org/dev/peps/pep-0494/). |
list comprehension with concurrent loops python | 35,752,287 | 5 | 2016-03-02T16:03:50Z | 35,752,500 | 7 | 2016-03-02T16:12:35Z | [
"python",
"loops",
"list-comprehension"
] | Simple question as i just want to write more pythonic code. I want to convert the following into a list comprehension
```
index_row = 0
for row in stake_year.iterrows():
self.assertTrue(row[0] == counts[index_row][0])
self.assertTrue(row[1][0] == counts[index_row][1])
index_row += 1
```
What i don't understand is how to walk through the counts list. I don't want a nested for like:
```
[self.assertTrue(x[0] == counts[y][0] for x in stake_year for y in counts]
```
The code i have now is working but I'd like to understand python better and use the language as it should be used. | The more pythonic way to use in your case is to use [**`enumerate`**](https://docs.python.org/2/library/functions.html#enumerate):
```
for index_row, row in enumerate(stake_year.iterrows()):
self.assertTrue(row[0] == counts[index_row][0])
self.assertTrue(row[1][0] == counts[index_row][1])
``` |
Alpine 3.3, Python 2.7.11, urllib2 causing SSL: CERTIFICATE_VERIFY_FAILED | 35,762,510 | 6 | 2016-03-03T03:13:52Z | 35,763,125 | 7 | 2016-03-03T04:20:25Z | [
"python",
"ssl",
"alpine"
] | I have this small Dockerfile
```
FROM alpine:3.3
RUN apk --update add python
CMD ["python", "-c", "import urllib2; response = urllib2.urlopen('https://www.python.org')"]
```
Building it with `docker build -t alpine-py/01 .` and then running it with `docker run -it --rm alpine-py/01` creates the following output
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
```
Yesterday I got bitten by the recent OpenSSL 1.0.2g release, which caused `py-cryptography` to not compile. Luckily the guys from `py-cryptography` released a new version on PyPI a couple of hours later. The issue was that a function in OpenSSL got a new signature.
Could this be related or am I missing something? | You need to install ca-certificates to be able to validate signed certs by public CAs:
```
FROM alpine:3.3
RUN apk --no-cache add python ca-certificates
CMD ["python", "-c", "import urllib2; response = urllib2.urlopen('https://www.python.org')"]
``` |
Python: understanding class and instance variables | 35,766,834 | 54 | 2016-03-03T08:26:55Z | 35,767,155 | 38 | 2016-03-03T08:44:53Z | [
"python"
] | I think I have some misconception about class and instance variables. Here is an example code:
```
class Animal(object):
energy = 10
skills = []
def work(self):
print 'I do something'
self.energy -= 1
def new_skill(self, skill):
self.skills.append(skill)
if __name__ == '__main__':
a1 = Animal()
a2 = Animal()
a1.work()
print a1.energy # result:9
print a2.energy # result:10
a1.new_skill('bark')
a2.new_skill('sleep')
print a1.skills # result:['bark', 'sleep']
print a2.skills # result:['bark', 'sleep']
```
I thought that `energy` and `skill` were class variables, because I declared them out of any method. I modify its values inside the methods in the same way (with `self` in his declaration, maybe incorrect?). But the results show me that `energy` takes different values for each object (like a instance variable), while `skills` seems to be shared (like a class variable). I think I've missed something important... | The trick here is in understanding what `self.energy -= 1` does. It's really two expressions; one getting the value of `self.energy - 1`, and one assigning that back to `self.energy`.
But the thing that's confusing you is that the references are not interpreted the same way on both sides of that assignment. When Python is told to get `self.energy`, it tries to find that attribute on the instance, fails, and falls back to the class attribute. However, when it assigns to `self.energy`, it will always assign to an instance attribute, even though that hadn't previously existed. |
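You can watch this happen by inspecting the instance `__dict__` before and after the augmented assignment:

```python
class Animal(object):
    energy = 10

a1 = Animal()
assert 'energy' not in a1.__dict__   # no instance attribute yet
a1.energy -= 1                       # read falls back to the class; write creates an instance attribute
assert a1.__dict__['energy'] == 9
assert Animal.energy == 10           # the class attribute is untouched
```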
Python: understanding class and instance variables | 35,766,834 | 54 | 2016-03-03T08:26:55Z | 35,767,632 | 22 | 2016-03-03T09:08:40Z | [
"python"
] | I think I have some misconception about class and instance variables. Here is an example code:
```
class Animal(object):
energy = 10
skills = []
def work(self):
print 'I do something'
self.energy -= 1
def new_skill(self, skill):
self.skills.append(skill)
if __name__ == '__main__':
a1 = Animal()
a2 = Animal()
a1.work()
print a1.energy # result:9
print a2.energy # result:10
a1.new_skill('bark')
a2.new_skill('sleep')
print a1.skills # result:['bark', 'sleep']
print a2.skills # result:['bark', 'sleep']
```
I thought that `energy` and `skill` were class variables, because I declared them out of any method. I modify its values inside the methods in the same way (with `self` in his declaration, maybe incorrect?). But the results show me that `energy` takes different values for each object (like a instance variable), while `skills` seems to be shared (like a class variable). I think I've missed something important... | Upon initial creation both attributes are the same object:
```
>>> a1 = Animal()
>>> a2 = Animal()
>>> a1.energy is a2.energy
True
>>> a1.skills is a2.skills
True
>>> a1 is a2
False
```
When you *assign* to a `class` attribute, it is made local to the instance:
```
>>> id(a1.energy)
31346816
>>> id(a2.energy)
31346816
>>> a1.work()
I do something
>>> id(a1.energy)
31346840 # id changes as attribute is made local to instance
>>> id(a2.energy)
31346816
```
The `new_skill()` method does not *assign* a new value to the `skills` array, but rather it `appends` which modifies the list in place.
If you were to manually add a skill, then the `skills` list would be come local to the instance:
```
>>> id(a1.skills)
140668681481032
>>> a1.skills = ['sit', 'jump']
>>> id(a1.skills)
140668681617704
>>> id(a2.skills)
140668681481032
>>> a1.skills
['sit', 'jump']
>>> a2.skills
['bark', 'sleep']
```
Finally, if you were to delete the instance attribute `a1.skills`, the reference would revert back to the class attribute:
```
>>> a1.skills
['sit', 'jump']
>>> del a1.skills
>>> a1.skills
['bark', 'sleep']
>>> id(a1.skills)
140668681481032
``` |
Python: understanding class and instance variables | 35,766,834 | 54 | 2016-03-03T08:26:55Z | 35,777,981 | 31 | 2016-03-03T16:42:05Z | [
"python"
] | I think I have some misconception about class and instance variables. Here is an example code:
```
class Animal(object):
energy = 10
skills = []
def work(self):
print 'I do something'
self.energy -= 1
def new_skill(self, skill):
self.skills.append(skill)
if __name__ == '__main__':
a1 = Animal()
a2 = Animal()
a1.work()
print a1.energy # result:9
print a2.energy # result:10
a1.new_skill('bark')
a2.new_skill('sleep')
print a1.skills # result:['bark', 'sleep']
print a2.skills # result:['bark', 'sleep']
```
I thought that `energy` and `skill` were class variables, because I declared them out of any method. I modify its values inside the methods in the same way (with `self` in his declaration, maybe incorrect?). But the results show me that `energy` takes different values for each object (like a instance variable), while `skills` seems to be shared (like a class variable). I think I've missed something important... | You are running into initialization issues based around mutability.
**First**, the fix. `skills` and `energy` are class attributes.
It is a good practice to consider them as read only, as initial values for instance attributes. The classic way to build your class is:
```
class Animal(object):
energy = 10
skills = []
    def __init__(self, en=energy, sk=None):
        self.energy = en
        # a None sentinel avoids sharing one list between instances,
        # which a default of sk=skills would still do
        self.skills = [] if sk is None else sk
....
```
Then each instance will have its own attributes, all your problems will disappear.
**Second**, what's happening with this code?
Why is `skills` shared, when `energy` is per-instance?
The `-=` operator is subtle: it performs *in-place* assignment *if* possible. The difference here is that `list` types are mutable, so in-place modification often occurs:
```
In [6]:
b=[]
print(b,id(b))
b+=['strong']
print(b,id(b))
[] 201781512
['strong'] 201781512
```
So `a1.skills` and `a2.skills` are the same list, which is also accessible as `Animal.skills`. But `energy` is a non-mutable `int`, so modification is impossible. In this case a new `int` object is created, so each instance manages its own copy of the `energy` variable:
```
In [7]:
a=10
print(a,id(a))
a-=1
print(a,id(a))
10 1360251232
9 1360251200
``` |
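A self-contained sketch of the per-instance initialization idea (using a `None` sentinel for the list default, since a default of `sk=skills` would still share one list across all instances):

```python
class Animal(object):
    energy = 10

    def __init__(self, en=None, sk=None):
        # None sentinels avoid the shared mutable-default trap
        self.energy = Animal.energy if en is None else en
        self.skills = [] if sk is None else sk

    def work(self):
        self.energy -= 1

    def new_skill(self, skill):
        self.skills.append(skill)

a1, a2 = Animal(), Animal()
a1.work()
a1.new_skill('bark')
a2.new_skill('sleep')
assert (a1.energy, a2.energy) == (9, 10)
assert a1.skills == ['bark'] and a2.skills == ['sleep']  # no longer shared
```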
'For' loop behaviour in Python | 35,768,738 | 10 | 2016-03-03T09:55:38Z | 35,768,758 | 41 | 2016-03-03T09:56:19Z | [
"python",
"for-loop"
] | Why is the following simple loop not saving the value of `i` at the end of the loop?
```
for i in range( 1, 10 ):
print i
i = i + 3
```
The above prints:
```
1
2
3
4
5
6
7
8
9
```
But it should print:
```
1
4
7
``` | `for` **sets** `i` each iteration, to the next value from the object being iterated over. Whatever you set `i` to in the loop is ignored at that point.
From the [`for` statement documentation](https://docs.python.org/2/reference/compound_stmts.html#the-for-statement):
> The suite is then executed once for each item provided by the iterator, in the order of ascending indices. Each item in turn is assigned to the target list using the standard rules for assignments, and then the suite is executed.
`i` is the target list here, so it is assigned each value from the `range(1, 10)` object. Setting `i` to something else later on doesn't change what the `range(1, 10)` expression produced.
If you want to produce a loop where you alter `i`, use a `while` loop instead; it re-tests the condition each time through:
```
i = 1
while i < 10:
print i
i += 3
```
but it'll be easier to just use a `range()` with a step, producing the values *up front*:
```
for i in range(1, 10, 3):
print i
``` |
Can't install python module: PyCharm Error: "byte-compiling is disabled, skipping" | 35,781,342 | 4 | 2016-03-03T19:31:47Z | 36,241,973 | 10 | 2016-03-26T23:11:11Z | [
"python",
"python-2.7",
"pycharm"
] | I just installed PyCharm 5 for the first time and trying to get things working. I have a simple python script that tries to import pandas (import pandas as pd). It fails because the pandas isn't installed... So I go to install it and then get an error (copied below).
I tried looking for some "byte-compiling" setting in Preferences or Help but to no avail. I've already tried the workarounds suggested here involving changing the default project editor to Python 2.7, but that didn't help (<https://github.com/spacy-io/spaCy/issues/114>).
What do I do?
```
================= Error below =================
Executed command:"
/var/folders/kf/nd7950995gn25k6_xsh3tv6c0000gn/T/tmpgYwltUpycharm-management/pip-7.1.0/setup.py install
Error occurred:
40:357: execution error: warning: build_py: byte-compiling is disabled, skipping.
Command Output:
40:357: execution error: warning: build_py: byte-compiling is disabled, skipping.
warning: install_lib: byte-compiling is disabled, skipping.
error: byte-compiling is disabled.
(1)
``` | For all who have the same problem, it took me a while to find the solution in a new installation of PyCharm 5.
The problem is that you have to change the default interpreter that ships with PyCharm 5 (2.6 by default); it can differ from the Python version installed on your system.
Windows or Linux **File -> Settings -> Project Interpreter**
Mac **PyCharm -> Preferences -> Project Interpreter**
Select your Python version, and then you can install all the modules you need with
```
pip install ModuleName
```
I also recommend adding all the PATHs:
**Preferences -> Tools -> Terminal -> Shell path**: `echo $PATH` |
Matplotlib control which plot is on top | 35,781,612 | 3 | 2016-03-03T19:46:21Z | 35,781,743 | 8 | 2016-03-03T19:53:53Z | [
"python",
"matplotlib",
"plot"
] | I am wondering if there is a way to control which plot lies on top of other plots if one makes multiple plots on one axis. An example:
[](http://i.stack.imgur.com/1gniG.png)
As you can see, the green series is on top of the blue series, and both series are on top of the black dots (which I made with a scatter plot). I would like the black dots to be on top of both series (lines).
I first did the above with the following code
```
plt.plot(series1_x, series1_y)
plt.plot(series2_x, series2_y)
plt.scatter(series2_x, series2_y)
```
Then I tried the following
```
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(series1_x, series1_y)
ax2 = fig.add_subplot(111)
ax2.plot(series2_x, series2_y)
ax3 = fig.add_subplot(111)
ax3.scatter(series2_x, series2_y)
```
And some variations on that, but no luck.
Swapping around the `plot` functions has an effect on which plot is on top, but no matter where I put the `scatter` function, the lines are on top of the dots.
**NOTE:**
I am using Python 3.5 on Windows 10 (this example), but mostly Python 3.4 on Ubuntu.
**NOTE 2:**
I know this may seem like a trivial issue, but I have a case where the series on top of the dots are so dense that the colour of the dots gets obscured, and in those cases I need my readers to clearly see which dots are what colour, hence why I need the dots to be on top. | Use the [zorder kwarg](http://matplotlib.org/api/artist_api.html#matplotlib.artist.Artist.set_zorder): the lower the `zorder`, the further back the plot is drawn, e.g.
```
plt.plot(series1_x, series1_y, zorder=1)
plt.plot(series2_x, series2_y, zorder=2)
plt.scatter(series2_x, series2_y, zorder=3)
``` |
Regex - match a character and all its diacritic variations (aka accent-insensitive) | 35,783,135 | 3 | 2016-03-03T21:12:36Z | 35,783,136 | 7 | 2016-03-03T21:12:36Z | [
"python",
"regex",
"python-3.x",
"diacritics",
"accent-insensitive"
] | I am trying to match a character and all its possible diacritic variations (aka accent-insensitive) with a regular expression. What I could do of course is:
```
re.match(r"^[eéèêëẹẽēĕėęě]$", "é")
```
but that is not a general solution. If I use unicode categories like `\pL` I can't reduce the match to a specific character, in this case `e`. | A workaround to achieve the desired goal would be to use [unidecode](https://pypi.python.org/pypi/Unidecode) to get rid of all diacritics first, and then just match against the regular `e`
```
re.match(r"^e$", unidecode("é"))
```
Or in this simplified case
```
unidecode("é") == "e"
```
---
Another solution which doesn't depend on the unidecode-library, preserves unicode and gives more control is manually removing the diacritics as follows:
Use [unicodedata.normalize()](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize) to turn your input string into normal form D, making sure composite characters like `é` get turned into the decomposite form `e\u301` (e + COMBINING ACUTE ACCENT)
```
>>> input = "Héllô"
>>> input
'Héllô'
>>> normalized = unicodedata.normalize("NFKD", input)
>>> normalized
'He\u0301llo\u0302'
```
Then, remove all codepoints which fall into the category [Mark, Nonspacing](http://www.fileformat.info/info/unicode/category/Mn/list.htm) (short `Mn`). Those are all characters that have no width themselves and just decorate the previous character.
Use [unicodedata.category()](https://docs.python.org/3/library/unicodedata.html#unicodedata.category) to determine the category.
```
>>> stripped = "".join(c for c in normalized if unicodedata.category(c) != "Mn")
>>> stripped
'Hello'
```
The result can be used as a source for regex-matching, just as in the unidecode-example above.
Here's the whole thing as a function:
```
def remove_diacritics(text):
"""
Returns a string with all diacritics (aka non-spacing marks) removed.
For example "Héllô" will become "Hello".
Useful for comparing strings in an accent-insensitive fashion.
"""
normalized = unicodedata.normalize("NFKD", text)
return "".join(c for c in normalized if unicodedata.category(c) != "Mn")
``` |
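A quick accent-insensitive match using this helper might look like the following (the function is restated so the snippet runs standalone):

```python
import re
import unicodedata

def remove_diacritics(text):
    # same helper as above, repeated so this snippet is self-contained
    normalized = unicodedata.normalize("NFKD", text)
    return "".join(c for c in normalized if unicodedata.category(c) != "Mn")

# strip the marks from the haystack, then match plain ASCII patterns
assert remove_diacritics("H\u00e9ll\u00f4") == "Hello"
assert re.match(r"^e$", remove_diacritics("\u00e9")) is not None
```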
Why do dict keys support list subtraction but not tuple subtraction? | 35,784,258 | 14 | 2016-03-03T22:23:19Z | 35,784,866 | 15 | 2016-03-03T23:06:57Z | [
"python",
"python-3.x"
] | Presumably dict\_keys are supposed to behave as a set-like object, but they are lacking the `difference` method and the subtraction behaviour seems to diverge.
```
>>> d = {0: 'zero', 1: 'one', 2: 'two', 3: 'three'}
>>> d.keys() - [0, 2]
{1, 3}
>>> d.keys() - (0, 2)
TypeError: 'int' object is not iterable
```
Why does dict\_keys class try to iterate an integer here? Doesn't that violate duck-typing?
---
```
>>> dict.fromkeys(['0', '1', '01']).keys() - ('01',)
{'01'}
>>> dict.fromkeys(['0', '1', '01']).keys() - ['01',]
{'1', '0'}
``` | This looks to be a bug. [The implementation is to convert the `dict_keys` to a `set`, then call `.difference_update(arg)` on it.](https://hg.python.org/cpython/file/v3.5.1/Objects/dictobject.c#l3437)
It looks like they misused `_PyObject_CallMethodId` (an optimized variant of `PyObject_CallMethod`), by passing a format string of just `"O"`. [Thing is, `PyObject_CallMethod` and friends are documented to require a `Py_BuildValue` format string that "should produce a `tuple`"](https://docs.python.org/3/c-api/object.html#c.PyObject_CallMethod). With more than one format code, it wraps the values in a `tuple` automatically, but with only one format code, it doesn't `tuple`, it just creates the value (in this case, because it's already `PyObject*`, all it does is increment the reference count).
While I haven't tracked down where it might be doing this, I suspect somewhere in the internals it's identifying `CallMethod` calls that don't produce a `tuple` and wrapping them to make a one element `tuple` so the called function can actually receive the arguments in the expected format. When subtracting a `tuple`, it's already a `tuple`, and this fix up code never activates; when passing a `list`, it does, becoming a one element `tuple` containing the `list`.
`difference_update` takes varargs (as if it were declared `def difference_update(self, *args)`). So when it receives the unwrapped `tuple`, it thinks it's supposed to subtract away the elements from each entry in the `tuple`, not treat said entries as values to subtract away themselves. To illustrate, when you do:
```
mydict.keys() - (1, 2)
```
the bug is causing it to do (roughly):
```
result = set(mydict)
# We've got a tuple to pass, so all's well...
result.difference_update(*(1, 2)) # Unpack behaves like difference_update(1, 2)
# OH NO!
```
While:
```
mydict.keys() - [1, 2]
```
does:
```
result = set(mydict)
# [1, 2] isn't a tuple, so wrap
result.difference_update(*([1, 2],)) # Behaves like difference_update([1, 2])
# All's well
```
That's why a `tuple` of `str` works (incorrectly), `- ('abc', '123')` is performing a call equivalent to:
```
result.difference_update(*('abc', '123'))
# or without unpacking:
result.difference_update('abc', '123')
```
and since `str`s are iterables of their characters, it just blithely removes entries for `'a'`, `'b'`, `'c'`, etc. instead of `'abc'` and `'123'` like you expected.
Basically, this is a bug and (when I get a chance), I'll file it against the CPython folks.
The correct behavior probably should have been to call (assuming this `Id` variant exists for this API):
```
_PyObject_CallMethodObjArgsId(result, &PyId_difference_update, other, NULL);
```
which wouldn't have the packing issues at all, and would run faster to boot; the smallest change would be to change the format string to `"(O)"` to force `tuple` creation even for a single item, but since the format string gains nothing, `_PyObject_CallMethodObjArgsId` is better. |
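The varargs behavior of `set.difference_update` described above can be observed directly: each positional argument is treated as an iterable of items to discard, which is why an unpacked tuple of strings effectively explodes into characters:

```python
s = {'abc', '123', 'a', '1'}

t = set(s)
# the buggy unpacked call behaves like difference_update('abc', '123');
# each argument is iterated, so the characters are discarded, not the strings
t.difference_update('abc', '123')
assert t == {'abc', '123'}            # only 'a' and '1' were removed

u = set(s)
u.difference_update(['abc', '123'])   # one list argument removes the items themselves
assert u == {'a', '1'}
```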
How to construct a string by joining characters efficiently? | 35,796,965 | 2 | 2016-03-04T13:11:00Z | 35,797,028 | 11 | 2016-03-04T13:14:21Z | [
"python",
"string"
] | I have a long string `s1`, and a list of positions of characters `lst = [...]` in this string. I want to construct a string `s2` which contains only the characters of `s1` in the positions given by `lst`. How can I do this efficiently? | ```
newstring = "".join(s1[i] for i in lst)
```
If you don't absolutely *know* that `lst` won't contain any indexes that are out of range for `s1`, do this:
```
newstring = "".join(s1[i:i+1] for i in lst)
```
It's slower, but no index errors.
**Edit:** It has been brought to my attention that using a list comprehension rather than a generator expression in `"".join(...)` is more efficient, so to do that, just add brackets:
```
newstring = "".join([s1[i] for i in lst])
``` |
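To see the difference between the two variants, note that slicing silently skips out-of-range positions while plain indexing raises:

```python
s1 = "abcdef"
lst = [0, 2, 99]                       # 99 is out of range

# slicing clamps to the string, so the bad position contributes nothing
assert "".join(s1[i:i+1] for i in lst) == "ac"

# plain indexing raises instead
try:
    "".join(s1[i] for i in lst)
    raised = False
except IndexError:
    raised = True
assert raised
```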
How to construct a string by joining characters efficiently? | 35,796,965 | 2 | 2016-03-04T13:11:00Z | 35,797,057 | 7 | 2016-03-04T13:16:22Z | [
"python",
"string"
] | I have a long string `s1`, and a list of positions of characters `lst = [...]` in this string. I want to construct a string `s2` which contains only the characters of `s1` in the positions given by `lst`. How can I do this efficiently? | Alternatively, use `operator.itemgetter`:
```
>>> from operator import itemgetter
>>> s = '0123456789'
>>> lst = [0, 3, 6, 8]
>>> ''.join(itemgetter(*lst)(s))
'0368'
```
Since you asked for efficiency, this should be a little faster than joining the generator:
```
In [6]: timeit ''.join(s[i] for i in lst)
1000000 loops, best of 3: 1.18 µs per loop
In [7]: timeit ''.join(itemgetter(*lst)(s))
1000000 loops, best of 3: 430 ns per loop
```
edit: I also think the code should not jump through hoops for lists that don't make sense. If there are nonsensical indexes in the list you want your code to raise an `IndexError` and then recover from there. |
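One edge case worth knowing: with a single index, `itemgetter` returns the bare item rather than a tuple (harmless for strings, since `join` iterates a string's characters, but worth guarding for non-string sequences):

```python
from operator import itemgetter

s = '0123456789'
assert itemgetter(*[0, 3])(s) == ('0', '3')   # tuple for multiple indexes
assert itemgetter(*[3])(s) == '3'             # bare item for a single index
# for strings the join still works either way
assert ''.join(itemgetter(*[3])(s)) == '3'
```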
Create new list by taking first item from first list, and last item from second list | 35,797,523 | 7 | 2016-03-04T13:38:43Z | 35,797,643 | 11 | 2016-03-04T13:44:18Z | [
"python",
"list",
"python-3.x"
] | How do I loop through my 2 lists so that I can use
```
a=[1,2,3,8,12]
b=[2,6,4,5,6]
```
to get
```
[1,6,2,5,3,8,6,12,2]
```
OR use
```
d=[a,b,c,d]
e=[w,x,y,z]
```
to get
```
[a,z,b,y,c,x,d,w]
```
(1st element from 1st list, last element from 2nd list)
(2nd element from 1st list, 2nd to last element from 2nd list) | ```
[value for pair in zip(a, b[::-1]) for value in pair]
``` |
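With the question's data this flattens the zipped pairs as shown below; note the middle pair yields `3, 4`, which suggests the hand-written expected output in the question contains a small mistake:

```python
a = [1, 2, 3, 8, 12]
b = [2, 6, 4, 5, 6]
result = [value for pair in zip(a, b[::-1]) for value in pair]
# pairs are (1,6), (2,5), (3,4), (8,6), (12,2), flattened in order
assert result == [1, 6, 2, 5, 3, 4, 8, 6, 12, 2]
```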
Python eval: is it still dangerous if I disable builtins and attribute access? | 35,804,961 | 22 | 2016-03-04T19:55:17Z | 35,805,094 | 9 | 2016-03-04T20:03:10Z | [
"python",
"eval",
"python-internals"
] | We all know that [`eval` is dangerous](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html), even if you hide dangerous functions, because you can use Python's introspection features to dig down into things and re-extract them. For example, even if you delete `__builtins__`, you can retrieve them with
```
[c for c in ().__class__.__base__.__subclasses__()
if c.__name__ == 'catch_warnings'][0]()._module.__builtins__
```
However, every example I've seen of this uses attribute access. What if I disable all builtins, *and* disable attribute access (by tokenizing the input with a Python tokenizer and rejecting it if it has an attribute access token)?
And before you ask, no, for my use-case, I do not need either of these, so it isn't too crippling.
What I'm trying to do is make SymPy's [sympify](http://docs.sympy.org/latest/modules/core.html#id1) function more safe. Currently it tokenizes the input, does some transformations on it, and evals it in a namespace. But it's unsafe because it allows attribute access (even though it really doesn't need it). | Users can still DoS you by inputting an expression that evaluates to a huge number, which would fill your memory and crash the Python process, for example
```
'10**10**100'
```
I am definitely still curious if more traditional attacks, like recovering builtins or creating a segfault, are possible here.
EDIT:
It turns out, even Python's parser has this issue.
```
lambda: 10**10**100
```
will hang, because it tries to precompute the constant. |
Python eval: is it still dangerous if I disable builtins and attribute access? | 35,804,961 | 22 | 2016-03-04T19:55:17Z | 35,805,330 | 13 | 2016-03-04T20:18:56Z | [
"python",
"eval",
"python-internals"
] | We all know that [`eval` is dangerous](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html), even if you hide dangerous functions, because you can use Python's introspection features to dig down into things and re-extract them. For example, even if you delete `__builtins__`, you can retrieve them with
```
[c for c in ().__class__.__base__.__subclasses__()
if c.__name__ == 'catch_warnings'][0]()._module.__builtins__
```
However, every example I've seen of this uses attribute access. What if I disable all builtins, *and* disable attribute access (by tokenizing the input with a Python tokenizer and rejecting it if it has an attribute access token)?
And before you ask, no, for my use-case, I do not need either of these, so it isn't too crippling.
What I'm trying to do is make SymPy's [sympify](http://docs.sympy.org/latest/modules/core.html#id1) function more safe. Currently it tokenizes the input, does some transformations on it, and evals it in a namespace. But it's unsafe because it allows attribute access (even though it really doesn't need it). | It is possible to construct a return value from `eval` that would throw an *exception* **outside** `eval` if you tried to `print`, `log`, `repr`, anything:
```
eval('''((lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args))))
(lambda f: lambda n: (1,(1,(1,(1,f(n-1))))) if n else 1)(300))''')
```
This creates a nested tuple of form `(1,(1,(1,(1...`; that value cannot be `print`ed (on Python 3), `str`ed or `repr`ed; all attempts to debug it would lead to
```
RuntimeError: maximum recursion depth exceeded while getting the repr of a tuple
```
`pprint` and `saferepr` fails too:
```
...
File "/usr/lib/python3.4/pprint.py", line 390, in _safe_repr
orepr, oreadable, orecur = _safe_repr(o, context, maxlevels, level)
File "/usr/lib/python3.4/pprint.py", line 340, in _safe_repr
if issubclass(typ, dict) and r is dict.__repr__:
RuntimeError: maximum recursion depth exceeded while calling a Python object
```
Thus there is no safe built-in function to stringify this: the following helper could be of use:
```
def excsafe_repr(obj):
try:
return repr(obj)
except:
return object.__repr__(obj).replace('>', ' [exception raised]>')
```
---
And then there is the problem that *`print`* in Python **2** does not actually use `str`/`repr`, so you do not have any safety due to lack of recursion checks. That is, take the return value of the lambda monster above, and you cannot `str`, `repr` it, but ordinary `print` (not `print_function`!) prints it nicely. However, you can exploit this to generate a SIGSEGV on Python 2 if you know it will be printed using the `print` statement:
```
print eval('(lambda i: [i for i in ((i, 1) for j in range(1000000))][-1])(1)')
```
**crashes Python 2 with SIGSEGV**. [This is *WONTFIX* in the bug tracker](http://bugs.python.org/issue1069092). Thus never use `print`-the-statement if you want to be safe. `from __future__ import print_function`!
---
This is not a crash, but
```
eval('(1,' * 100 + ')' * 100)
```
when run, outputs
```
s_push: parser stack overflow
Traceback (most recent call last):
File "yyy.py", line 1, in <module>
eval('(1,' * 100 + ')' * 100)
MemoryError
```
The `MemoryError` can be caught, is a subclass of `Exception`. The parser has some [really conservative limits to avoid crashes from stackoverflows](http://bugs.python.org/issue215555) (pun intended). However, `s_push: parser stack overflow` is output to `stderr` by C code, and cannot be suppressed.
---
And just yesterday I asked [why doesn't Python 3.4 be fixed for a crash from](http://bugs.python.org/issue25973),
```
% python3
Python 3.4.3 (default, Mar 26 2015, 22:03:40)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
... def f(self):
... nonlocal __x
...
[4] 19173 segmentation fault (core dumped) python3
```
and [Serhiy Storchaka's answer](http://bugs.python.org/issue25973#msg261201) confirmed that Python core devs do not consider SIGSEGV on seemingly well-formed code a security issue:
> Only security fixes are accepted for 3.4.
Thus it can be concluded that it can never be considered safe to execute any code from 3rd party in Python, sanitized or not.
And [Nick Coghlan](http://stackoverflow.com/users/597742/ncoghlan) then [added](http://bugs.python.org/issue25973#msg261216):
> And as some additional background as to why segmentation faults provoked by Python code aren't currently considered a security bug: since CPython doesn't include a security sandbox, we're already relying entirely on the OS to provide process isolation.
> That OS level security boundary isn't affected by whether the code is running "normally", or in a modified state following a deliberately triggered segmentation fault. |
Python eval: is it still dangerous if I disable builtins and attribute access? | 35,804,961 | 22 | 2016-03-04T19:55:17Z | 35,806,044 | 16 | 2016-03-04T21:07:19Z | [
"python",
"eval",
"python-internals"
] | We all know that [`eval` is dangerous](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html), even if you hide dangerous functions, because you can use Python's introspection features to dig down into things and re-extract them. For example, even if you delete `__builtins__`, you can retrieve them with
```
[c for c in ().__class__.__base__.__subclasses__()
if c.__name__ == 'catch_warnings'][0]()._module.__builtins__
```
However, every example I've seen of this uses attribute access. What if I disable all builtins, *and* disable attribute access (by tokenizing the input with a Python tokenizer and rejecting it if it has an attribute access token)?
And before you ask, no, for my use-case, I do not need either of these, so it isn't too crippling.
What I'm trying to do is make SymPy's [sympify](http://docs.sympy.org/latest/modules/core.html#id1) function more safe. Currently it tokenizes the input, does some transformations on it, and evals it in a namespace. But it's unsafe because it allows attribute access (even though it really doesn't need it). | I'm going to mention one of the new features of Python 3.6 - [*f-strings*](https://www.python.org/dev/peps/pep-0498/).
They can evaluate expressions,
```
>>> eval('f"{().__class__.__base__}"', {'__builtins__': None}, {})
"<class 'object'>"
```
but the attribute access won't be detected by Python's tokenizer:
```
0,0-0,0: ENCODING 'utf-8'
1,0-1,1: ERRORTOKEN "'"
1,1-1,27: STRING 'f"{().__class__.__base__}"'
2,0-2,0: ENDMARKER ''
``` |
How do I convert a password into asterisks while it is being entered? | 35,805,078 | 2 | 2016-03-04T20:02:18Z | 35,805,111 | 9 | 2016-03-04T20:04:12Z | [
"python"
] | Is there a way in Python to convert characters as they are being entered by the user to asterisks, like it can be seen on many websites?
For example, if an email user was asked to sign in to their account, while typing in their password, it wouldn't appear as characters but rather as `*` after each individual stroke without any time lag.
If the actual password was `KermitTheFrog`, it would appear as `*************` when typed in. | There is [`getpass()`](https://docs.python.org/3/library/getpass.html#getpass.getpass), a function which *hides* the user input.
```
import getpass
password = getpass.getpass()
print(password)
``` |
How to return 1 or -1 if number is positive or negative (including 0)? | 35,805,959 | 2 | 2016-03-04T21:02:15Z | 35,806,022 | 7 | 2016-03-04T21:06:06Z | [
"python",
"math"
] | Can I ask how to achieve this in Python:
Input:
`I = [10,-22,0]`
Output:
`O = [1,-1,-1]`
I was thinking
`O=I/abs(I)`
But how to deal with zero? | The following should do what you want:
```
>>> I = [10,-22,0]
>>> O = [1 if v > 0 else -1 for v in I]
>>> O
[1, -1, -1]
>>>
```
If you want to use `map` with a `lambda`, you can do:
```
>>> O = map(lambda v: 1 if v > 0 else -1, I)
>>> O
[1, -1, -1]
>>>
``` |
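For comparison, `math.copysign` is sometimes suggested for sign extraction, but it maps `0` to `+1`, so it does not satisfy the `0 -> -1` requirement here:

```python
import math

I = [10, -22, 0]
assert [1 if v > 0 else -1 for v in I] == [1, -1, -1]
# math.copysign(1, 0) is 1.0 because 0 carries a positive sign bit
assert [int(math.copysign(1, v)) for v in I] == [1, -1, 1]
```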
Multi POST query (session mode) | 35,809,428 | 15 | 2016-03-05T02:54:36Z | 35,843,216 | 21 | 2016-03-07T12:00:59Z | [
"python",
"session",
"scrapy",
"httr"
] | I am trying to interrogate this [site](https://compare.switchon.vic.gov.au/welcome) to get the list of offers.
The problem is that we need to fill 2 forms (2 POST queries) before receiving the final result.
This is what I have done so far:
First I am sending the first POST after setting the cookies:
```
library(httr)
set_cookies(.cookies = c(a = "1", b = "2"))
first_url <- "https://compare.switchon.vic.gov.au/submit"
body <- list(energy_category="electricity",
location="home",
"location-home"="shift",
"retailer-company"="",
postcode="3000",
distributor=7,
zone=1,
energy_concession=0,
"file-provider"="",
solar=0,
solar_feedin_tariff="",
disclaimer_chkbox="disclaimer_selected")
qr<- POST(first_url,
encode="form",
body=body)
```
Then trying to retrieve the offers using the second post query:
```
gov_url <- "https://compare.switchon.vic.gov.au/energy_questionnaire/submit"
qr1<- POST(gov_url,
encode="form",
body=list(`person-count`=1,
`room-count`=1,
`refrigerator-count`=1,
`gas-type`=4,
`pool-heating`=0,
spaceheating="none",
spacecooling="none",
`cloth-dryer`=0,
waterheating="other"),
set_cookies(a = 1, b = 2))
library(XML)
dc <- htmlParse(qr1)
```
But unfortunately I get a message indicating the end of session. Many thanks for any help to resolve this.
### Update: add cookies
I added the cookies and the intermediate GET, but I still don't have any of the results.
```
library(httr)
first_url <- "https://compare.switchon.vic.gov.au/submit"
body <- list(energy_category="electricity",
location="home",
"location-home"="shift",
"retailer-company"="",
postcode=3000,
distributor=7,
zone=1,
energy_concession=0,
"file-provider"="",
solar=0,
solar_feedin_tariff="",
disclaimer_chkbox="disclaimer_selected")
qr<- POST(first_url,
encode="form",
body=body,
config=set_cookies(a = 1, b = 2))
xx <- GET("https://compare.switchon.vic.gov.au/energy_questionnaire",config=set_cookies(a = 1, b = 2))
gov_url <- "https://compare.switchon.vic.gov.au/energy_questionnaire/submit"
qr1<- POST(gov_url,
encode="form",
body=list(
`person-count`=1,
`room-count`=1,
`refrigerator-count`=1,
`gas-type`=4,
`pool-heating`=0,
spaceheating="none",
spacecooling="none",
`cloth-dryer`=0,
waterheating="other"),
config=set_cookies(a = 1, b = 2))
library(XML)
dc <- htmlParse(qr1)
``` | Using a Python [requests.Session](http://docs.python-requests.org/en/master/user/advanced/#session-objects) object with the following data gets to the results page:
```
form1 = {"energy_category": "electricity",
"location": "home",
"location-home": "shift",
"distributor": "7",
"postcode": "3000",
"energy_concession": "0",
"solar": "0",
"disclaimer_chkbox": "disclaimer_selected",
}
form2 = {"person-count":"1",
"room-count":"4",
"refrigerator-count":"0",
"gas-type":"3",
"pool-heating":"0",
"spaceheating[]":"none",
"spacecooling[]":"none",
"cloth-dryer":"0",
"waterheating[]":"other"}
sub_url = "https://compare.switchon.vic.gov.au/submit"
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
r = s.get("https://compare.switchon.vic.gov.au/offers")
print(r.content)
```
You should see the matching `h1` in the returned html that you see on the page:
```
<h1>Your electricity offers</h1>
```
Or using scrapy form requests:
```
import scrapy
class Spider(scrapy.Spider):
name = 'comp'
start_urls = ['https://compare.switchon.vic.gov.au/energy_questionnaire/submit']
form1 = {"energy_category": "electricity",
"location": "home",
"location-home": "shift",
"distributor": "7",
"postcode": "3000",
"energy_concession": "0",
"solar": "0",
"disclaimer_chkbox": "disclaimer_selected",
}
sub_url = "https://compare.switchon.vic.gov.au/submit"
form2 = {"person-count":"1",
"room-count":"4",
"refrigerator-count":"0",
"gas-type":"3",
"pool-heating":"0",
"spaceheating[]":"none",
"spacecooling[]":"none",
"cloth-dryer":"0",
"waterheating[]":"other"}
def start_requests(self):
return [scrapy.FormRequest(
self.sub_url,
            formdata=self.form1,
callback=self.parse
)]
def parse(self, response):
return scrapy.FormRequest.from_response(
response,
            formdata=self.form2,
callback=self.after
)
def after(self, response):
print("<h1>Your electricity offers</h1>" in response.body)
```
Which we can verify has the `"<h1>Your electricity offers</h1>"`:
```
2016-03-07 12:27:31 [scrapy] DEBUG: Crawled (200) <GET https://compare.switchon.vic.gov.au/offers#list/electricity> (referer: https://compare.switchon.vic.gov.au/energy_questionnaire)
True
2016-03-07 12:27:31 [scrapy] INFO: Closing spider (finished)
```
The next problem is that the actual data is dynamically rendered, which you can verify if you look at the source of the results page; you can actually get all the providers in JSON format:
```
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
r = s.get("https://compare.switchon.vic.gov.au/service/offers")
print(r.json())
```
A snippet of which is:
```
{u'pageMetaData': {u'showDual': False, u'isGas': False, u'showTouToggle': True, u'isElectricityInitial': True, u'showLoopback': False, u'isElectricity': True}, u'offersList': [{u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.peopleenergy.com.au', u'offerId': u'PEO33707SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1410, u'offerType': u'Standing offer', u'offerName': u'Residential 5-Day Time of Use', u'conditionalPrice': 1410, u'fullDiscountedPrice': 1390, u'greenPower': 0, u'retailerName': u'People Energy', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'1300 788 970', u'isPartDual': False, u'retailerId': u'7322', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1645', u'exitFeeCount': 0, u'timeDefinition': u'Local time', u'retailerImageUrl': u'img/retailers/big/peopleenergy.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.peopleenergy.com.au', u'offerId': u'PEO33773SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1500, u'offerType': u'Standing offer', u'offerName': u'Residential Peak Anytime', u'conditionalPrice': 1500, u'fullDiscountedPrice': 1480, u'greenPower': 0, u'retailerName': u'People Energy', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Single rate', u'retailerPhone': u'1300 788 970', u'isPartDual': False, u'retailerId': u'7322', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1649', u'exitFeeCount': 0, 
u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/peopleenergy.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.energythatcould.com.au', u'offerId': u'PAC33683SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1400, u'offerType': u'Standing offer', u'offerName': u'Vic Home Flex', u'conditionalPrice': 1400, u'fullDiscountedPrice': 1400, u'greenPower': 0, u'retailerName': u'Pacific Hydro Retail Pty Ltd', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Flexible Pricing', u'retailerPhone': u'1800 010 648', u'isPartDual': False, u'retailerId': u'15902', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1666', u'exitFeeCount': 0, u'timeDefinition': u'Local time', u'retailerImageUrl': u'img/retailers/big/pachydro.jpg'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.energythatcould.com.au', u'offerId': u'PAC33679SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1340, u'offerType': u'Standing offer', u'offerName': u'Vic Home Flex', u'conditionalPrice': 1340, u'fullDiscountedPrice': 1340, u'greenPower': 0, u'retailerName': u'Pacific Hydro Retail Pty Ltd', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Single rate', u'retailerPhone': u'1800 010 648', u'isPartDual': False, u'retailerId': u'15902', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'1680', u'exitFeeCount': 0, 
u'timeDefinition': u'Local time', u'retailerImageUrl': u'img/retailers/big/pachydro.jpg'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 10, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30367MR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': True, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1370, u'offerType': u'Market offer', u'offerName': u'Citipower Commander Residential Market Offer (CE3CPR-MAT1 + PF1/TF1/GF1)', u'conditionalPrice': 1370, u'fullDiscountedPrice': 1160, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': True, u'greenpowerChargeType': None, u'tariffType': u'Single rate', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2384', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 10, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30359MR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': True, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1330, u'offerType': u'Market offer', u'offerName': u'Citipower Commander Residential Market Offer (Flexible Pricing (Peak, Shoulder and Off Peak) (CE3CPR-MCFP1 + PF1/TF1/GF1)', u'conditionalPrice': 1330, u'fullDiscountedPrice': 1140, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': True, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', 
u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2386', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 10, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E33241MR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': True, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1300, u'offerType': u'Market offer', u'offerName': u'Citipower Commander Residential Market Offer (Peak / Off Peak) (CE3CPR-MPK1OP1)', u'conditionalPrice': 1300, u'fullDiscountedPrice': 1100, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': True, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2389', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30379SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1370, u'offerType': u'Standing offer', u'offerName': u'Citipower Commander Residential Standing Offer (CE3CPR-SAT1 + PF1/TF1/GF1)', u'conditionalPrice': 1370, u'fullDiscountedPrice': 1370, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Single rate', 
u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': False, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2391', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30369SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1330, u'offerType': u'Standing offer', u'offerName': u'Citipower Commander Residential Standing Offer (Flexible Pricing (Peak, Shoulder and Off Peak) (CE3CPR-SCFP1 + PF1/TF1/GF1)', u'conditionalPrice': 1330, u'fullDiscountedPrice': 1330, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2393', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.commander.com', u'offerId': u'M2E30375SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1300, u'offerType': u'Standing offer', u'offerName': u'Citipower Commander Residential Standing Offer (Peak / Off Peak) (CE3CPR-SPK1OP1)', u'conditionalPrice': 1300, u'fullDiscountedPrice': 1300, u'greenPower': 0, u'retailerName': u'Commander Power & Gas', u'intrinsicGreenpowerPercentage': 
u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType': u'Time of use', u'retailerPhone': u'13 39 14', u'isPartDual': False, u'retailerId': u'13667', u'isTouOffer': True, u'solarType': None, u'estimatedSolarCredit': 0, u'offerKey': u'2395', u'exitFeeCount': 0, u'timeDefinition': u'AEST only', u'retailerImageUrl': u'img/retailers/big/commanderpowergas.png'}], u'isClosed': False, u'isChecked': False, u'offerFuelType': 0}, {u'offerDetails': [{u'coolingOffPeriod': 0, u'retailerUrl': u'www.dodo.com/powerandgas', u'offerId': u'DOD32903SR', u'contractLengthCount': 1, u'exitFee': [0], u'hasIncentive': False, u'tariffDetails': {}, u'greenpowerAmount': 0, u'isDirectDebitOnly': False, u'basePrice': 1320, u'offerType': u'Standing offer', u'offerName': u'Citipower Res No Term Standing Offer (Common Form Flex Plan) (E3CPR-SCFP1)', u'conditionalPrice': 1320, u'fullDiscountedPrice': 1320, u'greenPower': 0, u'retailerName': u'Dodo Power & Gas', u'intrinsicGreenpowerPercentage': u'0.0000', u'contractLength': [u'None'], u'hasPayOnTimeDiscount': False, u'greenpowerChargeType': None, u'tariffType':
```
Then if you look at the requests later, for example when you click the compare selected button on the results page, there is a request like:
```
https://compare.switchon.vic.gov.au/service/offer/tariff/9090/9092
```
So you may be able to mimic what happens by filtering using the tariff or some variation.
You can actually get all the data as json, if you enter the same values as below into the forms:
```
form1 = {"energy_category": "electricity",
"location": "home",
"location-home": "shift",
"distributor": "7",
"postcode": "3000",
"energy_concession": "0",
"solar": "0",
"disclaimer_chkbox": "disclaimer_selected"
}
form2 = {"person-count":"1",
"room-count":"1",
"refrigerator-count":"1",
"gas-type":"4",
"pool-heating":"0",
"spaceheating[]":"none",
"spacecooling[]":"none",
"cloth-dryer":"0",
"cloth-dryer-freq-weekday":"",
"waterheating[]":"other"}
import json
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
js = s.get("https://compare.switchon.vic.gov.au/service/offers").json()["offersList"]
by_discount = sorted(js, key=lambda d: d["offerDetails"][0]["fullDiscountedPrice"])
```
If we just pull the first two values from the list ordered by the total discount price:
```
from pprint import pprint as pp
pp(by_discount[:2])
```
You will get:
```
[{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 980,
u'conditionalPrice': 980,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 10,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 1,
u'fullDiscountedPrice': 660,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': True,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': False,
u'offerId': u'GLO40961MR',
u'offerKey': u'7636',
u'offerName': u'GLO SWITCH',
u'offerType': u'Market offer',
u'retailerId': u'31206',
u'retailerImageUrl': u'img/retailers/big/globird.jpg',
u'retailerName': u'GloBird Energy',
u'retailerPhone': u'(03) 8560 4199',
u'retailerUrl': u'http://www.globirdenergy.com.au/switchon/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Single rate',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0},
{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 1080,
u'conditionalPrice': 1080,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 10,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 1,
u'fullDiscountedPrice': 720,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': True,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': True,
u'offerId': u'GLO41009MR',
u'offerKey': u'7642',
u'offerName': u'GLO SWITCH',
u'offerType': u'Market offer',
u'retailerId': u'31206',
u'retailerImageUrl': u'img/retailers/big/globird.jpg',
u'retailerName': u'GloBird Energy',
u'retailerPhone': u'(03) 8560 4199',
u'retailerUrl': u'http://www.globirdenergy.com.au/switchon/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Time of use',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0}]
```
Which should match what you see on the page when you click the `"DISCOUNTED PRICE"` filter button.
For the normal view it seems to be ordered by `conditionalPrice` or `basePrice`; again, pulling just the first two values should match what you see on the webpage:
```
base = sorted(js, key=lambda d: d["offerDetails"][0]["conditionalPrice"])
from pprint import pprint as pp
pp(base[:2])
[{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 740,
u'conditionalPrice': 740,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 0,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 0,
u'fullDiscountedPrice': 740,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': False,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': False,
u'offerId': u'NEX42694SR',
u'offerKey': u'9092',
u'offerName': u'Citpower Single Rate Residential',
u'offerType': u'Standing offer',
u'retailerId': u'35726',
u'retailerImageUrl': u'img/retailers/big/nextbusinessenergy.jpg',
u'retailerName': u'Next Business Energy Pty Ltd',
u'retailerPhone': u'1300 466 398',
u'retailerUrl': u'http://www.nextbusinessenergy.com.au/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Single rate',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0},
{u'isChecked': False,
u'isClosed': False,
u'offerDetails': [{u'basePrice': 780,
u'conditionalPrice': 780,
u'contractLength': [u'None'],
u'contractLengthCount': 1,
u'coolingOffPeriod': 0,
u'estimatedSolarCredit': 0,
u'exitFee': [0],
u'exitFeeCount': 0,
u'fullDiscountedPrice': 780,
u'greenPower': 0,
u'greenpowerAmount': 0,
u'greenpowerChargeType': None,
u'hasIncentive': False,
u'hasPayOnTimeDiscount': False,
u'intrinsicGreenpowerPercentage': u'0.0000',
u'isDirectDebitOnly': False,
u'isPartDual': False,
u'isTouOffer': False,
u'offerId': u'NEX42699SR',
u'offerKey': u'9090',
u'offerName': u'Citpower Residential Flexible Pricing',
u'offerType': u'Standing offer',
u'retailerId': u'35726',
u'retailerImageUrl': u'img/retailers/big/nextbusinessenergy.jpg',
u'retailerName': u'Next Business Energy Pty Ltd',
u'retailerPhone': u'1300 466 398',
u'retailerUrl': u'http://www.nextbusinessenergy.com.au/',
u'solarType': None,
u'tariffDetails': {},
u'tariffType': u'Flexible Pricing',
u'timeDefinition': u'Local time'}],
u'offerFuelType': 0}]
```
You can see all the JSON returned in the Firebug console if you click the `https://compare.switchon.vic.gov.au/service/offers` GET entry and then hit *Response*:
[](http://i.stack.imgur.com/emUkl.png)
You should be able to pull each field that you want from that.
The output actually has a few extra results which you don't see on the page unless you toggle the TOU button below:
[](http://i.stack.imgur.com/9G5vx.png)
You can filter those from the results so you exactly match the default output, or provide an option to include them, with a helper function:
```
def order_by(l, k, is_tou=False):
if not is_tou:
filt = filter(lambda x: not x["offerDetails"][0]["isTouOffer"], l)
return sorted(filt, key=lambda d: d["offerDetails"][0][k])
return sorted(l, key=lambda d: d["offerDetails"][0][k])
import json
with requests.Session() as s:
s.post(sub_url, data=form1)
r = (s.get("https://compare.switchon.vic.gov.au/energy_questionnaire"))
s.post("https://compare.switchon.vic.gov.au/energy_questionnaire/submit",
data=form2)
js = s.get("https://compare.switchon.vic.gov.au/service/offers").json()["offersList"]
    by_price = order_by(js, "conditionalPrice", False)
    print(by_price[:3])
```
If you check the output you will see Origin Energy third with a price of 840 in the results with the switch on, or 860 for AGL when it is off; you can apply the same to the discount output:
[](http://i.stack.imgur.com/fMc4X.png)
[](http://i.stack.imgur.com/26CdD.png)
The regular output also seems to be ordered by *conditionalPrice*; if you check the source, the two JS functions that get called for ordering are:
```
ng-click="changeSortingField('conditionalPrice')"
ng-click="changeSortingField('fullDiscountedPrice')"
```
So that should now completely match the site output. |
sine calculation orders of magnitude slower than cosine | 35,815,093 | 25 | 2016-03-05T14:06:14Z | 35,818,398 | 17 | 2016-03-05T18:56:22Z | [
"python",
"numpy",
"scipy",
"signal-processing"
] | ## tl;dr
For the same `numpy` array, calculating `np.cos` takes 3.2 seconds, whereas `np.sin` runs 548 seconds *(nine minutes)* on Linux Mint.
See [this repo](https://gitlab.com/Finwood/numpy-sine.git) for full code.
---
I've got a pulse signal (see image below) which I need to modulate onto an HF carrier, simulating a [Laser Doppler Vibrometer](https://en.wikipedia.org/wiki/Laser_Doppler_vibrometer). Therefore the signal and its time basis need to be resampled to match the carrier's higher sampling rate.

In the following demodulation process both the in-phase carrier `cos(omega * t)` and the phase-shifted carrier `sin(omega * t)` are needed.
Oddly, the time to evaluate these functions depends highly on the way the time vector has been calculated.
The time vector `t1` is being calculated using `np.linspace` directly, `t2` uses the [method implemented in `scipy.signal.resample`](https://github.com/scipy/scipy/blob/v0.17.0/scipy/signal/signaltools.py#L1754).
```
pulse = np.load('data/pulse.npy') # 768 samples
pulse_samples = len(pulse)
pulse_samplerate = 960 # 960 Hz
pulse_duration = pulse_samples / pulse_samplerate # here: 0.8 s
pulse_time = np.linspace(0, pulse_duration, pulse_samples,
endpoint=False)
carrier_freq = 40e6 # 40 MHz
carrier_samplerate = 100e6 # 100 MHz
carrier_samples = pulse_duration * carrier_samplerate # 80 million
t1 = np.linspace(0, pulse_duration, carrier_samples)
# method used in scipy.signal.resample
# https://github.com/scipy/scipy/blob/v0.17.0/scipy/signal/signaltools.py#L1754
t2 = np.arange(0, carrier_samples) * (pulse_time[1] - pulse_time[0]) \
* pulse_samples / float(carrier_samples) + pulse_time[0]
```
As can be seen in the picture below, the time vectors are not identical. At 80 million samples the difference `t1 - t2` reaches `1e-8`.

Calculating the in-phase and shifted carrier of `t1` takes *3.2 seconds* each on my machine.
**With `t2`, however, calculating the shifted carrier takes *540 seconds*. Nine minutes. For nearly the same 80 million values.**
```
omega_t1 = 2 * np.pi * carrier_freq * t1
np.cos(omega_t1) # 3.2 seconds
np.sin(omega_t1) # 3.3 seconds
omega_t2 = 2 * np.pi * carrier_freq * t2
np.cos(omega_t2) # 3.2 seconds
np.sin(omega_t2) # 9 minutes
```
I can reproduce this bug on both my 32-bit laptop and my 64-bit tower, both running *Linux Mint 17*. On my flat mate's MacBook, however, the "slow sine" takes as little time as the other three calculations.
---
I run a *Linux Mint 17.03* on a 64-bit AMD processor and *Linux Mint 17.2* on 32-bit Intel processor. | I don't think numpy has anything to do with this: I think you're tripping across a performance bug in the C math library on your system, one which affects sin near large multiples of pi. (I'm using "bug" in a pretty broad sense here -- for all I know, since the sine of large floats is poorly defined, the "bug" is actually the library behaving correctly to handle corner cases!)
On linux, I get:
```
>>> %timeit -n 10000 math.sin(6e7*math.pi)
10000 loops, best of 3: 191 µs per loop
>>> %timeit -n 10000 math.sin(6e7*math.pi+0.12)
10000 loops, best of 3: 428 ns per loop
```
and other Linux-using types from the [Python chatroom](http://chat.stackoverflow.com/rooms/6/python) report
```
10000 loops, best of 3: 49.4 µs per loop
10000 loops, best of 3: 206 ns per loop
```
and
```
In [3]: %timeit -n 10000 math.sin(6e7*math.pi)
10000 loops, best of 3: 116 µs per loop
In [4]: %timeit -n 10000 math.sin(6e7*math.pi+0.12)
10000 loops, best of 3: 428 ns per loop
```
but a Mac user reported
```
In [3]: timeit -n 10000 math.sin(6e7*math.pi)
10000 loops, best of 3: 300 ns per loop
In [4]: %timeit -n 10000 math.sin(6e7*math.pi+0.12)
10000 loops, best of 3: 361 ns per loop
```
for no order-of-magnitude difference. As a workaround, you might try taking things mod 2 pi first:
```
>>> new = np.sin(omega_t2[-1000:] % (2*np.pi))
>>> old = np.sin(omega_t2[-1000:])
>>> abs(new - old).max()
7.83773902468434e-09
```
which has better performance:
```
>>> %timeit -n 1000 new = np.sin(omega_t2[-1000:] % (2*np.pi))
1000 loops, best of 3: 63.8 µs per loop
>>> %timeit -n 1000 old = np.sin(omega_t2[-1000:])
1000 loops, best of 3: 6.82 ms per loop
```
---
Note that as expected, a similar effect happens for *cos*, just shifted:
```
>>> %timeit -n 1000 np.cos(6e7*np.pi + np.pi/2)
1000 loops, best of 3: 37.6 µs per loop
>>> %timeit -n 1000 np.cos(6e7*np.pi + np.pi/2 + 0.12)
1000 loops, best of 3: 2.46 µs per loop
``` |
Python - resort list based on another list | 35,823,979 | 3 | 2016-03-06T06:41:05Z | 35,824,021 | 8 | 2016-03-06T06:49:15Z | [
"python",
"python-2.7"
] | I have 2 lists:
```
a=[45, 41, 42, 43, 44]
b=[41, 42, 43, -44, -45]
```
I want to sort `b` based on `a`, ignoring the negative signs,
so after re-sorting it should look like:
```
a=[45, 41, 42, 43, 44]
b=[-45, 41, 42, 43, -44]
```
I tried comparing the elements, but I ran into problems with the negative sign.
Thanks. | ```
>>> sorted(b, key=lambda x: a.index(abs(x)))
[-45, 41, 42, 43, -44]
```
Or, if you want to sort `b` in place:
```
b.sort(key=lambda x: a.index(abs(x)))
``` |
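As a side note, `a.index(...)` rescans `a` for every element of `b`, so the sort is O(n·m). For longer lists you could build a value-to-position map once — a small sketch with the same data:

```python
a = [45, 41, 42, 43, 44]
b = [41, 42, 43, -44, -45]

# Map each value of `a` to its position once, so each key lookup is O(1)
pos = {v: i for i, v in enumerate(a)}
b.sort(key=lambda x: pos[abs(x)])
print(b)  # [-45, 41, 42, 43, -44]
```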
`object in list` behaves different from `object in dict`? | 35,826,534 | 12 | 2016-03-06T11:52:55Z | 35,826,785 | 16 | 2016-03-06T12:24:49Z | [
"python",
"list",
"if-statement",
"dictionary",
"cpython"
] | I've got an iterator with some objects in it and I wanted to create a collection of uniqueUsers in which I only list every user once. So playing around a bit I tried it with both a list and a dict:
```
>>> for m in ms: print m.to_user # let's first look what's inside ms
...
Pete Kramer
Pete Kramer
Pete Kramer
>>>
>>> uniqueUsers = [] # Create an empty list
>>> for m in ms:
... if m.to_user not in uniqueUsers:
... uniqueUsers.append(m.to_user)
...
>>> uniqueUsers
[Pete Kramer] # This is what I would expect
>>>
>>> uniqueUsers = {} # Now let's create a dict
>>> for m in ms:
... if m.to_user not in uniqueUsers:
... uniqueUsers[m.to_user] = 1
...
>>> uniqueUsers
{Pete Kramer: 1, Pete Kramer: 1, Pete Kramer: 1}
```
So I tested it by converting the dict to a list when doing the if statement, and that works as I would expect it to:
```
>>> uniqueUsers = {}
>>> for m in ms:
... if m.to_user not in list(uniqueUsers):
... uniqueUsers[m.to_user] = 1
...
>>> uniqueUsers
{Pete Kramer: 1}
```
and I can get a similar result by testing against `uniqueUsers.keys()`.
The thing is that I don't understand why this difference occurs. I always thought that if you do `if object in dict`, it simply creates a list of the dict's keys and tests against that, but that's obviously not the case.
Can anybody explain how `object in dict` internally works and why it doesn't behave similarly to `object in list` (as I would expect it to)? | In order to understand what's going on, you have to understand how the `in` operator, the [membership test](https://docs.python.org/3/reference/expressions.html#membership-test-details), behaves for the different types.
For lists, this is pretty simple due to what lists fundamentally are: ordered arrays that do not care about duplicates. The only possible way to perform a membership test here is to iterate over the list and check every item for *equality*. Something like this:
```
# x in lst
for item in lst:
if x == item:
return True
return False
```
Dictionaries are a bit different: they are hash tables where keys are meant to be unique. Hash tables require the keys to be *hashable*, which essentially means that there needs to be an explicit function that converts the object into an integer. This hash value is then used to put the key/value mapping somewhere into the hash table.
Since the hash value determines where in the hash table an item is placed, it's critical that objects which are meant to be identical produce the same hash value. So the following implication has to be true: `x == y => hash(x) == hash(y)`. The reverse does not need to be true though; it's perfectly valid to have different objects produce the same hash value.
When a membership test on a dictionary is performed, then the dictionary will first look for the hash value. If it can find it, then it will perform an equality check on all items it found; if it didn't find the hash value, then it assumes that it's a different object:
```
# x in dct
h = hash(x)
items = getItemsForHash(dct, h)
for item in items:
if x == item:
return True
# items is empty, or no match inside the loop
return False
```
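This lookup order can be observed directly with a small instrumented class (a sketch; the `Probe` class and its call log are made up for illustration):

```python
class Probe(object):
    calls = []  # records which special methods the dict invokes

    def __init__(self, x):
        self.x = x

    def __hash__(self):
        Probe.calls.append("hash")
        return hash(self.x)

    def __eq__(self, other):
        Probe.calls.append("eq")
        return self.x == other.x

d = {Probe(1): "a"}  # insertion hashes the key once
Probe.calls = []     # reset before the membership test
found = Probe(1) in d
print(found, Probe.calls)  # the hash is consulted first, then equality
```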
Since you get the desired result when using a membership test against a list, that means that your object implements the equality comparison ([`__eq__`](https://docs.python.org/3/reference/datamodel.html#object.__eq__)) correctly. But since you do not get the correct result when using a dictionary, there seems to be a [`__hash__`](https://docs.python.org/3/reference/datamodel.html#object.__hash__) implementation that is out of sync with the equality comparison implementation:
```
>>> class SomeType:
def __init__ (self, x):
self.x = x
def __eq__ (self, other):
return self.x == other.x
def __hash__ (self):
# bad hash implementation
return hash(id(self))
>>> l = [SomeType(1)]
>>> d = { SomeType(1): 'x' }
>>> x = SomeType(1)
>>> x in l
True
>>> x in d
False
```
Note that for new-style classes in Python 2 (classes that inherit from `object`), this "bad hash implementation" (which is based on the object id) is the default. So when you do not implement your own `__hash__` function, it still uses that one. This ultimately means that unless your `__eq__` only performs an identity check (the default), the hash function *will* be out of sync.
So the solution is to implement `__hash__` in a way that aligns with the rules used in `__eq__`. For example, if you compare two members `self.x` and `self.y`, then you should use a compound hash over those two members. The easiest way to do that is to return the hash value of a tuple of those values:
```
class SomeType (object):
def __init__ (self, x, y):
self.x = x
self.y = y
def __eq__ (self, other):
return self.x == other.x and self.y == other.y
def __hash__ (self):
return hash((self.x, self.y))
```
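As a quick sanity check of that pattern (this snippet is just illustrative, repeating the class so it runs on its own):

```python
class SomeType(object):
    # Same shape as the class above, repeated for a self-contained run.
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y
    def __hash__(self):
        # Compound hash over the same fields __eq__ compares.
        return hash((self.x, self.y))

d = {SomeType(1, 2): 'value'}
found = SomeType(1, 2) in d    # same hash bucket, then __eq__ confirms
missing = SomeType(1, 3) in d  # different hash, __eq__ never consulted
```

With the hash derived from the same fields that `__eq__` compares, equal objects now land in the same bucket and dict membership behaves like list membership.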
Note that you should not make an object hashable if it is mutable:
> If a class defines mutable objects and implements an `__eq__()` method, it should not implement `__hash__()`, since the implementation of hashable collections requires that a key's hash value is immutable (if the object's hash value changes, it will be in the wrong hash bucket). |
`object in list` behaves different from `object in dict`? | 35,826,534 | 12 | 2016-03-06T11:52:55Z | 35,827,339 | 8 | 2016-03-06T13:21:52Z | [
"python",
"list",
"if-statement",
"dictionary",
"cpython"
] | I've got an iterator with some objects in it and I wanted to create a collection of uniqueUsers in which I only list every user once. So playing around a bit I tried it with both a list and a dict:
```
>>> for m in ms: print m.to_user # let's first look what's inside ms
...
Pete Kramer
Pete Kramer
Pete Kramer
>>>
>>> uniqueUsers = [] # Create an empty list
>>> for m in ms:
... if m.to_user not in uniqueUsers:
... uniqueUsers.append(m.to_user)
...
>>> uniqueUsers
[Pete Kramer] # This is what I would expect
>>>
>>> uniqueUsers = {} # Now let's create a dict
>>> for m in ms:
... if m.to_user not in uniqueUsers:
... uniqueUsers[m.to_user] = 1
...
>>> uniqueUsers
{Pete Kramer: 1, Pete Kramer: 1, Pete Kramer: 1}
```
So I tested it by converting the dict to a list when doing the if statement, and that works as I would expect it to:
```
>>> uniqueUsers = {}
>>> for m in ms:
... if m.to_user not in list(uniqueUsers):
... uniqueUsers[m.to_user] = 1
...
>>> uniqueUsers
{Pete Kramer: 1}
```
and I can get a similar result by testing against `uniqueUsers.keys()`.
The thing is that I don't understand why this difference occurs. I always thought that if you do `if object in dict`, it simply creates a list of the dict's keys and tests against that, but that's obviously not the case.
Can anybody explain how `object in dict` internally works and why it doesn't behave similar to `object in list` (as I would expect it to)? | TL;DR: The `in` test calls `__eq__` for lists. For dicts, it first calls `__hash__` and if the hash matches, then calls `__eq__`.
1. The `in` test only calls `__eq__` for lists.
	* Without an `__eq__`, the comparison falls back to identity, so the *in-ness* test only succeeds for *the same* object.
2. For dicts, you need a correctly implemented `__hash__` *and* `__eq__` to be able to compare objects in it *correctly*:
* First gets the object's hash from `__hash__`
+ Without `__hash__`, for new-style classes, it uses `id()` which is unique for all objects created and hence never matches an existing one unless it's *the same* object.
+ And as @poke pointed out in a comment:
> In Python 2, new style classes (inheriting from `object`) inherit object's `__hash__` implementation which is based on `id()`, so that's where that comes from.
* If the hash matches, *then* `__eq__` is called for that object with the *`other`*.
+ The result then depends on what `__eq__` returns.
* If the hash *does not* match, then `__eq__` is *not called*.
**So the `in` test calls `__eq__` for lists and for dicts...*but for dicts, only after `__hash__`* returns a matching hash.** And not having a `__hash__` doesn't return `None`, doesn't throw an error and doesn't make it "unhashable". ...in Python 2. To use your `to_user` class correctly as dict keys, you do need to have a [`__hash__` method](https://docs.python.org/2/glossary.html#term-hashable) which is implemented correctly, in sync with `__eq__`.
Details:
The check for `m.to_user not in uniqueUsers` "object in list" worked correctly because you have probably implemented an `__eq__` method, as @poke pointed out. (And it appears `to_user` returns an object, not a string.)
The same check doesn't work for "object in dict" either because:
(a) `__hash__` in that class is badly implemented, as @poke also pointed out.
(b) **Or** you have not implemented `__hash__` at all. This doesn't raise an error in Python 2 new-style classes.
Using [the class in this answer](http://stackoverflow.com/a/17445665/1431750) as a starting point:
```
>>> class Test2(object):
... def __init__(self, name):
... self.name = name
...
... def __eq__(self, other):
... return self.name == other.name
...
>>> test_Dict = {}
>>> test_List = []
>>>
>>> obj1 = Test2('a')
>>> obj2 = Test2('a')
>>>
>>> test_Dict[obj1] = 'x'
>>> test_Dict[obj2] = 'y'
>>>
>>> test_List.append(obj1)
>>> test_List.append(obj2)
>>>
>>> test_Dict
{<__main__.Test2 object at 0x0000000002EFC518>: 'x', <__main__.Test2 object at 0x0000000002EFC940>: 'y'}
>>> test_List
[<__main__.Test2 object at 0x0000000002EFC518>, <__main__.Test2 object at 0x0000000002EFC940>]
>>>
>>> Test2('a') in test_Dict
False
>>> Test2('a') in test_List
True
``` |
set thousands separators in iPython without string formatting | 35,826,844 | 4 | 2016-03-06T12:30:42Z | 35,827,175 | 8 | 2016-03-06T13:07:58Z | [
"python",
"formatting",
"ipython"
] | I'm looking for an answer to the following question for over 4 hours already. Most pages indicate string formatting methods. This is not what I want.
I want to set a parameter in IPython for thousands separators for integers and floats. The option should only affect how numbers are displayed in my interactive session. I want to set a parameter once. All solutions where I need to do some formatting for every new output do not cover my need at all. I do some exploratory data analysis and don't want to bother with number formatting for every line of code.
The format should be used with all integers and floats, including those stored in numpy arrays or pandas dataframes.
For those familiar with Mathematica I indicate how one would do this in Mathematica: go to preferences => appearance => numbers => formatting. There you can "enable automatic number formatting" and choose a "digit block separator".
Example: if I type "600 + 600" into my ipython session, I want the following output: 1'200 (where ' would be my thousands separator).
I use IPython consoles in Spyder and IPython notebooks. Thank you. | If you used `str.format` and `numpy.set_printoptions` you could set it globally once:
```
import numpy as np

# Note: calling np.set_printoptions twice with different formatter dicts
# would leave only the second call in effect, so both kinds go in one call.
np.set_printoptions(formatter={
    'int_kind': lambda x: '{:,}'.format(x).replace(",", "'"),
    'float_kind': lambda x: '{:,}'.format(x).replace(",", "'"),
})

# Plain ints and floats shown by the IPython display hook.
def thousands(arg, p, cycle):
    p.text("{:,}".format(arg).replace(",", "'"))

frm = get_ipython().display_formatter.formatters['text/plain']
frm.for_type(int, thousands)
frm.for_type(float, thousands)
```
It does not cover all bases but you can add more logic:
```
In [2]: arr = np.array([12345,12345])
In [3]: arr
Out[3]: array([12'345, 12'345])
In [4]: 123456
Out[4]: 123'456
In [5]: 123456.343
Out[5]: 123'456.343
```
You can add it to a startup.py script, making sure you set [PYTHONSTARTUP](https://docs.python.org/2/tutorial/appendix.html#the-interactive-startup-file) to point to the file so it loads when you start ipython:
```
~$ ipython2
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
(.startup.py)
(imported datetime, os, pprint, re, sys, time,np,pd)
In [1]: arr = np.array([12345,12345])
In [2]: arr
Out[2]: array([12'345, 12'345])
In [3]: 12345
Out[3]: "12'345"
```
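For reference, pointing `PYTHONSTARTUP` at the file from your shell profile might look like this (the path is just an example; adjust it to wherever you keep the script):

```shell
# Hypothetical location of the startup script -- adjust to taste.
export PYTHONSTARTUP="$HOME/.startup.py"
```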
For pandas it seems you can set [display.float\_format](http://pandas.pydata.org/pandas-docs/stable/options.html#list-of-options) with set\_option
```
In [22]: pd.set_option("display.float_format",lambda x: "{:,}".format(x).replace(",","'"))
In [23]: pd.DataFrame([[12345.3,12345.4]])
Out[23]:
0 1
0 12'345.3 12'345.4
```
Based on [this answer](http://stackoverflow.com/a/29663750/2141635) it seems for later versions of pandas we need to change [`pandas.core.format.IntArrayFormatter`](https://github.com/pydata/pandas/blob/master/pandas/core/format.py#L2208):
So the full startup script would be something like:
```
import IPython
import numpy as np
import pandas as pd
# numpy
np.set_printoptions(formatter={'float_kind': lambda x: '{:,}'.format(x).replace(",", "'"),
'int_kind': lambda x: '{:,}'.format(x).replace(",", "'")})
# pandas
pd.set_option("display.float_format", lambda x: "{:,}".format(x).replace(",", "'"))

class IntFormatter(pd.core.format.GenericArrayFormatter):
    def _format_strings(self):
        formatter = self.formatter or (lambda x: ' {:,}'.format(x).replace(",", "'"))
        fmt_values = [formatter(x) for x in self.values]
        return fmt_values

pd.core.format.IntArrayFormatter = IntFormatter
# general
def thousands(arg, p, cycle):
p.text("{:,}".format(arg).replace(",","'"))
frm = get_ipython().display_formatter.formatters['text/plain']
frm.for_type(int, thousands)
frm.for_type(float, thousands)
```
Which seems to cover most of what you want:
```
IPython 4.0.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
(.startup.py)
(imported datetime, os, pprint, re, sys, time,np,pd)
In [1]: pd.DataFrame([[12345,12345]])
Out[1]:
0 1
0 12'345 12'345
In [2]: pd.DataFrame([[12345,12345.345]])
Out[2]:
0 1
0 12'345 12'345.345
In [3]: np.array([12345,678910])
Out[3]: array([12'345, 678'910])
In [4]: np.array([12345.321,678910.123])
Out[4]: array([12'345.321, 678'910.123])
In [5]: 100000
Out[5]: 100'000
In [6]: 100000.123
Out[6]: 100'000.123
In [7]: 10000000
Out[7]: 10'000'000
``` |
'if' not followed by conditional statement | 35,826,987 | 5 | 2016-03-06T12:45:31Z | 35,827,051 | 7 | 2016-03-06T12:54:10Z | [
"python",
"if-statement",
"conditional"
] | I'm going through Zed's ["Learn Python The Hard Way"](http://learnpythonthehardway.org/) and I'm on ex49. I'm quite confused by the following code he gives:
```
def peek(word_list):
if word_list: # this gives me trouble
word = word_list[0]
return word[0]
else:
return None
```
The condition of the `if` statement is giving me trouble, as commented. I'm not sure what this means as `word_list` is an object, not a conditional statement. How can `word_list`, just by itself, follow `if`? | The `if` statement applies the built-in **[`bool()`](https://docs.python.org/2/library/functions.html#bool)** function to the expression which follows. In your case, the code-block inside the `if` statement only runs if `bool(word_list)` is `True`.
Different objects in Python evaluate to either `True` or `False` in a Boolean context. These objects are considered to be 'Truthy' or 'Falsy'. For example:
```
In [180]: bool('abc')
Out[180]: True
In [181]: bool('')
Out[181]: False
In [182]: bool([1, 2, 4])
Out[182]: True
In [183]: bool([])
Out[183]: False
In [184]: bool(None)
Out[184]: False
```
The above are examples of the fact that:
* strings of length `>= 1` are Truthy.
* empty strings are Falsy.
* lists of length `>= 1` are Truthy.
* empty lists are Falsy.
* `None` is Falsy.
So: `if word_list` will evaluate to `True` if it is a non-empty list. However, if it is an empty list or `None` it will evaluate to `False`. |
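Applying that to the `peek` function from the question, a quick sketch (the list-of-tuples shape of `word_list` is assumed from the surrounding exercise):

```python
def peek(word_list):
    if word_list:              # truthy: non-empty list
        word = word_list[0]
        return word[0]
    else:                      # falsy: empty list (or None)
        return None

first = peek([('noun', 'bear'), ('verb', 'eat')])  # returns 'noun'
nothing = peek([])                                  # returns None
```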
Python 3: super() raises TypeError unexpectedly | 35,829,643 | 15 | 2016-03-06T16:51:44Z | 35,829,679 | 12 | 2016-03-06T16:55:19Z | [
"python",
"inheritance",
"typeerror",
"superclass",
"super"
] | Coming from Java, I'm struggling a bit getting down inheritance, abstract classes, static methods and similar concepts of OO programming in Python.
I have an implementation of an expression tree class, given (simplified) by
```
# Generic node class
class Node(ABC):
@abstractmethod
def to_expr(self):
pass
@staticmethod
def bracket_complex(child):
s = child.to_expr()
return s if isinstance(child, Leaf) or isinstance(child, UnaryOpNode) else "(" + s + ")"
# Leaf class - used for values and variables
class Leaf(Node):
def __init__(self, val):
self.val = val
def to_expr(self):
return str(self.val)
# Unary operator node
class UnaryOpNode(Node):
def __init__(self, op, child):
self.op = op
self.child = child
def to_expr(self):
return str(self.op) + super().bracket_complex(self.child)
# Binary operator node
class BinaryOpNode(Node):
def __init__(self, op, lchild, rchild):
self.op = op
self.lchild = lchild
self.rchild = rchild
def to_expr(self):
return super().bracket_complex(self.lchild) + " " + str(self.op) + " " + super().bracket_complex(self.rchild)
# Variadic operator node (arbitrary number of arguments)
# Assumes commutative operator
class VariadicOpNode(Node):
def __init__(self, op, list_):
self.op = op
self.children = list_
def to_expr(self):
return (" " + str(self.op) + " ").join(super().bracket_complex(child) for child in self.children)
```
The method `to_expr()` works fine when called on instances of `Leaf`, `UnaryOpNode` and `BinaryOpNode`, but raises a `TypeError` when called on an instance of `VariadicOpNode`:
```
TypeError: super(type, obj): obj must be an instance or subtype of type
```
What am I doing wrong in that specific class that `super()` is suddenly not working?
In Java the static method would get inherited so I wouldn't even need the super call, but in Python this does not seem to be the case. | You're using `super()` without arguments in a generator expression. The `super()` is magic - it relies on information in the caller frame. Since the generator expression creates an additional function, `super()` without arguments does not work there. However since your superclass is not probable to change *in the middle of execution of a method*, you can move it out of the generator expression - this should also speed things up:
```
def to_expr(self):
bracket_complex = super().bracket_complex
return (" " + str(self.op) + " ").join(bracket_complex(child) for child in self.children)
```
However as static methods are "inherited" in Python, you could call the super method via `self` provided that you didn't override it in a subclass. Thus in this simple case you can write:
```
def to_expr(self):
return (" " + str(self.op) + " ").join(self.bracket_complex(child) for child in self.children)
```
---
The implementation detail is that if no arguments are provided, the first argument shall be the value that is in the `__class__` cell of the caller frame, and second shall be the first argument given to the caller function. Usually you just get a `SystemError` when using `super` in a wrong place, but generator expressions are wrapped inside an implicit generator function which creates another call frame. Unfortunately this function gets an argument, which leads the `super()` to complain with this exception.
So normally `super()` would be passed the enclosing class there as the first argument, but within the generator expression, a generator object was passed - and thus it is obvious that `TypeError` needs to be raised. |
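A self-contained sketch of the fix (simplified classes, names invented for illustration): hoisting the bound method out of the generator expression keeps the zero-argument `super()` call in the method's own frame, where it works.

```python
class Node(object):
    @staticmethod
    def bracket(child):
        return "(" + str(child) + ")"

class VariadicOpNode(Node):
    def __init__(self, op, children):
        self.op = op
        self.children = children

    def to_expr(self):
        # super() runs here, in the method frame -- not inside the
        # genexp's implicit function, where it would raise TypeError.
        bracket = super().bracket
        return (" " + str(self.op) + " ").join(bracket(c) for c in self.children)

expr = VariadicOpNode('+', [1, 2, 3]).to_expr()
```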
Python 3: super() raises TypeError unexpectedly | 35,829,643 | 15 | 2016-03-06T16:51:44Z | 35,829,773 | 9 | 2016-03-06T17:04:20Z | [
"python",
"inheritance",
"typeerror",
"superclass",
"super"
] | Coming from Java, I'm struggling a bit getting down inheritance, abstract classes, static methods and similar concepts of OO programming in Python.
I have an implementation of an expression tree class, given (simplified) by
```
# Generic node class
class Node(ABC):
@abstractmethod
def to_expr(self):
pass
@staticmethod
def bracket_complex(child):
s = child.to_expr()
return s if isinstance(child, Leaf) or isinstance(child, UnaryOpNode) else "(" + s + ")"
# Leaf class - used for values and variables
class Leaf(Node):
def __init__(self, val):
self.val = val
def to_expr(self):
return str(self.val)
# Unary operator node
class UnaryOpNode(Node):
def __init__(self, op, child):
self.op = op
self.child = child
def to_expr(self):
return str(self.op) + super().bracket_complex(self.child)
# Binary operator node
class BinaryOpNode(Node):
def __init__(self, op, lchild, rchild):
self.op = op
self.lchild = lchild
self.rchild = rchild
def to_expr(self):
return super().bracket_complex(self.lchild) + " " + str(self.op) + " " + super().bracket_complex(self.rchild)
# Variadic operator node (arbitrary number of arguments)
# Assumes commutative operator
class VariadicOpNode(Node):
def __init__(self, op, list_):
self.op = op
self.children = list_
def to_expr(self):
return (" " + str(self.op) + " ").join(super().bracket_complex(child) for child in self.children)
```
The method `to_expr()` works fine when called on instances of `Leaf`, `UnaryOpNode` and `BinaryOpNode`, but raises a `TypeError` when called on an instance of `VariadicOpNode`:
```
TypeError: super(type, obj): obj must be an instance or subtype of type
```
What am I doing wrong in that specific class that `super()` is suddenly not working?
In Java the static method would get inherited so I wouldn't even need the super call, but in Python this does not seem to be the case. | Answering your implied question:
> In Java the static method would get inherited so I wouldn't even need
> the super call, but in Python this does not seem to be the case.
`staticmethod`s **are** inherited:
```
class A:
@staticmethod
def a():
print('Hello')
class B(A):
def b(self):
self.a()
b = B()
b.a()
b.b()
```
outputs:
```
Hello
Hello
```
Note that you *cannot* simply write:
```
class B(A):
def b(self):
a()
```
Python will **never** resolve a simple name to a method/staticmethod; for Python `a()` *must* be a function call, local or global. You must either reference the instance using `self.a` or the class using `B.a`.
In python the `self` is *explicit* as is the current class reference. Do not confuse with the implicit `this` of Java. |
Is there a one line code to find maximal value in a matrix? | 35,837,346 | 11 | 2016-03-07T06:17:18Z | 35,837,378 | 8 | 2016-03-07T06:19:38Z | [
"python",
"for-loop",
"max"
] | To find the maximal value in a matrix of numbers, we can code 5 lines to solve the problem:
```
ans = matrix[0][0]
for x in range(len(matrix)):
for y in range(len(matrix[0])):
ans = max(ans, matrix[x][y])
return ans
```
Is there a one line solution for this problem?
The one that I came up with is pretty awkward actually:
```
return max(max(matrix, key=max))
```
or
```
return max(map(max, matrix))
``` | By matrix, I assume you mean a 2d-list.
```
max([max(i) for i in matrix])
``` |
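Another stdlib-only spelling flattens the matrix lazily with `itertools.chain` and takes one `max` over everything (not from the original answer, just an alternative):

```python
from itertools import chain

matrix = [[1, 9, 3], [7, 2, 5]]
# chain.from_iterable yields 1, 9, 3, 7, 2, 5 without building a new list.
maximum = max(chain.from_iterable(matrix))
```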
Is there a one line code to find maximal value in a matrix? | 35,837,346 | 11 | 2016-03-07T06:17:18Z | 35,837,818 | 14 | 2016-03-07T06:53:30Z | [
"python",
"for-loop",
"max"
] | To find the maximal value in a matrix of numbers, we can code 5 lines to solve the problem:
```
ans = matrix[0][0]
for x in range(len(matrix)):
for y in range(len(matrix[0])):
ans = max(ans, matrix[x][y])
return ans
```
Is there a one line solution for this problem?
The one that I came up with is pretty awkward actually:
```
return max(max(matrix, key=max))
```
or
```
return max(map(max, matrix))
``` | You can use [generator expression](https://www.python.org/dev/peps/pep-0289/) to find the maximum in your matrix. That way you can avoid building the full list of matrix elements in memory.
```
maximum = max(max(row) for row in matrix)
```
instead of list comprehension as given in a previous answer [here](http://stackoverflow.com/a/35837378/3375713)
```
maximum = max([max(row) for row in matrix])
```
This is from PEP (the [rationale](https://www.python.org/dev/peps/pep-0289/#rationale) section):
> ...many of the use cases do not need to have a full list created in
> memory. Instead, **they only need to iterate over the elements one at a
> time.**
>
> ...
>
> Generator expressions are especially useful with functions like sum(), min(), and max() that reduce an iterable input to a single value
>
> ...
>
> The utility of generator expressions is greatly enhanced when combined with reduction functions like sum(), min(), and **max()**.
Also, take a look at this SO post: [Generator Expressions vs. List Comprehension](http://stackoverflow.com/questions/47789/generator-expressions-vs-list-comprehension). |
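Both spellings on a concrete matrix, for comparison:

```python
matrix = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]

# Generator expression: max() consumes the row maxima one at a time.
gen_max = max(max(row) for row in matrix)

# List comprehension: builds the intermediate list [4, 9, 6] first.
list_max = max([max(row) for row in matrix])
```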
RuntimeError: 'list' must be None or a list, not <class 'str'> while trying to start celery worker | 35,838,989 | 6 | 2016-03-07T08:12:44Z | 37,029,454 | 12 | 2016-05-04T13:35:20Z | [
"python",
"django",
"celery"
] | I am trying to add a celery task while following [First Steps With Django](http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html) but I get the following error:
```
Traceback (most recent call last):
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/bin/celery", line 11, in <module>
sys.exit(main())
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/__main__.py", line 30, in main
main()
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/celery.py", line 770, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/base.py", line 311, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/celery.py", line 762, in handle_argv
return self.execute(command, argv)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/celery.py", line 694, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/worker.py", line 179, in run_from_argv
return self(*args, **options)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/base.py", line 274, in __call__
ret = self.run(*args, **kwargs)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/bin/worker.py", line 212, in run
state_db=self.node_format(state_db, hostname), **kwargs
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/worker/__init__.py", line 95, in __init__
self.app.loader.init_worker()
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/loaders/base.py", line 128, in init_worker
self.import_default_modules()
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/loaders/base.py", line 116, in import_default_modules
signals.import_modules.send(sender=self.app)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/utils/dispatch/signal.py", line 166, in send
response = receiver(signal=self, sender=sender, **named)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/amqp/utils.py", line 42, in __call__
self.set_error_state(exc)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/amqp/utils.py", line 39, in __call__
**dict(self.kwargs, **kwargs) if self.kwargs else kwargs
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/app/base.py", line 330, in _autodiscover_tasks
self.loader.autodiscover_tasks(packages, related_name)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/loaders/base.py", line 252, in autodiscover_tasks
related_name) if mod)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/loaders/base.py", line 273, in autodiscover_tasks
return [find_related_module(pkg, related_name) for pkg in packages]
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/loaders/base.py", line 273, in <listcomp>
return [find_related_module(pkg, related_name) for pkg in packages]
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/loaders/base.py", line 295, in find_related_module
_imp.find_module(related_name, pkg_path)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/imp.py", line 270, in find_module
"not {}".format(type(name)))
RuntimeError: 'list' must be None or a list, not <class 'str'>
```
This is my project structure:
* project
+ config
- settings
* base.py
* local.py
* production.py
- celery.py # has a celery app
- urls.py
- wsgi.py
+ miscellaneous
- models.py
- views.py
- tasks.py # has a celery task
This is my config/celery.py:
```
from __future__ import absolute_import
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.local')
from django.conf import settings
app = Celery('config')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
```
This is my config/settings/base.py:
```
THIRD_PARTY_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.postgres',
'django.contrib.gis',
'oauth2_provider',
'rest_framework',
'rest_framework_gis',
'import_export',
'braces',
'social.apps.django_app.default',
'rest_framework_social_oauth2',
]
CUSTOM_APPS = [
'miscellaneous',
# more apps
]
INSTALLED_APPS = THIRD_PARTY_APPS + CUSTOM_APPS
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/Kolkata'
```
This is my config/settings/local.py:
```
from .base import *
LOCAL_APPS = [
'debug_toolbar',
]
INSTALLED_APPS.extend(LOCAL_APPS)
```
This is my miscellaneous/tasks.py:
```
from celery import shared_task
@shared_task
def log_request_meta(request_meta):
# this is not complete yet
return {"a": "b"}
```
Another strange thing is that when I comment out this line:
```
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
```
in config/celery.py and try to start celery worker, the worker starts! (and also discovers the celery task in miscellaneous/tasks.py)
But then, when I try to call this task from a middleware, I get the following error:
```
Traceback (most recent call last):
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/django/core/handlers/base.py", line 123, in get_response
response = middleware_method(request)
File "/Users/amrullahzunzunia/Developer/flyrobe-django/project/miscellaneous/middleware.py", line 12, in process_request
self.__send_to_queue(request_meta)
File "/Users/amrullahzunzunia/Developer/flyrobe-django/project/miscellaneous/middleware.py", line 27, in __send_to_queue
log_request_meta.delay(request_meta)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/app/task.py", line 453, in delay
return self.apply_async(args, kwargs)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/app/task.py", line 565, in apply_async
**dict(self._get_exec_options(), **options)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/app/base.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/celery/app/amqp.py", line 305, in publish_task
**kwargs
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/kombu/messaging.py", line 165, in publish
compression, headers)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/kombu/messaging.py", line 241, in _prepare
body) = dumps(body, serializer=serializer)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/kombu/serialization.py", line 164, in dumps
payload = encoder(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/kombu/serialization.py", line 59, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/kombu/five.py", line 131, in reraise
raise value.with_traceback(tb)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/kombu/serialization.py", line 55, in _reraise_errors
yield
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/kombu/serialization.py", line 164, in dumps
payload = encoder(data)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/anyjson/__init__.py", line 141, in dumps
return implementation.dumps(value)
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/anyjson/__init__.py", line 89, in dumps
raise TypeError(TypeError(*exc.args)).with_traceback(sys.exc_info()[2])
File "/Users/amrullahzunzunia/virtualenvs/flyrobe_new/lib/python3.5/site-packages/anyjson/__init__.py", line 87, in dumps
return self._encode(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/__init__.py", line 230, in dumps
return _default_encoder.encode(obj)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py", line 180, in default
raise TypeError(repr(o) + " is not JSON serializable")
kombu.exceptions.EncodeError: <class 'wsgiref.util.FileWrapper'> is not JSON serializable
```
I am using python-3.5 and django 1.9. I am unable to figure out what I missed or where I am mistaken, because I did exactly what was in the tutorial mentioned above.
**Update:**
Celery version is 3.1.20 | This is how I solved the problem:
I saw that one of my apps was missing `__init__.py`, which caused a problem with `app.autodiscover_tasks(settings.INSTALLED_APPS)`
I added the missing `__init__.py` and the celery worker started without any issues |
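To check a project for the same issue, a quick (hypothetical) shell loop run from the directory containing your apps lists subdirectories that lack an `__init__.py`:

```shell
# Print every immediate subdirectory that is missing an __init__.py.
for d in */; do
    [ -f "${d}__init__.py" ] || echo "missing __init__.py: $d"
done
```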
How Gunicorn forward request to flask | 35,845,730 | 3 | 2016-03-07T14:02:39Z | 35,846,251 | 7 | 2016-03-07T14:28:38Z | [
"python",
"flask",
"gunicorn"
] | Can anyone describe the process of how `Gunicorn` forwards the request to `Flask` internally?
It would be great if someone could explain each and every step involved, from Gunicorn receiving the request to forwarding it to Flask, and the way back.
Please keep in mind while explaining that I am a newbie in this area. | Gunicorn and Flask talk through [WSGI](https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface), which has two sides: the server side and the application side.
On the application (framework) side, we need to provide a callable; the simplest example:
```
def application(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Hello World']
```
The server will call this application, providing environment information and a callback function which is used to indicate the start of a response. When the server gets the response, it will return it to the browser.
so, for gunicorn and flask:
```
from flask import Flask
app = Flask(__name__)
```
When you do this, you've actually got a WSGI-compatible application; `app` is a callable:
```
class Flask(object):
    ...
    def __call__(self, environ, start_response):
        """Shortcut for :attr:`wsgi_app`."""
        return self.wsgi_app(environ, start_response)
```

[source](https://github.com/mitsuhiko/flask/blob/master/flask/app.py#L1976)
and when you run `gunicorn app:app`, you're telling gunicorn where to load your application, [source](https://github.com/benoitc/gunicorn/blob/master/gunicorn/app/wsgiapp.py#L52)
When a request comes in, Gunicorn parses it and constructs a dict `environ`, which is defined [here](https://www.python.org/dev/peps/pep-0333/#environ-variables) and contains information like `REQUEST_METHOD`, `QUERY_STRING`, etc. It then calls the application (a Flask object!) with it: `app(environ, start_response)` [source](https://github.com/benoitc/gunicorn/blob/master/gunicorn/workers/sync.py#L171). `start_response` is a callback in Gunicorn that receives the response status and headers, and the return value of the `app` call will be sent as the response body.
Why it's not ok for variables to be global but it's ok for functions? | 35,851,455 | 5 | 2016-03-07T18:43:48Z | 35,851,566 | 7 | 2016-03-07T18:49:02Z | [
"python",
"function",
"variables",
"closures",
"global"
] | I was writing some Python code and, as usual, I try to make my functions small and give them a clear name (although sometimes a little too long). I get to the point where there are no global variables and everything a function needs is passed to it.
But I thought, in this case, every function has access to any other function. Why not limit their access to other functions just like we limit the access to other variables.
I was thinking to use nested functions but that implies closures and that's even worse for my purpose.
I was also thinking about using objects and I think this is the point of OOP, although it'll be a little too much boilerplate in my case.
Has anyone thought about this problem, and what's the solution? | It is not a good idea to have global *mutable* data, e.g. variables. The mutability is the key here. You can have constants and functions to your heart's content.
But as soon as you write functions that rely on globally mutable state it limits the reusability of your functions - they're always bound to that one shared state. |
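A minimal sketch of the difference between state that is shared globally and state that is passed in explicitly:

```python
# Bound to shared, mutable global state: every call touches the same list.
cart = []
def add_to_cart(item):
    cart.append(item)

# Reusable: the state it operates on is passed in explicitly.
def added(items, item):
    return items + [item]

add_to_cart('apple')
print(cart)               # ['apple'] -- a hidden side effect on the global
print(added([], 'pear'))  # ['pear'] -- no shared state involved
```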
How to use openCV's connected components with stats in python? | 35,854,197 | 3 | 2016-03-07T21:16:44Z | 35,854,198 | 7 | 2016-03-07T21:16:44Z | [
"python",
"opencv",
"connected-components"
] | I am looking for an example of how to use OpenCV's ConnectedComponentsWithStats() function in python, note this is only available with OpenCV 3 or newer. The official documentation only shows the API for C++, even though the function exists when compiled for python. I could not find it anywhere online. | The function works as follows:
```
# Import the cv2 library
import cv2
# Read the image you want connected components of
src = cv2.imread('/directorypath/image.bmp')
# Threshold it so it becomes binary
ret, thresh = cv2.threshold(src,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# You need to choose 4 or 8 for connectivity type
connectivity = 4
# Perform the operation
output = cv2.connectedComponentsWithStats(thresh, connectivity, cv2.CV_32S)
# Get the results
# The first cell is the number of labels
num_labels = output[0]
# The second cell is the label matrix
labels = output[1]
# The third cell is the stat matrix
stats = output[2]
# The fourth cell is the centroid matrix
centroids = output[3]
```
**Labels** is a matrix the size of the input image where each element has a value equal to its label.
**Stats** is a matrix of the stats that the function calculates. It has one row per label and one column per statistic. The columns are described in the OpenCV documentation:
> Statistics output for each label, including the background label, see
> below for available statistics. Statistics are accessed via
> **stats[label, COLUMN]** where available columns are defined below.
>
> * **cv2.CC\_STAT\_LEFT** The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction.
> * **cv2.CC\_STAT\_TOP** The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction.
> * **cv2.CC\_STAT\_WIDTH** The horizontal size of the bounding box
> * **cv2.CC\_STAT\_HEIGHT** The vertical size of the bounding box
> * **cv2.CC\_STAT\_AREA** The total area (in pixels) of the connected component
**Centroids** is a matrix with the x and y locations of each centroid. The row in this matrix corresponds to the label number. |
What's the best way to "periodically" replace characters in a string in Python? | 35,854,392 | 8 | 2016-03-07T21:28:05Z | 35,854,462 | 10 | 2016-03-07T21:31:33Z | [
"python",
"python-3.x",
"replace"
] | I have a string where a character ('@') needs to be replaced by characters from a list of one or more characters "in order" and "periodically".
So for example I have
`'ab@cde@@fghi@jk@lmno@@@p@qrs@tuvwxy@z'`
and want
`'ab1cde23fghi1jk2lmno312p3qrs1tuvwxy2z'`
for `replace_chars = ['1', '2', '3']`
The problem is that in this example there are more @ in the string
than I have replacers.
This is my try:
```
result = ''
replace_chars = ['1', '2', '3']
string = 'ab@cde@@fghi@jk@lmno@@@p@qrs@tuvwxy@z'
i = 0
for char in string:
if char == '@':
result += replace_chars[i]
i += 1
else:
result += char
print(result)
```
but this only works of course if there are not more than three @ in the original string and otherwise I get **IndexError**.
Edit: Thanks for the answers! | Your code could be fixed by adding the line `i = i%len(replace_chars)` as the last line of your `if` clause. This way you will be taking the remainder from the division of `i` by the length of your list of replacement characters.
The shorter solution is to use a generator that periodically spits out replacement characters.
```
>>> from itertools import cycle
>>> s = 'ab@cde@@fghi@jk@lmno@@@p@qrs@tuvwxy@z'
>>> replace_chars = ['1', '2', '3']
>>>
>>> replacer = cycle(replace_chars)
>>> ''.join([next(replacer) if c == '@' else c for c in s])
'ab1cde23fghi1jk2lmno312p3qrs1tuvwxy2z'
```
For each character `c` in your string `s`, we get the next replacement character from the `replacer` generator if the character is an `'@'`, otherwise it just gives you the original character.
For an explanation why I used a list comprehension instead of a generator expression, read [this](http://stackoverflow.com/questions/9060653/list-comprehension-without-python/9061024#9061024). |
Efficiently count word frequencies in python | 35,857,519 | 16 | 2016-03-08T01:52:21Z | 35,857,833 | 23 | 2016-03-08T02:30:21Z | [
"python",
"nlp",
"scikit-learn",
"word-count",
"frequency-distribution"
] | I'd like to count frequencies of all words in a text file.
```
>>> countInFile('test.txt')
```
should return `{'aaa':1, 'bbb': 2, 'ccc':1}` if the target text file is like:
```
# test.txt
aaa bbb ccc
bbb
```
I've implemented it with pure Python following [some posts](http://stackoverflow.com/questions/12117576/how-to-count-word-frequencies-within-a-file-in-python). However, I've found that pure-Python approaches are insufficient due to the huge file size (> 1 GB).
I think borrowing sklearn's power is a candidate.
If you let CountVectorizer count frequencies for each line, I guess you will get word frequencies by summing up each column. But that sounds a bit indirect.
What is the most efficient and straightforward way to count words in a file with python?
### Update
My (very slow) code is here:
```
from collections import Counter
def get_term_frequency_in_file(source_file_path):
wordcount = {}
with open(source_file_path) as f:
for line in f:
line = line.lower().translate(None, string.punctuation)
this_wordcount = Counter(line.split())
wordcount = add_merge_two_dict(wordcount, this_wordcount)
return wordcount
def add_merge_two_dict(x, y):
return { k: x.get(k, 0) + y.get(k, 0) for k in set(x) | set(y) }
``` | The most succinct approach is to use the tools Python gives you.
```
from future_builtins import map # Only on Python 2
from collections import Counter
from itertools import chain
def countInFile(filename):
with open(filename) as f:
return Counter(chain.from_iterable(map(str.split, f)))
```
That's it. `map(str.split, f)` is making a generator that returns `list`s of words from each line. Wrapping in `chain.from_iterable` converts that to a single generator that produces a word at a time. `Counter` takes an input iterable and counts all unique values in it. At the end, you `return` a `dict`-like object (a `Counter`) that stores all unique words and their counts, and during creation, you only store a line of data at a time and the total counts, not the whole file at once.
In theory, you might do slightly better looping over the chained results yourself and using a `dict` or `collections.defaultdict(int)` to count (because `Counter` is implemented in Python, which can make it slower in some cases), but letting `Counter` do the work is simpler and more self-documenting (I mean, the whole goal is counting, so use a `Counter`).
**Update:** You seem to want punctuation stripped and case-insensitivity, so here's a variant of my earlier code that does that:
```
from string import punctuation
def countInFile(filename):
with open(filename) as f:
linewords = (line.translate(None, punctuation).lower().split() for line in f)
return Counter(chain.from_iterable(linewords))
```
Your code runs much more slowly because it's creating and destroying many small `Counter` and `set` objects, rather than `.update`-ing a single `Counter` once per line (which, while slightly slower than what I gave in the updated code block, would be at least algorithmically similar in scaling factor). |
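For reference, that single-`Counter` `.update` variant might look like this (shown with the Python 3 spelling of `str.translate`, since the `translate(None, punctuation)` form is Python 2 only):

```python
from collections import Counter
from string import punctuation

def count_in_file(filename):
    remove_punct = str.maketrans('', '', punctuation)
    counts = Counter()
    with open(filename) as f:
        for line in f:
            counts.update(line.translate(remove_punct).lower().split())
    return counts
```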
Create empty conda environment | 35,860,436 | 13 | 2016-03-08T06:23:18Z | 37,216,638 | 29 | 2016-05-13T18:07:29Z | [
"python",
"anaconda"
] | I can create a new conda environment, with program `biopython` with this:
```
conda create --name snowflakes biopython
```
What if I do not want to install any program? It seems I can not do that:
```
» conda create --name tryout
Error: too few arguments, must supply command line package specs or --file
You can specify one or more default packages to install when creating
an environment. Doing so allows you to call conda create without
explicitly providing any package names.
To set the provided packages, call conda config like this:
conda config --add create_default_packages PACKAGE_NAME
``` | You can give a package name of just "python" to get a base, empty install.
```
conda create --name myenv python
conda create --name myenv python=3.4
``` |
NLTK Lookup Error | 35,861,482 | 3 | 2016-03-08T07:29:33Z | 35,862,172 | 7 | 2016-03-08T08:12:44Z | [
"python",
"python-2.7",
"nltk"
] | While running a Python script using NLTK I got this:
```
Traceback (most recent call last):
File "cpicklesave.py", line 56, in <module>
pos = nltk.pos_tag(words)
File "/usr/lib/python2.7/site-packages/nltk/tag/__init__.py", line 110, in pos_tag
tagger = PerceptronTagger()
File "/usr/lib/python2.7/site-packages/nltk/tag/perceptron.py", line 140, in __init__
AP_MODEL_LOC = str(find('taggers/averaged_perceptron_tagger/'+PICKLE))
File "/usr/lib/python2.7/site-packages/nltk/data.py", line 641, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource u'taggers/averaged_perceptron_tagger/averaged_perceptro
n_tagger.pickle' not found. Please use the NLTK Downloader to
obtain the resource: >>> nltk.download()
Searched in:
- '/root/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
**********************************************************************
```
Can anyone explain the problem? | Use
```
>>> nltk.download()
```
to install the missing resource (the *averaged perceptron tagger*). You can also fetch just that resource non-interactively with `nltk.download('averaged_perceptron_tagger')`.
(check also the answers to [Failed loading english.pickle with nltk.data.load](http://stackoverflow.com/questions/4867197/failed-loading-english-pickle-with-nltk-data-load)) |
Cleanest way to obtain the numeric prefix of a string | 35,866,847 | 24 | 2016-03-08T12:01:50Z | 35,867,020 | 52 | 2016-03-08T12:09:41Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What is the cleanest way to obtain the numeric prefix of a string in Python?
By "clean" I mean simple, short, readable. I couldn't care less about performance, and I suppose that it is hardly measurable in Python anyway.
For example:
Given the string `'123abc456def'`, what is the cleanest way to obtain the string `'123'`?
The code below obtains `'123456'`:
```
input = '123abc456def'
output = ''.join(c for c in input if c in '0123456789')
```
So I am basically looking for some way to replace the `if` with a `while`. | You can use [`itertools.takewhile`](https://docs.python.org/3/library/itertools.html#itertools.takewhile), which will iterate over your string (the iterable argument) until it encounters the first item for which the predicate function returns `False`:
```
>>> from itertools import takewhile
>>> input = '123abc456def'
>>> ''.join(takewhile(str.isdigit, input))
'123'
``` |
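If you need the prefix as a number rather than a string, guard against inputs with no leading digits — a small helper sketch:

```python
from itertools import takewhile

def numeric_prefix(s):
    digits = ''.join(takewhile(str.isdigit, s))
    return int(digits) if digits else None

print(numeric_prefix('123abc456def'))  # 123
print(numeric_prefix('abc'))           # None
```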
Cleanest way to obtain the numeric prefix of a string | 35,866,847 | 24 | 2016-03-08T12:01:50Z | 35,867,041 | 13 | 2016-03-08T12:10:43Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What is the cleanest way to obtain the numeric prefix of a string in Python?
By "clean" I mean simple, short, readable. I couldn't care less about performance, and I suppose that it is hardly measurable in Python anyway.
For example:
Given the string `'123abc456def'`, what is the cleanest way to obtain the string `'123'`?
The code below obtains `'123456'`:
```
input = '123abc456def'
output = ''.join(c for c in input if c in '0123456789')
```
So I am basically looking for some way to replace the `if` with a `while`. | This is the simplest way to extract a list of numbers from a string:
```
>>> import re
>>> input = '123abc456def'
>>> re.findall('\d+', input)
['123','456']
```
If you need a list of ints, then you might use the following code:
```
>>> map(int, re.findall('\d+', input))
[123,456]
```
And now you can access the first element of the list with `[0]`. |
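Note that `findall` returns *all* digit runs; if you only want the leading prefix, `re.match` anchors at the start of the string:

```python
import re

s = '123abc456def'
m = re.match(r'\d+', s)
prefix = m.group(0) if m else ''
print(prefix)  # 123
```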
python: break list into pieces when an empty entry is found | 35,869,725 | 3 | 2016-03-08T14:15:30Z | 35,869,835 | 8 | 2016-03-08T14:20:25Z | [
"python",
"list"
] | Lets say I have a list that looks like:
```
[
[],
['blah','blah'],
['a','b'],
[],
['abc','2'],
['ff','a'],
['test','a'],
[],
['123','1'],
[]
]
```
How do I break this list into a list of lists whenever an empty item is encountered,
so that `list[0]` would have:
```
['blah','blah']
['a','b']
```
list[1] would have:
```
['abc','2']
['ff','a']
['test','a']
``` | You can use [itertools.groupby](https://docs.python.org/3.5/library/itertools.html#itertools.groupby), using bool as the key:
```
from itertools import groupby
lst = [list(v) for k,v in groupby(l, key=bool) if k]
```
Demo:
```
In [22]: from itertools import groupby
In [23]: lst = [list(v) for k,v in groupby(l,key=bool) if k]
In [24]: lst[1]
Out[24]: [['abc', '2'], ['ff', 'a'], ['test', 'a']]
In [25]: lst[0]
Out[25]: [['blah', 'blah'], ['a', 'b']]
```
`k` will be False for each empty list and True for all non-empty lists.
```
In [26]: bool([])
Out[26]: False
In [27]: bool([1])
Out[27]: True
In [28]: bool([1,1,3])
Out[28]: True
``` |
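If `groupby` feels too magic, the same grouping can be done with a plain loop — a straightforward alternative sketch:

```python
l = [[], ['blah', 'blah'], ['a', 'b'], [], ['abc', '2'],
     ['ff', 'a'], ['test', 'a'], [], ['123', '1'], []]

groups, current = [], []
for item in l:
    if item:              # non-empty sublist: keep collecting
        current.append(item)
    elif current:         # empty sublist closes the current group
        groups.append(current)
        current = []
if current:               # flush a trailing group, if any
    groups.append(current)

print(groups[0])  # [['blah', 'blah'], ['a', 'b']]
print(groups[1])  # [['abc', '2'], ['ff', 'a'], ['test', 'a']]
```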
Remove all elements of a set that contain a specific char | 35,877,933 | 4 | 2016-03-08T21:01:54Z | 35,878,019 | 8 | 2016-03-08T21:07:25Z | [
"python",
"set"
] | I have a set of a few thousand primes generated from a generator:
```
primes = set(primegen()) = set([..., 89, 97, 101, 103, ...])
```
Some of those primes have a zero in them. I would like to get rid of them. Is there a way to do this all at once?
Currently I am removing elements as I loop through primes, with a regex match:
```
import re
zero = re.compile('.+0.+')
while primes:
p = str(primes.pop())
if zero.match(p):
continue
# do other stuff
```
I think this is the best way, but am curious if I'm wrong. | You could use a set comprehension to filter your existing set of primes.
```
primes = {p for p in primes if '0' not in str(p)}
``` |
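For example, on a small sample:

```python
primes = {89, 97, 101, 103, 107, 109, 113}
primes = {p for p in primes if '0' not in str(p)}
print(sorted(primes))  # [89, 97, 113]
```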
Reset print function in python | 35,893,962 | 2 | 2016-03-09T14:12:08Z | 35,893,969 | 10 | 2016-03-09T14:12:44Z | [
"python",
"python-3.x"
] | I just started looking into the Python and need help on this.
In the shell I did
```
>>> print = 1
```
Now when I tried to print anything like
```
>>> print ("hello")
```
I am getting `TypeError: 'int' object is not callable`, obviously because `print` is now an int.
I am able to figure out that if I restart the shell, print starts working fine again.
What I want to know is how I can reset `print` to its original state, i.e. printing to the console, without restarting the shell? | You created a global that *masks* the built-in name. Use `del` to remove the new global; Python will then find the built-in again:
```
del print
```
Python looks for `print` through the current scope (in functions that includes locals and any parent scopes), then globals, then the [built-in namespace](https://docs.python.org/3/library/builtins.html), and it is in the latter that the `print()` function lives. |
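A quick demonstration of the shadowing and the recovery (run at module scope, as in the shell):

```python
print = 1            # shadows the built-in with a global
try:
    print('hello')
except TypeError as e:
    error = str(e)   # "'int' object is not callable"

del print            # remove the global; the built-in is visible again
print('hello')       # works
```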
how to print 3x3 array in python? | 35,903,828 | 15 | 2016-03-09T22:19:05Z | 35,903,934 | 8 | 2016-03-09T22:27:06Z | [
"python"
] | I need to print a 3 x 3 array for a game called TicTackToe.py. I know we can print stuff from a list in a horizontal or vertical way by using
```
listA=['a','b','c','d','e','f','g','h','i','j']
# VERTICAL PRINTING
for item in listA:
print item
```
Output:
```
a
b
c
```
or
```
# HORIZONTAL PRINTING
for item in listA:
print item,
```
Output:
```
a b c d e f g h i j
```
How can I print a mix of both, e.g. printing a 3x3 box
like
```
a b c
d e f
g h i
``` | A simple approach would be to use the modulo operator:
```
listA=['a','b','c','d','e','f','g','h','i','j']
count = 0
for item in listA:
if not count % 3:
print
print item,
count += 1
```
As pointed out by Peter Wood, you can use `enumerate` to avoid the count variable:
```
listA=['a','b','c','d','e','f','g','h','i','j']
listB = enumerate(listA)
for item in listB:
if not item[0] % 3:
print
print item[1],
``` |
how to print 3x3 array in python? | 35,903,828 | 15 | 2016-03-09T22:19:05Z | 35,903,956 | 11 | 2016-03-09T22:28:10Z | [
"python"
] | I need to print a 3 x 3 array for a game called TicTackToe.py. I know we can print stuff from a list in a horizontal or vertical way by using
```
listA=['a','b','c','d','e','f','g','h','i','j']
# VERTICAL PRINTING
for item in listA:
print item
```
Output:
```
a
b
c
```
or
```
# HORIZONTAL PRINTING
for item in listA:
print item,
```
Output:
```
a b c d e f g h i j
```
How can I print a mix of both, e.g. printing a 3x3 box
like
```
a b c
d e f
g h i
``` | You can use the logic from the [grouper recipe](https://docs.python.org/2/library/itertools.html#recipes):
```
listA=['a','b','c','d','e','f','g','h','i','j']
print("\n".join(map(" ".join, zip(*[iter(listA)] * 3))))
a b c
d e f
g h i
```
If you don't want to lose elements use `izip_longest` with an empty string as a fillvalue:
```
listA=['a','b','c','d','e','f','g','h','i','j']
from itertools import izip_longest
print("\n".join(map(" ".join, izip_longest(*[iter(listA)] * 3,fillvalue=""))))
```
Which differs in that it keeps the j:
```
a b c
d e f
g h i
j
```
You can put the logic in a function and call it when you want to print, passing in whatever values you want.
```
from itertools import izip_longest
def print_matrix(m,n, fill):
print( "\n".join(map(" ".join, izip_longest(*[iter(m)] * n, fillvalue=fill))))
```
Or without itertools just chunk and join, you can also take a `sep` arg to use as the delimiter:
```
def print_matrix(m,n, sep):
print( "\n".join(map("{}".format(sep).join, (m[i:i+n] for i in range(0, len(m), n)))))
```
You just need to pass the list and the size for each row:
```
In [13]: print_matrix(listA, 3, " ")
a b c
d e f
g h i
j
In [14]: print_matrix(listA, 3, ",")
a,b,c
d,e,f
g,h,i
j
In [15]: print_matrix(listA, 4, ",")
a,b,c,d
e,f,g,h
i,j
In [16]: print_matrix(listA, 4, ";")
a;b;c;d
e;f;g;h
i;j
``` |
how to print 3x3 array in python? | 35,903,828 | 15 | 2016-03-09T22:19:05Z | 35,904,127 | 23 | 2016-03-09T22:39:48Z | [
"python"
] | I need to print a 3 x 3 array for a game called TicTackToe.py. I know we can print stuff from a list in a horizontal or vertical way by using
```
listA=['a','b','c','d','e','f','g','h','i','j']
# VERTICAL PRINTING
for item in listA:
print item
```
Output:
```
a
b
c
```
or
```
# HORIZONTAL PRINTING
for item in listA:
print item,
```
Output:
```
a b c d e f g h i j
```
How can I print a mix of both, e.g. printing a 3x3 box
like
```
a b c
d e f
g h i
``` | You can [**`enumerate`**](https://docs.python.org/2/library/functions.html#enumerate) the items, and print a newline only every third item:
```
for index, item in enumerate('abcdefghij', start=1):
print item,
if not index % 3:
print
```
Output:
```
a b c
d e f
g h i
j
```
`enumerate` starts counting from zero by default, so I set `start=1`.
As @arekolek comments, if you're using Python 3, or have imported the print function from the future for Python 2, you can specify the line ending all in one go, instead of the two steps above:
```
for index, item in enumerate('abcdefghij', start=1):
print(item, end=' ' if index % 3 else '\n')
``` |
Python - create an EXE that runs code as written, not as it was when compiled | 35,906,523 | 16 | 2016-03-10T02:30:10Z | 36,112,722 | 7 | 2016-03-20T10:40:41Z | [
"python",
"pygame",
"executable",
"launcher",
"dynamic-import"
] | I'm making a pygame program that is designed to be modular. I am building an exe with pygame2exe of the file main.py, which basically just imports the real main game and runs it. What I'm hoping for is a sort of launcher that will execute Python scripts from an EXE, rather than a single program containing all immutable files.
What is the best way to go about this? I've tried using imp to dynamically import all modules at runtime instead of implicitly importing them, but that seems to break object inheritance. | After some experiments I've found a solution.
1. Create a separate folder `source` in the main folder of the application. The source files will be placed here. Also place an `__init__.py` file in the folder. Let's name the main file `main_module.py`.
2. Add all of its contents as data files to the py2exe configuration `setup.py`. Now, after compiling the program, these files will be placed in the dist folder.
```
data_files += [('source', glob('source/*.py'),)]
setup(
data_files=data_files,
.... # other options
windows=[
{
"script": "launcher.py",
"icon_resources": [(0, "resources/favicon.ico")]
}
)
```
3. Make `launcher.py` - it's task is to import all system and required libraries like pygame, pyqt and so on. Then run you program:
```
import sys, time, os, hashlib, atexit # std modules
import PyQt5, ... # foreign libraries
sys.path.insert(0, 'source')
exec('import main_module')
```
4. Now `main_module.py` will be imported, if it imports your modules, they will be imported too in their places in hierarchy. For example head of the `main_module.py` can be like this:
```
import user_tweaks
from user_data import parser
```
These files `user_tweaks.py` and `user_data.py` should be located in `source` folder at appropriate paths relative to `main_module.py`.
You may change contents of `source` folder without recompilation program itself. Any time program runs it uses fresh contents of `source`.
As a result you have an application folder with:
* A separate launcher - simple .exe file
* All required modules
* Your application with all its modules. |
Python Iterate through list of list to make a new list in index sequence | 35,928,350 | 11 | 2016-03-10T22:10:23Z | 35,928,434 | 11 | 2016-03-10T22:15:52Z | [
"python"
] | How would you iterate through a list of lists, such as:
```
[[1,2,3,4], [5,6], [7,8,9]]
```
and construct a new list by grabbing the first item of each list, then the second, etc. So the above becomes this:
```
[1, 5, 7, 2, 6, 8, 3, 9, 4]
``` | You can use a list comprehension along with [`itertools.izip_longest`](https://docs.python.org/2/library/itertools.html#itertools.izip_longest) (or `zip_longest` in Python 3)
```
from itertools import izip_longest
a = [[1,2,3,4], [5,6], [7,8,9]]
[i for sublist in izip_longest(*a) for i in sublist if i is not None]
# [1, 5, 7, 2, 6, 8, 3, 9, 4]
``` |
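On Python 3, the same idea with `zip_longest`:

```python
from itertools import zip_longest

a = [[1, 2, 3, 4], [5, 6], [7, 8, 9]]
result = [i for sublist in zip_longest(*a) for i in sublist if i is not None]
print(result)  # [1, 5, 7, 2, 6, 8, 3, 9, 4]
```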
Error running basic tensorflow example | 35,953,210 | 16 | 2016-03-12T02:55:20Z | 35,963,479 | 50 | 2016-03-12T21:22:44Z | [
"python",
"tensorflow"
] | I have just reinstalled latest tensorflow on ubuntu:
```
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
[sudo] password for ubuntu:
The directory '/home/ubuntu/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/ubuntu/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting tensorflow==0.7.1 from https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
Downloading https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl (13.8MB)
100% |████████████████████████████████| 13.8MB 32kB/s
Requirement already up-to-date: six>=1.10.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: protobuf==3.0.0b2 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: wheel in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: numpy>=1.8.2 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf==3.0.0b2->tensorflow==0.7.1)
Installing collected packages: tensorflow
Found existing installation: tensorflow 0.7.1
Uninstalling tensorflow-0.7.1:
Successfully uninstalled tensorflow-0.7.1
Successfully installed tensorflow-0.7.1
```
When following the directions to test it fails with **cannot import name pywrap\_tensorflow**:
```
$ ipython
/git/tensorflow/tensorflow/__init__.py in <module>()
21 from __future__ import print_function
22
---> 23 from tensorflow.python import *
/git/tensorflow/tensorflow/python/__init__.py in <module>()
43 _default_dlopen_flags = sys.getdlopenflags()
44 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL)
---> 45 from tensorflow.python import pywrap_tensorflow
46 sys.setdlopenflags(_default_dlopen_flags)
47
ImportError: cannot import name pywrap_tensorflow
```
Is there an additional change needed to my python or ubuntu/bash environment? | From the path in your stack trace (`/git/tensorflow/tensorflow/…`), it looks like your Python path may be loading the tensorflow libraries from the source directory, rather than the version that you have installed. As a result, it is unable to find the (compiled) `pywrap_tensorflow` library, which is installed in a different directory.
A common solution is to `cd` out of the `/git/tensorflow` directory before starting `python` or `ipython`. |
Subtract two lists in Python without using classes | 35,953,685 | 2 | 2016-03-12T04:17:32Z | 35,953,703 | 11 | 2016-03-12T04:20:23Z | [
"python",
"list",
"difference"
] | I have two numeric lists, **a**, **b** that I am trying to subtract like this; **b - a**.
I want this to be easy for a beginner to understand so I don't want to import classes or libraries.
**This is what I have tried, and it works:**
```
a = [420, 660, 730, 735]
b = [450, 675, 770, 930]
i = 0
j = len(a)
difference = []
while i < j:
difference.append(b[i] - a[i])
i += 1
print (difference)
>>[30, 15, 40, 195] **the correct result**
```
However, there must be a simpler way of doing this without importing classes or libraries that I am missing. | A simple way to show this would be:
```
a = [420, 660, 730, 735]
b = [450, 675, 770, 930]
print([v2 - v1 for v1, v2 in zip(a, b)])
```
`zip` will create a tuple between each of the elements in your list. So if you run zip alone you will have this:
```
zip(a, b)
[(420, 450), (660, 675), (730, 770), (735, 930)]
```
Then, to further analyze what is happening in the answer I provided, what you are doing is iterating over each element in your list, and then specifying that `v1` and `v2` are each item in your tuple. Then the `v2 - v1` is pretty much doing your math operation. And all of this is wrapped inside what is called a list comprehension.
If you are still convinced that you don't want to use zip at all, and your example is using two equal-length lists, then what I suggest is to drop the while loop and use a for loop instead. Your solution will be very similar to what you already have:
```
n = []
for i, v in enumerate(a):
n.append(b[i] - v)
print(n)
```
So, you have to create a new list that will hold your new data. Use `enumerate` so you get your index and value through each iteration, and append your math operation to your new list. |
Simple Python String (Backward) Slicing | 35,956,304 | 5 | 2016-03-12T10:11:27Z | 35,956,419 | 7 | 2016-03-12T10:22:42Z | [
"python",
"python-3.x",
"slice"
] | Yeah I know there are a lot of similar questions up there. But I just cannot find what I was looking for.
My confusion is about the backward slicing.
```
my_jumble = ['jumbly', 'wumbly', 'number', 5]
print(my_jumble[:1:-1])
```
Now I have found that the result will be
```
[5, 'number']
```
So I thought that maybe I will test it out by changing the ends in that string slicing.
```
print(my_jumble[:2:-1])
```
I was really sure that Python would give me something like
```
[5, 'number', 'wumbly']
```
Instead it gave me this which made me totally lost...
```
[5]
```
Can someone explain what is going on here? I am new to Python and find this very confusing. Thanks for any help. | I think one of the easiest ways to understand what is going on in the code is by understanding that backward slicing *reverses the way you index* your list (*visually*, it is *like* reversing your list) *before* it gets sliced, **but** the indexes of the elements in the list themselves *do not* change.
Thus when you have a list like this:
```
['jumbly', 'wumbly', 'number', 5]
0 1 2 3 #<-- index
```
by making it backward reading (adding `-1` as the third indexer), you make it *looks* like this (because it is now indexing from the last to the first, instead of from the first to the last):
```
[5, 'number', 'wumbly', 'jumbly']
3 2 1 0 #<-- index
```
and then when you slice from the "beginning" to one (`:1`), you get everything from the "beginning" (now the "beginning" is `3`) and stop when seeing `1`:
```
[5, 'number', 'wumbly', 'jumbly']
3 2 1 0 #<-- index
^ ^ x
grab! grab! nope!
```
Thus you got your return:
```
[5, 'number']
```
The same principle applies when you backward slice with `[:2:-1]`:
```
[5, 'number', 'wumbly', 'jumbly']
3 2 1 0 #<-- index
^ x
grab! nope!
```
Thus you got your result:
```
[5]
```
Now, using that principle, you know what to put as the second indexer if you want to return what you want: **zero!** --> `[:0:-1]`:
```
[5, 'number', 'wumbly', 'jumbly']
3 2 1 0 #<-- index
^ ^ ^ x
grab! grab! grab! nope!
```
Then, you will get the result that you want:
```
[5, 'number', 'wumbly']
``` |
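A quick runnable check of all three slices from the discussion:

```python
my_jumble = ['jumbly', 'wumbly', 'number', 5]
print(my_jumble[:1:-1])  # [5, 'number']
print(my_jumble[:2:-1])  # [5]
print(my_jumble[:0:-1])  # [5, 'number', 'wumbly']
```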
Converting an array to 0s and 1s | 35,964,936 | 3 | 2016-03-13T00:09:18Z | 35,965,007 | 7 | 2016-03-13T00:18:36Z | [
"python",
"multidimensional-array"
] | Suppose I have an array `A = [13, 15, 17]`. I want to create a new array `B` such that all entries apart from its 13th, 15th and 17th entries are `0`, and each of these three are `1`'s. How can I do this? | Use a list comprehension:
```
B = [int(i+1 in A) for i in range(max(A))]
```
For each number in the range from `0` to the highest number in `A`, we take `int(i+1 in A)`. `i+1 in A` will be a boolean value. If that number is in `A`, the result will be `True`. Otherwise it will be `False`. Since `bool` inherits from `int`, we can easily convert it to a normal integer with `int()`. |
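A small runnable check with the question's data (note the result has `max(A)` entries, one per position):

```python
A = [13, 15, 17]
B = [int(i + 1 in A) for i in range(max(A))]

print(len(B))               # 17
print(B[12], B[14], B[16])  # 1 1 1 -- the 13th, 15th and 17th entries
print(sum(B))               # 3  -- everything else is 0
```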
Removing duplicates from list of lists by using list comprehension | 35,965,474 | 2 | 2016-03-13T01:38:13Z | 35,965,490 | 7 | 2016-03-13T01:40:16Z | [
"python",
"list-comprehension"
] | I was curious if you could remove duplicates from list of lists and return uniques as a list. I was trying this:
```
def do_list( lists ):
res = [ [ one for one in temp if one not in res ] for temp in lists ]
return res
```
So for example, if:
```
lists = [ [ "a","b","c" ],[ "d","a" ],[ "c","a","f" ] ]
```
the result should be:
```
[ "a","b","c","d","f" ]
```
But it gives me an error that I reference variable `res` before assignment. | You could do this:
```
set(itertools.chain.from_iterable(lists))
```
`set` will remove all duplicates, inside of the `set` is just flattening your list(s) to a single list. |
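A runnable sketch with the question's sample data — keep in mind that a `set` is unordered, so sort it if you need a stable list:

```python
from itertools import chain

lists = [["a", "b", "c"], ["d", "a"], ["c", "a", "f"]]
unique = set(chain.from_iterable(lists))

print(sorted(unique))  # ['a', 'b', 'c', 'd', 'f']
```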
Python 'with' not deleting object | 36,002,705 | 7 | 2016-03-15T04:28:25Z | 36,002,771 | 16 | 2016-03-15T04:35:39Z | [
"python",
"with-statement"
] | Trying to properly delete a Python object. I'm creating an object and then supposedly deleting it with a 'with' statement. But when I do a print out after the 'with' statement is closed.... the object is still there:
```
class Things(object):
    def __init__(self, clothes, food, money):
        self.clothes = clothes
        self.food = food
        self.money = money

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print('object deleted')

with Things('socks','food',12) as stuff:
    greg = stuff.clothes
    print(greg)

print(stuff.clothes)
```
returns :
```
socks
object deleted
socks
``` | Python's `with` statement is not about deleting objects - it's about resource management. The `__enter__` and `__exit__` methods are for you to supply resource initialization and destruction code i.e. you may choose to delete something there, but there is no implicit deletion of objects. Have a read of this [`with` article](http://effbot.org/zone/python-with-statement.htm) to get a better understanding of how to use it.
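A minimal illustration of that point, using a made-up `Demo` class: `__exit__` runs at the end of the block, but the object itself survives:

```python
class Demo:
    def __enter__(self):
        self.closed = False
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # release resources here; nothing deletes the object itself
        self.closed = True

with Demo() as d:
    pass

print(d.closed)  # True -- __exit__ ran, yet d is still alive and usable
```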
The object stays in scope after the `with` statement. You could call `del` on it if that's what you want. Since it's in scope you can query it after its underlying resources have been closed. Consider this pseudo code:
```
class DatabaseConnection(object):
    def __init__(self, connection):
        self.connection = connection
        self.error = None

    def __enter__(self):
        self.connection.connect()
        return self  # return the object so `as db` below is bound to it

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.connection.disconnect()

    def execute(self, query):
        try:
            self.connection.execute(query)
        except Exception as e:
            self.error = e

with DatabaseConnection(connection) as db:
    db.execute('SELECT * FROM DB')

if db.error:
    print(db.error)
del db
```
We wouldn't want to keep a database connection hanging around longer than we need (another thread/client might need it), so instead we allow the resource to be freed (implicitly at the end of the `with` block), but then we can continue to query the object after that. I've then added an explicit `del` to tell the runtime that the code is finished with the variable. |
Json.dump failing with 'must be unicode, not str' TypeError | 36,003,023 | 8 | 2016-03-15T05:01:05Z | 36,003,774 | 9 | 2016-03-15T06:04:52Z | [
"python",
"json",
"python-2.7",
"unicode",
"encoding"
] | I have a json file which happens to have a multitude of Chinese and Japanese (and other language) characters. I'm loading it into my python 2.7 script using `io.open` as follows:
```
with io.open('multiIdName.json', encoding="utf-8") as json_data:
    cards = json.load(json_data)
```
I add a new property to the json, all good. Then I attempt to write it back out to another file:
```
with io.open("testJson.json",'w',encoding="utf-8") as outfile:
    json.dump(cards, outfile, ensure_ascii=False)
```
That's when I get the error `TypeError: must be unicode, not str`
I tried writing the outfile as a binary (`with io.open("testJson.json",'wb') as outfile:`), but I end up with stuff like this:
```
{"multiverseid": 262906, "name": "\u00e6\u00b8\u00b8\u00e9\u009a\u00bc\u00e7\u008b\u00ae\u00e9\u00b9\u00ab", "language": "Chinese Simplified"}
```
I thought opening and writing it in the same encoding would be enough, as well as the ensure\_ascii flag, but clearly not. I just want to preserve the characters that existed in the file before I run my script, without them turning into \u's. | Can you try the following?
```
with io.open("testJson.json",'w',encoding="utf-8") as outfile:
    outfile.write(unicode(json.dumps(cards, ensure_ascii=False)))
``` |
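For what it's worth, this dance is only needed on Python 2, where `json.dumps(..., ensure_ascii=False)` can return a byte `str`. On Python 3, `json.dump` writes `str` natively, so the plain call just works — a sketch using an in-memory buffer in place of the file (the name string below is the question's mojibake decoded as UTF-8):

```python
import io
import json

cards = {"multiverseid": 262906, "name": "游隼狮鹫", "language": "Chinese Simplified"}

buf = io.StringIO()  # stands in for io.open("testJson.json", 'w', encoding="utf-8")
json.dump(cards, buf, ensure_ascii=False)

print(buf.getvalue())  # the characters survive, no \u escapes
```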
Sum of multiple list of lists index wise | 36,003,967 | 6 | 2016-03-15T06:18:29Z | 36,004,028 | 7 | 2016-03-15T06:22:25Z | [
"python",
"list"
] | Consider I have a list of lists as:
```
[[5, 10, 30, 24, 100], [1, 9, 25, 49, 81]]
[[15, 10, 10, 16, 70], [10, 1, 25, 11, 19]]
[[34, 20, 10, 10, 30], [9, 20, 25, 30, 80]]
```
Now I want the index-wise sums, first across all the first sublists and then across all the second sublists (`5+15+34=54`, `10+10+20=40`, and so on), giving:
```
[54, 40, 50, 50, 200], [20, 30, 75, 90, 180]
```
I tried:
```
for res in results:
    print [sum(j) for j in zip(*res)]
```
Here `results` is the list of lists.
But it gives sum of each list item as:
```
[6,19,55,73,181]
[25,11,35,27,89]
[43,40,35,40,110]
``` | You are almost correct: you need to unpack `results` and zip it as well, so that the corresponding inner lists are summed together.
```
>>> data = [[[5, 10, 30, 24, 100], [1, 9, 25, 49, 81]],
...             [[15, 10, 10, 16, 70], [10, 1, 25, 11, 19]],
...             [[34, 20, 10, 10, 30], [9, 20, 25, 30, 80]]]
>>> for res in zip(*data):
...     print [sum(j) for j in zip(*res)]
...
[54, 40, 50, 50, 200]
[20, 30, 75, 90, 180]
```
You can simply write this with list comprehension as
```
>>> [[sum(item) for item in zip(*items)] for items in zip(*data)]
[[54, 40, 50, 50, 200], [20, 30, 75, 90, 180]]
``` |
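The same index-wise sum can also be spelled with `map`, which reads nicely here — a small runnable check with the question's data:

```python
data = [
    [[5, 10, 30, 24, 100], [1, 9, 25, 49, 81]],
    [[15, 10, 10, 16, 70], [10, 1, 25, 11, 19]],
    [[34, 20, 10, 10, 30], [9, 20, 25, 30, 80]],
]

result = [list(map(sum, zip(*rows))) for rows in zip(*data)]
print(result)  # [[54, 40, 50, 50, 200], [20, 30, 75, 90, 180]]
```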
How does sympy calculate pi? | 36,004,680 | 4 | 2016-03-15T07:08:57Z | 36,018,371 | 7 | 2016-03-15T17:40:29Z | [
"python",
"sympy",
"pi"
] | What is the numerical background of sympy to calculate pi?
I know that SymPy uses mpmath in the background, which makes it possible to perform computations using arbitrary-precision arithmetic. That way, some special constants, like e, pi, oo, are treated as symbols and can be evaluated with arbitrary precision.
But how does SymPy determine an arbitrary number of decimal places? How does SymPy do it numerically? | mpmath computes pi using the Chudnovsky formula (<https://en.wikipedia.org/wiki/Chudnovsky_algorithm>).
Pi is approximated by an infinite series whose terms decrease like (1/151931373056000)^n, so each term adds roughly 14.18 digits. This makes it easy to select a number of terms *N* so that a desired accuracy is achieved.
The actual computation is done with integer arithmetic: that is, for a precision of *prec* bits, an approximation of pi \* 2^(*prec*) is computed.
Here is the code, extracted from mpmath/libmp/libelefun.py (<https://github.com/fredrik-johansson/mpmath/blob/master/mpmath/libmp/libelefun.py>):
```
# Constants in Chudnovsky's series
CHUD_A = MPZ(13591409)
CHUD_B = MPZ(545140134)
CHUD_C = MPZ(640320)
CHUD_D = MPZ(12)

def bs_chudnovsky(a, b, level, verbose):
    """
    Computes the sum from a to b of the series in the Chudnovsky
    formula. Returns g, p, q where p/q is the sum as an exact
    fraction and g is a temporary value used to save work
    for recursive calls.
    """
    if b-a == 1:
        g = MPZ((6*b-5)*(2*b-1)*(6*b-1))
        p = b**3 * CHUD_C**3 // 24
        q = (-1)**b * g * (CHUD_A+CHUD_B*b)
    else:
        if verbose and level < 4:
            print(" binary splitting", a, b)
        mid = (a+b)//2
        g1, p1, q1 = bs_chudnovsky(a, mid, level+1, verbose)
        g2, p2, q2 = bs_chudnovsky(mid, b, level+1, verbose)
        p = p1*p2
        g = g1*g2
        q = q1*p2 + q2*g1
    return g, p, q

@constant_memo
def pi_fixed(prec, verbose=False, verbose_base=None):
    """
    Compute floor(pi * 2**prec) as a big integer.

    This is done using Chudnovsky's series (see comments in
    libelefun.py for details).
    """
    # The Chudnovsky series gives 14.18 digits per term
    N = int(prec/3.3219280948/14.181647462 + 2)
    if verbose:
        print("binary splitting with N =", N)
    g, p, q = bs_chudnovsky(0, N, 0, verbose)
    sqrtC = isqrt_fast(CHUD_C<<(2*prec))
    v = p*CHUD_C*sqrtC//((q+CHUD_A*p)*CHUD_D)
    return v
```
This is just ordinary Python code, except that it depends on an extra function `isqrt_fast()` which computes the square root of a big integer. MPZ is the big integer type used: `gmpy.mpz` if it is available, and Python's builtin long type otherwise. The `@constant_memo` decorator caches the computed value (pi is often needed repeatedly in a numerical calculation, so it makes sense to store it).
You can see that it computes pi by doing a radix conversion as follows:
```
>>> pi_fixed(53) * 10**16 // 2**53
mpz(31415926535897932)
```
The crucial trick to make the Chudnovsky formula fast is called *binary splitting*. The terms in the infinite series satisfy a recurrence relation with small coefficients (the recurrence equation can be seen in the b-a == 1 case in the bs\_chudnovsky function). Instead of computing the terms sequentially, the sum is repeatedly split in two halves; the two halves are evaluated recursively, and the results are combined. In the end, one has two large integers *p* and *q* such that the sum of the first *N* terms of the series is exactly equal to *p* / *q*. Note that there is no rounding error in the binary splitting process, which is a very nice feature of the algorithm; the only roundings occur when computing the square root and doing the division at the very end.
Most fast programs that compute pi to high precision use a very similar strategy, though there are some complicated tricks that can speed up the process a bit further.
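To see the series itself in action without the binary-splitting machinery, here is a small self-contained sketch (not mpmath code): it sums the Chudnovsky terms exactly with `fractions.Fraction`, so the only roundings are the final square root and division, mirroring the structure described above:

```python
from fractions import Fraction
from math import factorial, isqrt  # isqrt needs Python 3.8+

def chudnovsky_pi(digits):
    """Return floor(pi * 10**digits), summing the series term by term."""
    terms = digits // 14 + 2  # each term adds ~14.18 digits
    s = Fraction(0)
    for k in range(terms):
        num = (-1) ** k * factorial(6 * k) * (13591409 + 545140134 * k)
        den = factorial(3 * k) * factorial(k) ** 3 * 640320 ** (3 * k)
        s += Fraction(num, den)
    # pi = 426880 * sqrt(10005) / s; round only in the sqrt and the division
    sqrt_10005 = isqrt(10005 * 10 ** (2 * digits))  # floor(sqrt(10005) * 10**digits)
    return 426880 * sqrt_10005 * s.denominator // s.numerator

print(chudnovsky_pi(30))  # 31415926535897932384626433832... (last digit may be off by one)
```

Summing term by term like this recomputes the factorials and is O(N^2); binary splitting is exactly what removes that cost in the real implementation.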
(Note: I'm the author of the code.) |
Edit the value of every Nth item in a list | 36,011,478 | 13 | 2016-03-15T12:32:52Z | 36,011,574 | 15 | 2016-03-15T12:36:51Z | [
"python",
"list",
"python-3.x",
"edit",
"value"
] | What's the most pythonic way of performing an arithmetic operation on every nth value in a list? For example, if I start with list1:
```
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
I would like to add 1 to every second item, which would give:
```
list2 = [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
```
I've tried:
```
list1[::2]+1
```
and also:
```
for x in list1:
    x=2
    list2 = list1[::x] + 1
``` | You could use `slicing` with a list comprehension as follows:
```
In [26]: list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
In [27]: list1[1::2] = [x+1 for x in list1[1::2]]
In [28]: list1
Out[28]: [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
``` |
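If you need this for several different strides, the same slice-assignment trick generalizes into a tiny (hypothetical) helper:

```python
def bump_every_nth(lst, n, delta=1):
    """Add `delta` to every n-th item of `lst` in place (hypothetical helper)."""
    lst[n - 1::n] = [x + delta for x in lst[n - 1::n]]
    return lst

print(bump_every_nth(list(range(1, 11)), 2))  # [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
```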
Edit the value of every Nth item in a list | 36,011,478 | 13 | 2016-03-15T12:32:52Z | 36,011,604 | 8 | 2016-03-15T12:38:54Z | [
"python",
"list",
"python-3.x",
"edit",
"value"
] | What's the most pythonic way of performing an arithmetic operation on every nth value in a list? For example, if I start with list1:
```
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
I would like to add 1 to every second item, which would give:
```
list2 = [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
```
I've tried:
```
list1[::2]+1
```
and also:
```
for x in list1:
    x=2
    list2 = list1[::x] + 1
``` | [`numpy`](http://www.numpy.org/) allows you to use `+=` operation with slices too:
```
In [15]: import numpy as np
In [16]: l = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [17]: l[1::2] += 1
In [18]: l
Out[18]: array([ 1, 3, 3, 5, 5, 7, 7, 9, 9, 11])
``` |
comparing two lists and finding indices of changes | 36,023,728 | 5 | 2016-03-15T22:49:52Z | 36,023,789 | 7 | 2016-03-15T22:55:25Z | [
"python",
"list"
] | I'm trying to compare two lists and find the position and changed character at that position. For example, these are two lists:
```
list1 = ['I', 'C', 'A', 'N', 'R', 'U', 'N']
list2 = ['I', 'K', 'A', 'N', 'R', 'U', 'T']
```
I want to be able to output the position and change for the differences in the two lists. As you can see, a letter can be repeated multiple times at a different index position. This is the code that I have tried, but I can't seem to print out the second location accurately.
```
for indexing in range(0, len(list1)):
    if list1[indexing] != list2[indexing]:
        dontuseindex = indexing
        poschange = indexing + 1
        changecharacter = list2[indexing]

for indexingagain in range(dontuseindex + 1, len(list1)):
    if list1[indexingagain] != list2[indexingagain]:
        secondposchange = indexingagain + 1
        secondchangecharacter = list2[indexingagain]
```
Is there a better way to solve this problem or any suggestions to the code I have?
My expected output would be:
```
2 K
7 T
``` | ```
for index, (first, second) in enumerate(zip(list1, list2)):
    if first != second:
        print(index, second)
```
Output:
```
1 K
6 T
```
If you want the output you gave, we need to count from `1` instead of the usual `0`:
```
for index, (first, second) in enumerate(zip(list1, list2), start=1):
``` |
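If you'd rather collect the pairs than print them, the same idea fits in one list comprehension:

```python
list1 = ['I', 'C', 'A', 'N', 'R', 'U', 'N']
list2 = ['I', 'K', 'A', 'N', 'R', 'U', 'T']

diffs = [(i, b) for i, (a, b) in enumerate(zip(list1, list2), start=1) if a != b]
print(diffs)  # [(2, 'K'), (7, 'T')]
```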
How to use a Scala class inside Pyspark | 36,023,860 | 4 | 2016-03-15T23:01:09Z | 36,024,707 | 7 | 2016-03-16T00:23:03Z | [
"python",
"scala",
"apache-spark",
"pyspark",
"spark-dataframe"
] | I've been searching for a while if there is any way to use a `Scala` class in `Pyspark`, and I haven't found any documentation nor guide about this subject.
Let's say I create a simple class in `Scala` that uses some libraries of `apache-spark`, something like:
```
class SimpleClass(sqlContext: SQLContext, df: DataFrame, column: String) {
  def exe(): DataFrame = {
    import sqlContext.implicits._
    df.select(col(column))
  }
}
```
* Is there any possible way to use this class in `Pyspark`?
* Is it too tough?
* Do I have to create a `.py` file?
* Is there any guide that shows how to do that?
By the way I also looked at the `spark` code and I felt a bit lost, and I was incapable of replicating their functionality for my own purpose. | Yes, it is possible, although it can be far from trivial. Typically you want a Java-friendly wrapper so you don't have to deal with Scala features which cannot be easily expressed using plain Java and as a result don't play well with the Py4J gateway.
Assuming your class is in the package `com.example` and you have a Python `DataFrame` called `df`:
```
df = ... # Python DataFrame
```
you'll have to:
1. Build a jar using [your favorite build tool](http://www.scala-sbt.org/).
2. Include it in the driver classpath for example using `--driver-class-path` argument for PySpark shell / `spark-submit`. Depending on the exact code you may have to pass it using `--jars` as well
3. Extract JVM instance from a Python `SparkContext` instance:
```
jvm = sc._jvm
```
4. Extract Scala `SQLContext` from a `SQLContext` instance:
```
ssqlContext = sqlContext._ssql_ctx
```
5. Extract Java `DataFrame` from the `df`:
```
jdf = df._jdf
```
6. Create new instance of `SimpleClass`:
```
simpleObject = jvm.com.example.SimpleClass(ssqlContext, jdf, "v")
```
7. Call the `exe` method and wrap the result in a Python `DataFrame`:
```
from pyspark.sql import DataFrame
DataFrame(simpleObject.exe(), ssqlContext)
```
The result should be a valid PySpark `DataFrame`. You can of course combine all the steps into a single call.
**Important**: This approach is possible only if Python code is executed solely on the driver. It cannot be used inside Python action or transformation. See [How to use Java/Scala function from an action or a transformation?](http://stackoverflow.com/q/31684842/1560062) for details. |
Have extra while loop conditions ... based on a condition? | 36,025,026 | 6 | 2016-03-16T01:00:32Z | 36,025,181 | 10 | 2016-03-16T01:18:08Z | [
"python",
"while-loop"
] | The variable `a` can take any number of values. The value of `a` is the number of extra pre-defined conditions to use in the while loop.
This can be done with multiple `elif` statements but is there a cleaner way of doing this?
```
if a == 0:
    while condition_1:
        ...
elif a == 1:
    while condition_1 or condition_2:
        ...
elif a == 2:
    while condition_1 or condition_2 or condition_3:
        ...
``` | A general way of doing what other languages do with a `switch` statement is to create a dictionary containing a function for each of your cases:
```
conds = {
    0: lambda: condition_1,
    1: lambda: condition_1 or condition_2,
    2: lambda: condition_1 or condition_2 or condition_3
}
```
Then:
```
while conds[a]():
    # do stuff
```
By using lambdas (or named functions if your conditions are particularly complex) the appropriate condition can be evaluated each time through the loop, instead of once when the dictionary is defined.
In this simple case where your `a` has sequential integer values starting at 0, you could use a list and save a bit of typing. To further simplify, you could define each of your conditions in terms of the previous one, since you're just adding a condition each time:
```
conds = [
    lambda: condition_1,
    lambda: conds[0]() or condition_2,
    lambda: conds[1]() or condition_3
]
```
Or, as suggested by Julien in a comment:
```
conds = [
    lambda: condition_1,
    lambda: condition_2,
    lambda: condition_3
]
while any(cond() for cond in conds[:a+1]):
    # do stuff
``` |
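A concrete runnable sketch, with made-up counter-based conditions standing in for `condition_1..3`, just to show the dispatch working:

```python
def run(a):
    x = 0
    # each lambda re-reads x on every loop iteration
    conds = [lambda: x < 2, lambda: x < 5, lambda: x < 8]
    while any(cond() for cond in conds[:a + 1]):
        x += 1
    return x

print(run(0), run(1), run(2))  # 2 5 8
```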
What is the purpose of compares in indices in Python? | 36,035,248 | 5 | 2016-03-16T12:06:45Z | 36,035,278 | 7 | 2016-03-16T12:08:43Z | [
"python",
"python-3.x",
"compare",
"slice"
] | I encountered the following:
```
r = random.randint(1,6)
C = "o "
s = '-----\n|' + C[r<1] + ' ' + C[r<3] + '|\n|' + C[r<5]
print(s + C[r&1] + s[::-1])
```
When executed in IDLE, this outputs an ASCII die with a random value.
How does it work, and more specifically, what do the compare symbols (`<` and `&`) accomplish inside the indices? | Someone is code-golfing here, and using hacky tricks to minimise the amount of code used.
* `<` is a regular comparison operator; it returns `True` or `False` based on the two operands. The Python `bool` type is a subclass of `int` and `True` is `1`, `False` is `0` when interpreted as integers. As such `C[r<1]` either picks `C[0]` or `C[1]`.
* `&` is a *bit-wise* operator, not a comparison operator; `& 1` is masking the number to the last bit, effectively testing if a number is odd or even (the last bit is set or not). So if `r` is odd, `C[1]` is used, otherwise `C[0]` is.
Breaking this down:
* `C` is a string with the `o` and space characters
* `C[r<1]` picks either `o` or a space based on whether `r` is smaller than 1. It never is (`random.randint(1,6)` ensures this), so that is *always* an `o`. This appears to be a bug or oversight in the code.
* `C[r<3]` picks a space for 1 and 2, an `o` otherwise.
* `C[r<5]` picks an `o` for 5 or 6, a space otherwise.
* `C[r&1]` picks an `o` for 2, 4 and 6, a space otherwise.
In all, it prints `r` *plus one* as a die. `r = 1` gives you two pips, while `r = 6` results in *seven* pips, perhaps as a stylised one?
Fixing the code for that requires incrementing all `r` tests and inverting the odd/even test:
```
s = '-----\n|' + C[r<2] + ' ' + C[r<4] + '|\n|' + C[r<6]
print(s + C[1-r&1] + s[::-1])
```
Demo (wrapping the string building in a function):
```
>>> import random
>>> def dice(r, C='o '):
...     s = '-----\n|' + C[r<2] + ' ' + C[r<4] + '|\n|' + C[r<6]
...     print(s + C[1-r&1] + s[::-1])
...
>>> for i in range(1, 7):
...     dice(i)
...
-----
| |
| o |
| |
-----
-----
|o |
| |
| o|
-----
-----
|o |
| o |
| o|
-----
-----
|o o|
| |
|o o|
-----
-----
|o o|
| o |
|o o|
-----
-----
|o o|
|o o|
|o o|
-----
``` |
Automatically add newline on save in PyCharm? | 36,043,061 | 5 | 2016-03-16T17:40:31Z | 37,260,222 | 9 | 2016-05-16T17:56:24Z | [
"python",
"newline",
"pycharm"
] | PyCharm 5 complains of a missing newline at the end of the file:
[](http://i.stack.imgur.com/G2cjA.png)
How do I tell PyCharm to add the newline (if missing) automatically whenever I save a file? | This can be enabled in the `Editor > General` settings:
From the File menu open the `Settings` and select `Editor > General`. Under the `Other` section check the `Ensure line feed at file end on Save` setting. |
Is there a need to close files that have no reference to them? | 36,046,167 | 47 | 2016-03-16T20:20:26Z | 36,046,243 | 55 | 2016-03-16T20:25:35Z | [
"python",
"python-2.7",
"file"
] | As a complete beginner to programming, I am trying to understand the basic concepts of opening and closing files. One exercise I am doing is creating a script that allows me to copy the contents from one file to another.
```
in_file = open(from_file)
indata = in_file.read()
out_file = open(to_file, 'w')
out_file.write(indata)
out_file.close()
in_file.close()
```
I have tried to shorten this code and came up with this:
```
indata = open(from_file).read()
open(to_file, 'w').write(indata)
```
This works and looks a bit more efficient to me. However, this is also where I get confused. I think I left out the references to the opened files; there was no need for the in\_file and out\_file variables. However, does this leave me with two files that are open, but have nothing referring to them? How do I close these, or is there no need to?
Any help that sheds some light on this topic is much appreciated. | The pythonic way to deal with this is to use the [`with` context manager](https://docs.python.org/2.7/reference/compound_stmts.html#the-with-statement):
```
with open(from_file) as in_file, open(to_file, 'w') as out_file:
    indata = in_file.read()
    out_file.write(indata)
```
Used with files like this, `with` will ensure all the necessary cleanup is done for you, even if `read()` or `write()` throw errors. |
Is there a need to close files that have no reference to them? | 36,046,167 | 47 | 2016-03-16T20:20:26Z | 36,046,257 | 7 | 2016-03-16T20:26:07Z | [
"python",
"python-2.7",
"file"
] | As a complete beginner to programming, I am trying to understand the basic concepts of opening and closing files. One exercise I am doing is creating a script that allows me to copy the contents from one file to another.
```
in_file = open(from_file)
indata = in_file.read()
out_file = open(to_file, 'w')
out_file.write(indata)
out_file.close()
in_file.close()
```
I have tried to shorten this code and came up with this:
```
indata = open(from_file).read()
open(to_file, 'w').write(indata)
```
This works and looks a bit more efficient to me. However, this is also where I get confused. I think I left out the references to the opened files; there was no need for the in\_file and out\_file variables. However, does this leave me with two files that are open, but have nothing referring to them? How do I close these, or is there no need to?
Any help that sheds some light on this topic is much appreciated. | It is good practice to use the **`with`** keyword when dealing with file objects. This has the advantage that the file is properly closed after its suite finishes, even if an exception is raised on the way. It is also much shorter than writing equivalent try-finally blocks:
```
>>> with open('workfile', 'r') as f:
...     read_data = f.read()
>>> f.closed
True
``` |
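As an aside: when the end goal is simply copying one file to another, the standard library's `shutil` does the open/read/write/close dance for you — a small self-contained sketch using a temporary directory:

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    from_file = os.path.join(tmp, 'in.txt')
    to_file = os.path.join(tmp, 'out.txt')
    with open(from_file, 'w') as f:
        f.write('hello')

    shutil.copyfile(from_file, to_file)  # opens and closes both files for us

    with open(to_file) as f:
        copied = f.read()

print(copied)  # hello
```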
Is there a need to close files that have no reference to them? | 36,046,167 | 47 | 2016-03-16T20:20:26Z | 36,047,108 | 33 | 2016-03-16T21:16:01Z | [
"python",
"python-2.7",
"file"
] | As a complete beginner to programming, I am trying to understand the basic concepts of opening and closing files. One exercise I am doing is creating a script that allows me to copy the contents from one file to another.
```
in_file = open(from_file)
indata = in_file.read()
out_file = open(to_file, 'w')
out_file.write(indata)
out_file.close()
in_file.close()
```
I have tried to shorten this code and came up with this:
```
indata = open(from_file).read()
open(to_file, 'w').write(indata)
```
This works and looks a bit more efficient to me. However, this is also where I get confused. I think I left out the references to the opened files; there was no need for the in\_file and out\_file variables. However, does this leave me with two files that are open, but have nothing referring to them? How do I close these, or is there no need to?
Any help that sheds some light on this topic is much appreciated. | The default Python interpreter, CPython, uses reference counting. This means that once there are no references to an object, it gets garbage collected, i.e. cleaned up.
In your case, doing
```
open(to_file, 'w').write(indata)
```
will create a file object for `to_file`, but not assign it to a name - this means there is no reference to it. You cannot possibly manipulate the object after this line.
CPython will detect this, and clean up the object after it has been used. In the case of a file, this means closing it automatically. In principle, this is fine, and your program won't leak memory.
The "problem" is this mechanism is an implementation detail of the CPython interpreter. The language standard *explicitly* gives no guarantee for it! If you are using an alternate interpreter such as pypy, automatic closing of files may be delayed *indefinitely*. This includes other implicit actions such as flushing writes on close.
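A tiny illustration of that reference-counting behaviour — the `True` here is guaranteed on CPython specifically, while other interpreters may collect later:

```python
import weakref

class Resource:
    pass

r = Resource()
alive = weakref.ref(r)  # observe the object without keeping it alive
del r                   # last reference gone -> CPython reclaims it immediately

print(alive() is None)  # True on CPython
```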
This problem also applies to other resources, e.g. network sockets. It is good practice to *always* explicitly handle such external resources. Since python 2.6, the `with` statement makes this elegant:
```
with open(to_file, 'w') as out_file:
    out_file.write(in_data)
```
---
TLDR: It works, but please don't do it. |
Is there a need to close files that have no reference to them? | 36,046,167 | 47 | 2016-03-16T20:20:26Z | 36,062,180 | 8 | 2016-03-17T13:36:45Z | [
"python",
"python-2.7",
"file"
] | As a complete beginner to programming, I am trying to understand the basic concepts of opening and closing files. One exercise I am doing is creating a script that allows me to copy the contents from one file to another.
```
in_file = open(from_file)
indata = in_file.read()
out_file = open(to_file, 'w')
out_file.write(indata)
out_file.close()
in_file.close()
```
I have tried to shorten this code and came up with this:
```
indata = open(from_file).read()
open(to_file, 'w').write(indata)
```
This works and looks a bit more efficient to me. However, this is also where I get confused. I think I left out the references to the opened files; there was no need for the in\_file and out\_file variables. However, does this leave me with two files that are open, but have nothing referring to them? How do I close these, or is there no need to?
Any help that sheds some light on this topic is much appreciated. | The answers so far are absolutely correct *when working in python*. You should use the `with open()` context manager. It's a great built-in feature, and helps shortcut a common programming task (opening and closing a file).
However, since you are a beginner and won't have access to [context managers](http://stackoverflow.com/a/36046243/2348587)
and [automated reference counting](http://stackoverflow.com/a/36047108/2348587) for the entirety of your career, I'll address the question from a *general programming* stance.
The first version of your code is perfectly fine. You open a file, save the reference, read from the file, then close it. This is how a lot of code is written when the language doesn't provide a shortcut for the task. The only thing I would improve is to move `close()` to where you are opening and reading the file. Once you open and read the file, you have the contents in memory and no longer need the file to be open.
```
in_file = open(from_file)
indata = in_file.read()
in_file.close()

out_file = open(to_file, 'w')
out_file.write(indata)
out_file.close()
``` |
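When a language (or codebase) has no `with`-style construct, the usual way to get the same guarantee is try/finally; a sketch of the reading half, with a throwaway temp file so it runs standalone:

```python
import os
import tempfile

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b'some data')
tmp.close()

in_file = open(tmp.name)
try:
    indata = in_file.read()  # the file gets closed even if read() raises
finally:
    in_file.close()

print(indata)  # some data
os.remove(tmp.name)
```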