title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Can you create a Python list from a string, while keeping characters in specific keywords together? | 35,259,465 | 31 | 2016-02-07T21:40:56Z | 35,259,497 | 16 | 2016-02-07T21:44:51Z | [
"python",
"python-2.7"
] | I want to create a list from the characters in a string, but keep specific keywords together.
For example:
keywords: car, bus
INPUT:
```
"xyzcarbusabccar"
```
OUTPUT:
```
["x", "y", "z", "car", "bus", "a", "b", "c", "car"]
``` | ```
s = "xyzcarbusabccar"
import re
print re.findall("bus|car|\w", s)
['x', 'y', 'z', 'car', 'bus', 'a', 'b', 'c', 'car']
```
Or maybe `\S` for any non-whitespace chars:
```
s = "xyzcarbusabccar!"
import re
print re.findall("bus|car|\S", s)
['x', 'y', 'z', 'car', 'bus', 'a', 'b', 'c', 'car', '!']
```
Just make sure you get the order correct, putting longer words first, if you want the longest matches.
```
In [7]: s = "xyzcarsbusabccar!"
In [8]: re.findall("bus|car|cars|\S", s)
Out[8]: ['x', 'y', 'z', 'car', 's', 'bus', 'a', 'b', 'c', 'car', '!']
In [9]: re.findall("bus|cars|car|\S", s)
Out[9]: ['x', 'y', 'z', 'cars', 'bus', 'a', 'b', 'c', 'car', '!']
``` |
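For Python 3 (where `print` is a function), the same idea can be packaged so the pattern is built from an arbitrary keyword list; the `tokenize` helper name and the length-based sort are illustrative additions, not part of the answer above:

```python
import re

def tokenize(s, keywords):
    # Longest keywords first, so e.g. "cars" wins over "car";
    # re.escape guards against regex metacharacters in keywords.
    pattern = "|".join(sorted(map(re.escape, keywords), key=len, reverse=True))
    return re.findall(pattern + r"|\S", s)

print(tokenize("xyzcarbusabccar", ["car", "bus"]))
# ['x', 'y', 'z', 'car', 'bus', 'a', 'b', 'c', 'car']
```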
Why did conda and pip just stop working? 'CompiledFFI' object has no attribute 'def_extern' | 35,259,538 | 7 | 2016-02-07T21:49:02Z | 35,620,223 | 8 | 2016-02-25T06:45:27Z | [
"python",
"python-2.7",
"openssl",
"pip",
"anaconda"
] | I just installed/upgraded the following packages on my system (Mac OSX 10.7.5, using python 2.7.11).
```
package | build
---------------------------|-----------------
enum34-1.1.2 | py27_0 55 KB
idna-2.0 | py27_0 123 KB
ipaddress-1.0.14 | py27_0 27 KB
pyasn1-0.1.9 | py27_0 54 KB
pycparser-2.14 | py27_0 147 KB
cffi-1.2.1 | py27_0 167 KB
cryptography-1.0.2 | py27_0 370 KB
pyopenssl-0.14 | py27_0 122 KB
ndg-httpsclient-0.3.3 | py27_0 30 KB
------------------------------------------------------------
Total: 1.1 MB
```
Afterwards, I get the following error when trying to call pip or anaconda:
```
'CompiledFFI' object has no attribute 'def_extern'
```
What's going on and how do I fix this?
Here's the full error message:
```
$ conda
Traceback (most recent call last):
File "/Users/User/miniconda/bin/conda", line 5, in <module>
sys.exit(main())
File "/Users/User/miniconda/lib/python2.7/site-packages/conda/cli/main.py", line 118, in main
from conda.cli import main_search
File "/Users/User/miniconda/lib/python2.7/site-packages/conda/cli/main_search.py", line 12, in <module>
from conda.misc import make_icon_url
File "/Users/User/miniconda/lib/python2.7/site-packages/conda/misc.py", line 19, in <module>
from conda.api import get_index
File "/Users/User/miniconda/lib/python2.7/site-packages/conda/api.py", line 10, in <module>
from conda.fetch import fetch_index
File "/Users/User/miniconda/lib/python2.7/site-packages/conda/fetch.py", line 24, in <module>
from conda.connection import CondaSession, unparse_url, RETRIES
File "/Users/User/miniconda/lib/python2.7/site-packages/conda/connection.py", line 24, in <module>
import requests
File "/Users/User/miniconda/lib/python2.7/site-packages/requests/__init__.py", line 53, in <module>
from .packages.urllib3.contrib import pyopenssl
File "/Users/User/miniconda/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 54, in <module>
import OpenSSL.SSL
File "/Users/User/miniconda/lib/python2.7/site-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "/Users/User/miniconda/lib/python2.7/site-packages/OpenSSL/rand.py", line 11, in <module>
from OpenSSL._util import (
File "/Users/User/miniconda/lib/python2.7/site-packages/OpenSSL/_util.py", line 6, in <module>
from cryptography.hazmat.bindings.openssl.binding import Binding
File "/Users/User/miniconda/lib/python2.7/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 68, in <module>
error=-1)
File "/Users/User/miniconda/lib/python2.7/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 57, in wrapper
ffi.def_extern(name=name, **kwargs)(func)
AttributeError: 'CompiledFFI' object has no attribute 'def_extern'
``` | I had this error too, but I've resolved it by upgrading cffi like so:
```
pip install --upgrade cffi
``` |
How to use pip with python3.5 after upgrade from 3.4? | 35,261,468 | 6 | 2016-02-08T02:02:35Z | 35,261,593 | 7 | 2016-02-08T02:28:28Z | [
"python",
"pip",
"python-3.5"
] | I'm on Ubuntu and I have python2.7 (it came pre-installed), python3.4 (used before today), and python3.5 (which I upgraded to today) installed in parallel. They all work fine on their own.
However, I want to use `pip` to install some packages, and I can't figure out how to do this for my 3.5 installation because `pip` installs for 2.7 and `pip3` installs python 3.4 packages.
For instance, I have asyncio installed on 3.4, but I can't import it from 3.5. When I do `pip3 install asyncio`, it tells me the requirement is already satisfied.
I'm a bit of a newbie, but I did some snooping around install directories and couldn't find anything and I've googled to no avail. | I suppose you can run `pip` through Python until this is sorted out. (<https://docs.python.org/dev/installing/>)
A quick googling seems to indicate that this is indeed a bug. Try this and report back:
```
python3.4 -m pip --version
python3.5 -m pip --version
```
If they report different versions then I guess you're good to go. Just run `python3.5 -m pip install package` instead of `pip3 install package` to install 3.5 packages. |
How to nicely handle [:-0] slicing? | 35,264,670 | 2 | 2016-02-08T07:50:40Z | 35,264,751 | 7 | 2016-02-08T07:56:39Z | [
"python",
"numpy",
"slice"
] | In implementing an autocorrelation function I have a term like
```
for k in range(start,N):
    c[k] = np.sum(f[:-k] * f[k:])/(N-k)
```
Now everything works fine if `start = 1`, but I'd like to handle the `start = 0` case nicely without a conditional.
Obviously it doesn't work as it is because `f[:-0] == f[:0]` and returns an empty array, while I'd want the full array in that case. | Don't use negative indices in this case
```
f[:len(f)-k]
```
For `k == 0` it returns the whole array. For any other positive `k` it's equivalent to `f[:-k]` |
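A quick sketch of the difference; plain lists behave the same way as numpy arrays for this kind of slicing:

```python
f = [10, 20, 30, 40]

print(f[:-0])  # [] -- because -0 == 0, this is f[:0]

# f[:len(f)-k] sidesteps the -0 pitfall for every k >= 0:
for k in range(3):
    print(k, f[:len(f) - k])
# 0 [10, 20, 30, 40]
# 1 [10, 20, 30]
# 2 [10, 20]
```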
Print letters in specific pattern in Python | 35,266,225 | 20 | 2016-02-08T09:32:26Z | 35,266,290 | 15 | 2016-02-08T09:35:30Z | [
"python",
"regex",
"string"
] | I have the following string and I split it:
```
>>> st = '%2g%k%3p'
>>> l = filter(None, st.split('%'))
>>> print l
['2g', 'k', '3p']
```
Now I want to print the g letter two times, the k letter one time and the p letter three times:
```
ggkppp
```
How is it possible? | You could use a generator expression with `isdigit()` to check whether the first symbol is a digit or not, and then return the following string repeated the appropriate number of times. Then you could use `join` to get your output:
```
''.join(i[1:]*int(i[0]) if i[0].isdigit() else i for i in l)
```
Demonstration:
```
In [70]: [i[1:]*int(i[0]) if i[0].isdigit() else i for i in l ]
Out[70]: ['gg', 'k', 'ppp']
In [71]: ''.join(i[1:]*int(i[0]) if i[0].isdigit() else i for i in l)
Out[71]: 'ggkppp'
```
**EDIT**
Using the `re` module when the leading number has several digits:
```
''.join(re.search('(\d+)(\w+)', i).group(2)*int(re.search('(\d+)(\w+)', i).group(1)) if re.search('(\d+)(\w+)', i) else i for i in l)
```
Example:
```
In [144]: l = ['12g', '2kd', 'h', '3p']
In [145]: ''.join(re.search('(\d+)(\w+)', i).group(2)*int(re.search('(\d+)(\w+)', i).group(1)) if re.search('(\d+)(\w+)', i) else i for i in l)
Out[145]: 'ggggggggggggkdkdhppp'
```
**EDIT2**
For your input like:
```
st = '%2g_%3k%3p'
```
You could replace `_` with an empty string and then add `_` back to the end if the word from the list ends with the `_` symbol:
```
st = '%2g_%3k%3p'
l = list(filter(None, st.split('%')))
''.join((re.search('(\d+)(\w+)', i).group(2)*int(re.search('(\d+)(\w+)', i).group(1))).replace("_", "") + '_' * i.endswith('_') if re.search('(\d+)(\w+)', i) else i for i in l)
```
Output:
```
'gg_kkkppp'
```
**EDIT3**
A solution without the `re` module, using plain loops, that works for counts of up to 2 digits. You could define functions:
```
def add_str(ind, st):
    if not st.endswith('_'):
        return st[ind:] * int(st[:ind])
    else:
        return st[ind:-1] * int(st[:ind]) + '_'
def collect(l):
    final_str = ''
    for i in l:
        if i[0].isdigit():
            if i[1].isdigit():
                final_str += add_str(2, i)
            else:
                final_str += add_str(1, i)
        else:
            final_str += i
    return final_str
```
And then use them as:
```
l = ['12g_', '3k', '3p']
print(collect(l))
gggggggggggg_kkkppp
``` |
Print letters in specific pattern in Python | 35,266,225 | 20 | 2016-02-08T09:32:26Z | 35,266,372 | 13 | 2016-02-08T09:40:52Z | [
"python",
"regex",
"string"
] | I have the following string and I split it:
```
>>> st = '%2g%k%3p'
>>> l = filter(None, st.split('%'))
>>> print l
['2g', 'k', '3p']
```
Now I want to print the g letter two times, the k letter one time and the p letter three times:
```
ggkppp
```
How is it possible? | One-liner Regex way:
```
>>> import re
>>> st = '%2g%k%3p'
>>> re.sub(r'%|(\d*)(\w+)', lambda m: int(m.group(1))*m.group(2) if m.group(1) else m.group(2), st)
'ggkppp'
```
The `%|(\d*)(\w+)` regex matches every `%` and captures any digits preceding a run of word characters into one group, and the word characters that follow into another group. On replacement, each match is replaced with the value returned by the replacement function, so the `%` characters are dropped.
or
```
>>> re.sub(r'%(\d*)(\w+)', lambda m: int(m.group(1))*m.group(2) if m.group(1) else m.group(2), st)
'ggkppp'
``` |
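A runnable form of the second one-liner, wrapped in a helper for reuse (the `expand` name is illustrative); since `\d*` captures every leading digit, multi-digit counts such as `%12g` also work:

```python
import re

def expand(st):
    # Repeat the captured letters group(2) by the captured count group(1),
    # falling back to the letters alone when there is no digit prefix.
    return re.sub(r'%(\d*)(\w+)',
                  lambda m: int(m.group(1)) * m.group(2) if m.group(1) else m.group(2),
                  st)

print(expand('%2g%k%3p'))  # ggkppp
print(expand('%12g%k'))    # 'gggggggggggg' (12 g's) followed by 'k'
```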
Print letters in specific pattern in Python | 35,266,225 | 20 | 2016-02-08T09:32:26Z | 35,266,403 | 11 | 2016-02-08T09:42:15Z | [
"python",
"regex",
"string"
] | I have the following string and I split it:
```
>>> st = '%2g%k%3p'
>>> l = filter(None, st.split('%'))
>>> print l
['2g', 'k', '3p']
```
Now I want to print the g letter two times, the k letter one time and the p letter three times:
```
ggkppp
```
How is it possible? | This assumes you are always printing a single letter, but the preceding number may be longer than a single digit in base 10.
```
seq = ['2g', 'k', '3p']
result = ''.join(int(s[:-1] or 1) * s[-1] for s in seq)
assert result == "ggkppp"
``` |
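Since `s[:-1]` keeps every leading digit, multi-digit counts work unchanged; a quick check (the input list here is made up for illustration):

```python
seq = ['12g', 'k', '3p']
# s[:-1] is the digit prefix ('' when absent, hence the `or 1` default);
# s[-1] is the single letter to repeat.
result = ''.join(int(s[:-1] or 1) * s[-1] for s in seq)
print(result)  # 'gggggggggggg' (12 g's) + 'k' + 'ppp'
```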
get count of values associated with key in dict python | 35,269,374 | 4 | 2016-02-08T12:10:46Z | 35,269,393 | 9 | 2016-02-08T12:11:54Z | [
"python",
"python-2.7",
"dictionary"
] | The list of dicts is like:
```
[{'id': 19, 'success': True, 'title': u'apple'},
{'id': 19, 'success': False, 'title': u'some other '},
{'id': 19, 'success': False, 'title': u'dont know'}]
```
I want a count of how many dicts have `success` as `True`.
I have tried,
```
len(filter(lambda x: x, [i['success'] for i in s]))
```
How can I make it more elegant, in a pythonic way? | You could use `sum()` to add up your boolean values; `True` is 1 in a numeric context, `False` is 0:
```
sum(d['success'] for d in s)
```
This works because the Python `bool` type is a subclass of `int`, for historic reasons.
If you wanted to make it explicit, you could use a conditional expression, but readability is not improved with that in my opinion:
```
sum(1 if d['success'] else 0 for d in s)
``` |
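Putting it together with the sample data from the question (written for Python 3, so the `u''` prefixes are dropped):

```python
s = [{'id': 19, 'success': True,  'title': 'apple'},
     {'id': 19, 'success': False, 'title': 'some other '},
     {'id': 19, 'success': False, 'title': 'dont know'}]

# True counts as 1 and False as 0, so the sum is the number of successes.
count = sum(d['success'] for d in s)
print(count)  # 1
```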
What could cause NetworkX & PyGraphViz to work fine alone but not together? | 35,279,733 | 14 | 2016-02-08T21:32:10Z | 35,280,794 | 32 | 2016-02-08T22:44:35Z | [
"python",
"graph",
"graphviz",
"networkx",
"pygraphviz"
] | I'm working on learning some Python graph visualization. I found a few blog posts doing [some](https://kenmxu.wordpress.com/2013/06/12/network-x-4-8-2/) [things](http://www.randomhacks.net/2009/12/29/visualizing-wordnet-relationships-as-graphs/) I wanted to try. Unfortunately I didn't get too far, encountering this error: `AttributeError: 'module' object has no attribute 'graphviz_layout'`
The simplest snip of code which **reproduces the error** on my system is this,
```
In [1]: import networkx as nx
In [2]: G=nx.complete_graph(5)
In [3]: nx.draw_graphviz(G)
------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-481ad1c1771c> in <module>()
----> 1 nx.draw_graphviz(G)
/usr/lib/python2.7/site-packages/networkx/drawing/nx_pylab.pyc in draw_graphviz(G, prog, **kwargs)
982 See networkx.draw_networkx() for a description of optional keywords.
983 """
--> 984 pos = nx.drawing.graphviz_layout(G, prog)
985 draw(G, pos, **kwargs)
986
AttributeError: 'module' object has no attribute 'graphviz_layout'
```
I found a similar [questions](http://stackoverflow.com/questions/33859001/how-do-you-get-networkx-graphviz-layout-to-work), and [posts](http://www.jhreview.com/tech-stack/questions/22698227/python-installation-issues-with-pygraphviz-and-graphviz) having difficulty with this combo, but not quite the same error. One was [close](https://stackoverflow.com/questions/32587445/drawing-graph-in-graphviz-layout-in-python-using-nx-draw-graphviz-gives-error), but it automagically resolved itself.
**First, I verified all the required packages** for [NetworkX](https://networkx.readthedocs.org/en/stable/install.html#requirements) and PyGraphViz (which lists similar requirements to [Scipy](http://scipy.org/install.html#individual-binary-and-source-packages)) were installed.
**Next, I looked for snips to test my installation of these modules in Python.** The first two examples are from the [NetworkX Reference Documentation](https://networkx.readthedocs.org/en/stable/reference/drawing.html#module-networkx.drawing.nx_agraph). This lists a few example snips using both MatPlotLib and GraphViz.
**The MatPlotLib code example works for me (renders an image to the screen)**,
```
In [11]: import networkx as nx
In [12]: G=nx.complete_graph(5)
In [13]: import matplotlib.pyplot as plt
In [13]: nx.draw(G)
In [13]: plt.show()
```
However, the **GraphViz snips also produce similar errors,**
```
In [16]: import networkx as nx
In [17]: G=nx.complete_graph(5)
In [18]: H=nx.from_agraph(A)
------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-808fa68cefaa> in <module>()
----> 1 H=nx.from_agraph(A)
AttributeError: 'module' object has no attribute 'from_agraph'
In [19]: A=nx.to_agraph(G)
------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-19-32d1616bb41a> in <module>()
----> 1 A=nx.to_agraph(G)
AttributeError: 'module' object has no attribute 'to_agraph'
In [20]: print G
complete_graph(5)
```
**Then I tried PyGraphViz's tutorial page** on [Layout & Drawing](http://pygraphviz.github.io/documentation/development/tutorial.html#layout-and-drawing). This has some snips as well. **PyGraphViz passed** with Neato (default), PyDot, and Circo Post Script output (viewed using Gimp). (The only difference is these PyGraphViz examples are not rendered to the display, but to files).
```
In [1]: import pygraphviz as pgv
In [2]: d={'1': {'2': None}, '2': {'1': None, '3': None}, '3': {'2': None}}
In [3]: A=pgv.AGraph(d)
In [4]: A.write("pygraphviz_test_01.dot")
In [5]: A.layout()
In [6]: A.draw('pygraphviz_test_01.png')
```
**Adding to the complexity,** PyGraphViz [requires GraphViz](http://pygraphviz.github.io/documentation/pygraphviz-1.3rc1/install.html#requirements) package binaries in order to work. I'm using Arch Linux, and installed that distro's version. Arch Linux has an [example to test installation](https://wiki.archlinux.org/index.php/Graphviz) (again, output to file) **which also passed**.
What am I missing? **What could cause NetworkX & PyGraphViz to work fine alone but not together?** | There is a small bug in the `draw_graphviz` function in networkx-1.11, triggered by the change whereby the graphviz drawing tools are no longer imported into the top-level namespace of networkx.
The following is a workaround
```
In [1]: import networkx as nx
In [2]: G = nx.complete_graph(5)
In [3]: from networkx.drawing.nx_agraph import graphviz_layout
In [4]: pos = graphviz_layout(G)
In [5]: nx.draw(G, pos)
```
To use the other functions such as `to_agraph`, `write_dot`, etc., you will need to use the longer path name explicitly
```
nx.drawing.nx_agraph.write_dot()
```
or import the function into the top-level namespace
```
from networkx.drawing.nx_agraph import write_dot
write_dot(G, "graph.dot")
``` |
In python, how do I cast a class object to a dict | 35,282,222 | 33 | 2016-02-09T01:02:20Z | 35,282,286 | 26 | 2016-02-09T01:08:11Z | [
"python"
] | Let's say I've got a simple class in python
```
class Wharrgarbl(object):
    def __init__(self, a, b, c, sum, version='old'):
        self.a = a
        self.b = b
        self.c = c
        self.sum = 6
        self.version = version
    def __int__(self):
        return self.sum + 9000
    def __what_goes_here__(self):
        return {'a': self.a, 'b': self.b, 'c': self.c}
```
I can cast it to an integer very easily
```
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> int(w)
9006
```
Which is great! But, now I want to cast it to a dict in a similar fashion
```
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> dict(w)
{'a': 'one', 'c': 'three', 'b': 'two'}
```
What do I need to define for this to work? I tried substituting both `__dict__` and `dict` for `__what_goes_here__`, but `dict(w)` resulted in a `TypeError: Wharrgarbl object is not iterable` in both cases. I don't think simply making the class iterable will solve the problem. I also attempted many googles with as many different wordings of "python cast object to dict" as I could think of but couldn't find anything relevant :{
Also! Notice how calling `w.__dict__` won't do what I want because it's going to contain `w.version` and `w.sum`. I want to customize the cast to `dict` in the same way that I can customize the cast to `int` by using `def __int__(self)`.
I know that I could just do something like this
```
>>> w.__what_goes_here__()
{'a': 'one', 'c': 'three', 'b': 'two'}
```
But I am assuming there is a pythonic way to make `dict(w)` work since it is the same type of thing as `int(w)` or `str(w)`. If there isn't a more pythonic way, that's fine too, just figured I'd ask. Oh! I guess since it matters, this is for python 2.7, but super bonus points for a 2.4 old and busted solution as well.
There is another question [Overloading \_\_dict\_\_() on python class](http://stackoverflow.com/questions/23252370/overloading-dict-on-python-class) that is similar to this one but may be different enough to warrant this not being a duplicate. I believe that OP is asking how to cast all the data in his class objects as dictionaries. I'm looking for a more customized approach in that I don't want everything in `__dict__` included in the dictionary returned by `dict()`. Something like public vs private variables may suffice to explain what I'm looking for. The objects will be storing some values used in calculations and such that I don't need/want to show up in the resulting dictionaries.
UPDATE:
I've chosen to go with the `asdict` route suggested but it was a tough choice selecting what I wanted to be the answer to the question. Both @RickTeachey and @jpmc26 provided the answer I'm going to roll with but the former had more info and options and landed on the same result as well and was upvoted more so I went with it. Upvotes all around though and thanks for the help. I've lurked long and hard on stackoverflow and I'm trying to get my toes in the water more. | You need to override `__iter__`.
Like this, for example:
```
def __iter__(self):
    yield 'a', self.a
    yield 'b', self.b
    yield 'c', self.c
```
Now you can just do:
```
dict(my_object)
```
I would also suggest looking into the `collections.abc` module. This answer might be helpful:
<http://stackoverflow.com/a/27803404/2437514>
Specifically, you'll want to look at the `Mapping` and `MutableMapping` objects. If you use that module and inherit your object from one of the `dict`-like abcs, you can cast your object to a `dict` just as you require.
As noted in the comments below: it's worth mentioning that doing this the abc way essentially turns your object class into a `dict`-like class. So everything you would be able to do with `dict`, you could do with your own class object. This may be, or may not be, desirable. It would also mean there would probably be little reason (because of duck typing) to bother casting your object to a `dict` in the first place.
Also consider looking at the numerical abcs in the `numbers` module:
<https://docs.python.org/3/library/numbers.html>
Since you're also casting your object to an `int`, it might make more sense to essentially turn your class into a full-fledged `int` so that casting isn't necessary.
However, after thinking about this a bit more, I would very much consider the `asdict` way of doing things suggested by other answers. It does not appear that your object is really much of a collection. Using the iter or abc method could be confusing for others unless it is very obvious exactly which object members would and would not be iterated. |
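A minimal sketch of the `asdict` route applied to the class from the question; `asdict` is an ordinary method name chosen by convention (after `namedtuple._asdict`), not a special protocol:

```python
class Wharrgarbl(object):
    def __init__(self, a, b, c, sum, version='old'):
        self.a = a
        self.b = b
        self.c = c
        self.sum = sum          # internal state, deliberately excluded below
        self.version = version  # likewise excluded

    def asdict(self):
        # Explicit, selective conversion: no magic, easy to read.
        return {'a': self.a, 'b': self.b, 'c': self.c}

w = Wharrgarbl('one', 'two', 'three', 6)
print(w.asdict())  # {'a': 'one', 'b': 'two', 'c': 'three'}
```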
In python, how do I cast a class object to a dict | 35,282,222 | 33 | 2016-02-09T01:02:20Z | 35,282,292 | 8 | 2016-02-09T01:08:34Z | [
"python"
] | Let's say I've got a simple class in python
```
class Wharrgarbl(object):
    def __init__(self, a, b, c, sum, version='old'):
        self.a = a
        self.b = b
        self.c = c
        self.sum = 6
        self.version = version
    def __int__(self):
        return self.sum + 9000
    def __what_goes_here__(self):
        return {'a': self.a, 'b': self.b, 'c': self.c}
```
I can cast it to an integer very easily
```
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> int(w)
9006
```
Which is great! But, now I want to cast it to a dict in a similar fashion
```
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> dict(w)
{'a': 'one', 'c': 'three', 'b': 'two'}
```
What do I need to define for this to work? I tried substituting both `__dict__` and `dict` for `__what_goes_here__`, but `dict(w)` resulted in a `TypeError: Wharrgarbl object is not iterable` in both cases. I don't think simply making the class iterable will solve the problem. I also attempted many googles with as many different wordings of "python cast object to dict" as I could think of but couldn't find anything relevant :{
Also! Notice how calling `w.__dict__` won't do what I want because it's going to contain `w.version` and `w.sum`. I want to customize the cast to `dict` in the same way that I can customize the cast to `int` by using `def __int__(self)`.
I know that I could just do something like this
```
>>> w.__what_goes_here__()
{'a': 'one', 'c': 'three', 'b': 'two'}
```
But I am assuming there is a pythonic way to make `dict(w)` work since it is the same type of thing as `int(w)` or `str(w)`. If there isn't a more pythonic way, that's fine too, just figured I'd ask. Oh! I guess since it matters, this is for python 2.7, but super bonus points for a 2.4 old and busted solution as well.
There is another question [Overloading \_\_dict\_\_() on python class](http://stackoverflow.com/questions/23252370/overloading-dict-on-python-class) that is similar to this one but may be different enough to warrant this not being a duplicate. I believe that OP is asking how to cast all the data in his class objects as dictionaries. I'm looking for a more customized approach in that I don't want everything in `__dict__` included in the dictionary returned by `dict()`. Something like public vs private variables may suffice to explain what I'm looking for. The objects will be storing some values used in calculations and such that I don't need/want to show up in the resulting dictionaries.
UPDATE:
I've chosen to go with the `asdict` route suggested but it was a tough choice selecting what I wanted to be the answer to the question. Both @RickTeachey and @jpmc26 provided the answer I'm going to roll with but the former had more info and options and landed on the same result as well and was upvoted more so I went with it. Upvotes all around though and thanks for the help. I've lurked long and hard on stackoverflow and I'm trying to get my toes in the water more. | something like this would probably work
```
class MyClass:
    def __init__(self,x,y,z):
        self.x = x
        self.y = y
        self.z = z
    def __iter__(self): # overriding this to return tuples of (key, value)
        return iter([('x',self.x),('y',self.y),('z',self.z)])
dict(MyClass(5,6,7)) # because dict knows how to deal with tuples of (key,value)
``` |
In python, how do I cast a class object to a dict | 35,282,222 | 33 | 2016-02-09T01:02:20Z | 35,286,073 | 14 | 2016-02-09T07:19:43Z | [
"python"
] | Let's say I've got a simple class in python
```
class Wharrgarbl(object):
    def __init__(self, a, b, c, sum, version='old'):
        self.a = a
        self.b = b
        self.c = c
        self.sum = 6
        self.version = version
    def __int__(self):
        return self.sum + 9000
    def __what_goes_here__(self):
        return {'a': self.a, 'b': self.b, 'c': self.c}
```
I can cast it to an integer very easily
```
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> int(w)
9006
```
Which is great! But, now I want to cast it to a dict in a similar fashion
```
>>> w = Wharrgarbl('one', 'two', 'three', 6)
>>> dict(w)
{'a': 'one', 'c': 'three', 'b': 'two'}
```
What do I need to define for this to work? I tried substituting both `__dict__` and `dict` for `__what_goes_here__`, but `dict(w)` resulted in a `TypeError: Wharrgarbl object is not iterable` in both cases. I don't think simply making the class iterable will solve the problem. I also attempted many googles with as many different wordings of "python cast object to dict" as I could think of but couldn't find anything relevant :{
Also! Notice how calling `w.__dict__` won't do what I want because it's going to contain `w.version` and `w.sum`. I want to customize the cast to `dict` in the same way that I can customize the cast to `int` by using `def __int__(self)`.
I know that I could just do something like this
```
>>> w.__what_goes_here__()
{'a': 'one', 'c': 'three', 'b': 'two'}
```
But I am assuming there is a pythonic way to make `dict(w)` work since it is the same type of thing as `int(w)` or `str(w)`. If there isn't a more pythonic way, that's fine too, just figured I'd ask. Oh! I guess since it matters, this is for python 2.7, but super bonus points for a 2.4 old and busted solution as well.
There is another question [Overloading \_\_dict\_\_() on python class](http://stackoverflow.com/questions/23252370/overloading-dict-on-python-class) that is similar to this one but may be different enough to warrant this not being a duplicate. I believe that OP is asking how to cast all the data in his class objects as dictionaries. I'm looking for a more customized approach in that I don't want everything in `__dict__` included in the dictionary returned by `dict()`. Something like public vs private variables may suffice to explain what I'm looking for. The objects will be storing some values used in calculations and such that I don't need/want to show up in the resulting dictionaries.
UPDATE:
I've chosen to go with the `asdict` route suggested but it was a tough choice selecting what I wanted to be the answer to the question. Both @RickTeachey and @jpmc26 provided the answer I'm going to roll with but the former had more info and options and landed on the same result as well and was upvoted more so I went with it. Upvotes all around though and thanks for the help. I've lurked long and hard on stackoverflow and I'm trying to get my toes in the water more. | There is no magic method that will do what you want. The answer is simply to name it appropriately. `asdict` is a reasonable choice for a plain conversion to `dict`, inspired primarily by `namedtuple`. However, your method will obviously contain special logic that might not be immediately obvious from that name; you are returning only a subset of the class' state. If you can come up with a slightly more verbose name that communicates the concepts clearly, all the better.
Other answers suggest using `__iter__`, but unless your object is truly iterable (represents a series of elements), this really makes little sense and constitutes an awkward abuse of the method. The fact that you want to filter out some of the class' state makes this approach even more dubious. |
How and operator works in python? | 35,288,845 | 2 | 2016-02-09T09:54:48Z | 35,289,231 | 7 | 2016-02-09T10:10:50Z | [
"python"
] | Could you please explain how `and` works in Python?
I know that when
```
x  y | x and y
0  0 | 0  (returns x)
0  1 | 0  (x)
1  0 | 0  (y)
1  1 | 1  (y)
```
In the interpreter:
```
>>> lis = [1,2,3,4]
>>> 1 and 5 in lis
```
the output is `False`,
but,
```
>>> 6 and 1 in lis
```
the output is `True`.
How does it work?
And what should I do when, in my program, I want to enter an `if` branch only when both values are in the list? | Despite lots of arguments to the contrary,
```
6 and 1 in lis
```
means
```
6 and (1 in lis)
```
It does *not* mean:
```
(6 and 1) in lis
```
The [page](https://docs.python.org/3/reference/expressions.html#operator-precedence) that Maroun Maroun linked to in his comments indicates that `and` has a lower precedence than `in`.
You can test it like this:
```
0 and 1 in [0]
```
If this means `(0 and 1) in [0]` then it will evaluate to true, because `0` is in `[0]`.
If it means `0 and (1 in [0])` then it will evaluate to `0`, because `0` is false.
It evaluates to `0`. |
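For the second part of the question (entering an `if` only when both values are present), each membership has to be tested separately; a sketch:

```python
lis = [1, 2, 3, 4]

if 1 in lis and 5 in lis:        # two separate membership tests
    print("both present")
else:
    print("not both present")    # this branch runs: 5 is not in lis

# The same check scales to any number of values:
print(all(v in lis for v in (1, 5)))  # False
```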
pythonic way for axis-wise winner-take-all in numpy | 35,291,189 | 9 | 2016-02-09T11:42:39Z | 35,291,507 | 10 | 2016-02-09T11:57:55Z | [
"python",
"numpy"
] | I am wondering what is the most concise and pythonic way to keep only the maximum element in each row of a 2D numpy array while setting all other elements to zero. Example:
given the following numpy array:
```
a = [ [1, 8, 3, 6],
      [5, 5, 60, 1],
      [63, 9, 9, 23] ]
```
I want the answer to be:
```
b = [ [0, 8, 0, 0],
      [0, 0, 60, 0],
      [63, 0, 0, 0 ] ]
```
I can think of several ways to solve that, but what interests me is whether there are python functions to do this quickly
Thank you in advance | You can use [`np.max`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.amax.html#numpy.amax) to take the maximum along one axis, then use [`np.where`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html) to zero out the non-maximal elements:
```
np.where(a == a.max(axis=1, keepdims=True), a, 0)
```
The `keepdims=True` argument keeps the singleton dimension after taking the max (i.e. so that `a.max(1, keepdims=True).shape == (3, 1)`), which simplifies broadcasting it against `a`. |
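For comparison, the same winner-take-all logic in plain Python (a sketch; like the `np.where` version, a row with tied maxima keeps every tied element):

```python
a = [[1, 8, 3, 6],
     [5, 5, 60, 1],
     [63, 9, 9, 23]]

# Keep each row's maximum, zero everything else.
b = [[v if v == max(row) else 0 for v in row] for row in a]
print(b)  # [[0, 8, 0, 0], [0, 0, 60, 0], [63, 0, 0, 0]]
```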
Tensorflow embedding_lookup | 35,295,191 | 2 | 2016-02-09T14:57:35Z | 35,296,384 | 11 | 2016-02-09T15:54:13Z | [
"python",
"python-2.7",
"machine-learning",
"tensorflow",
"word-embedding"
] | I am trying to learn the word representation of the imdb dataset "from scratch" through the TensorFlow `tf.nn.embedding_lookup()` function. If I understand it correctly, I have to set up an embedding layer before the other hidden layer, and then when I perform gradient descent, the layer will "learn" a word representation in the weights of this layer. However, when I try to do this, I get a shape error between my embedding layer and the first fully-connected layer of my network.
```
def multilayer_perceptron(_X, _weights, _biases):
    with tf.device('/cpu:0'), tf.name_scope("embedding"):
        W = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0), name="W")
        embedding_layer = tf.nn.embedding_lookup(W, _X)
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(embedding_layer, _weights['h1']), _biases['b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
    return tf.matmul(layer_2, weights['out']) + biases['out']
x = tf.placeholder(tf.int32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
pred = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred,y))
train_step = tf.train.GradientDescentOptimizer(0.3).minimize(cost)
init = tf.initialize_all_variables()
```
The error I get is:
```
ValueError: Shapes TensorShape([Dimension(None), Dimension(300), Dimension(128)])
and TensorShape([Dimension(None), Dimension(None)]) must have the same rank
``` | The shape error arises because you are using a two-dimensional tensor, `x`, to index into a two-dimensional embedding tensor `W`. Think of [`tf.nn.embedding_lookup()`](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#embedding_lookup) (and its close cousin [`tf.gather()`](https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#gather)) as taking each integer value `i` in `x` and replacing it with the row `W[i, :]`. From the error message, one can infer that `n_input = 300` and `embedding_size = 128`. In general, the result of `tf.nn.embedding_lookup()` has a number of dimensions equal to `rank(x) + rank(W) - 1`; in this case, 3. The error arises when you try to multiply this result by `_weights['h1']`, which is a (two-dimensional) matrix.
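The same shape arithmetic can be seen with plain NumPy fancy indexing, which behaves like the lookup described above (illustrative sizes only):

```
import numpy as np

vocab_size, embedding_size, batch, n_input = 1000, 128, 32, 300
W = np.zeros((vocab_size, embedding_size))
x = np.zeros((batch, n_input), dtype=np.int64)

# Each integer in x is replaced by a row of W, so the result
# has rank(x) + rank(W) - 1 == 2 + 2 - 1 == 3 dimensions.
print(W[x].shape)  # (32, 300, 128)
```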
To fix this code, it depends on what you're trying to do, and why you are passing in a matrix of inputs to the embedding. One common thing to do is to *aggregate* the embedding vectors for each input example into a single row per example using an operation like [`tf.reduce_sum()`](https://www.tensorflow.org/versions/master/api_docs/python/math_ops.html#reduce_sum). For example, you might do the following:
```
W = tf.Variable(
tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0) ,name="W")
embedding_layer = tf.nn.embedding_lookup(W, _X)
# Reduce along dimension 1 (`n_input`) to get a single vector (row)
# per input example.
embedding_aggregated = tf.reduce_sum(embedding_layer, [1])
layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(
embedding_aggregated, _weights['h1']), _biases['b1']))
``` |
How can I refactor this code to be more concise? | 35,296,348 | 2 | 2016-02-09T15:52:37Z | 35,296,428 | 7 | 2016-02-09T15:56:08Z | [
"python",
"python-2.7",
"google-analytics",
"google-analytics-api"
] | I'm using Python to extract some data from the Google Analytics Core Reporting API. I've managed to use functions to make different calls to the API (below are just two examples) but I'm wondering how can I refactor this to make it even shorter? There's quite a lot of duplicate code still.
```
def get_pvs(service, profile_id, start_date, end_date, type, data):
if type == "url":
return service.data().ga().get(
ids = 'ga:' + profile_id,
start_date = start_date,
end_date = end_date,
metrics = 'ga:pageviews',
dimensions = 'ga:pagePath',
filters = 'ga:pageviews!=0',
sort = '-ga:pageviews',
max_results = '10000').execute()
elif type == "author":
return service.data().ga().get(
ids = 'ga:' + profile_id,
start_date = start_date,
end_date = end_date,
metrics = 'ga:pageviews',
# Post Author
dimensions = 'ga:dimension2',
sort = '-ga:pageviews',
max_results = '100').execute()
``` | Create a dictionary with type-specific arguments, then apply that with `**kw`:
```
def get_pvs(service, profile_id, start_date, end_date, type, data):
if type == 'url':
kwargs = {
'dimensions': 'ga:pagePath',
'filters': 'ga:pageviews!=0',
'max_results': '10000'
}
elif type == 'author':
kwargs = {
'dimensions': 'ga:dimension2',
'max_results': '100'
}
return service.data().ga().get(
ids = 'ga:' + profile_id,
start_date = start_date,
end_date = end_date,
metrics = 'ga:pageviews',
sort = '-ga:pageviews',
**kwargs).execute()
```
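The `**kwargs` unpacking this relies on can be seen in isolation; the names below are hypothetical and not part of the GA client API:

```
def build_query(ids, metrics, **kwargs):
    # Fixed arguments merged with whatever extras were unpacked into kwargs.
    params = dict(ids=ids, metrics=metrics)
    params.update(kwargs)
    return params

url_kwargs = {'dimensions': 'ga:pagePath', 'max_results': '10000'}
query = build_query('ga:12345', 'ga:pageviews', **url_kwargs)
print(query['dimensions'])  # ga:pagePath
```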
I left the common arguments in place. If `type` can have more values, then you probably need to add an `else` branch (e.g. `else: return None`) too. |
"freeze" some variables/scopes in tensorflow: stop_gradient vs passing variables to minimize | 35,298,326 | 11 | 2016-02-09T17:26:36Z | 35,304,001 | 16 | 2016-02-09T23:08:58Z | [
"python",
"tensorflow"
] | I am trying to implement [Adversarial NN](http://arxiv.org/abs/1406.2661), which requires to 'freeze' one or the other part of the graph during alternating training minibatches. I.e. there two sub-networks: G and D.
```
G( Z ) -> Xz
D( X ) -> Y
```
where loss function of `G` depends on `D[G(Z)], D[X]`.
First I need to train the parameters in D with all G parameters fixed, and then the parameters in G with the parameters in D fixed. The loss function in the first case will be the negative of the loss function in the second case, and the update will have to apply to the parameters of either the first or the second subnetwork.
I saw that tensorflow has a `tf.stop_gradient` function. For the purpose of training the D (downstream) subnetwork I can use this function to block the gradient flow to
```
Z -> [ G ] -> tf.stop_gradient(Xz) -> [ D ] -> Y
```
The `tf.stop_gradient` is very succinctly annotated with no in-line example (and example `seq2seq.py` is too long and not that easy to read), but looks like it must be called during the graph creation. **Does it imply that if I want to block/unblock gradient flow in alternating batches, I need to re-create and re-initialize the graph model?**
Also it seems that **one cannot block the gradient flowing through the G (upstream) network by means of `tf.stop_gradient`, right?**
As an alternative I saw that one can pass the list of variables to the optimizer call as `opt_op = opt.minimize(cost, <list of variables>)`, which would be an easy solution if one could get all variables in the scopes of each subnetwork. **Can one get a `<list of variables>` for a tf.scope?** | The easiest way to achieve this, as you mention in your question, is to create two optimizer operations using separate calls to `opt.minimize(cost, ...)`. By default, the optimizer will use all of the variables in [`tf.trainable_variables()`](https://www.tensorflow.org/versions/master/api_docs/python/state_ops.html#trainable_variables). If you want to filter the variables to a particular scope, you can use the optional `scope` argument to [`tf.get_collection()`](https://www.tensorflow.org/versions/master/api_docs/python/framework.html#get_collection) as follows:
```
optimizer = tf.train.AdagradOptimizer(0.01)
first_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
"scope/prefix/for/first/vars")
first_train_op = optimizer.minimize(cost, var_list=first_train_vars)
second_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
"scope/prefix/for/second/vars")
second_train_op = optimizer.minimize(cost, var_list=second_train_vars)
``` |
Count the frequency of a recurring list -- inside a list of lists | 35,316,186 | 3 | 2016-02-10T13:08:34Z | 35,316,236 | 9 | 2016-02-10T13:11:02Z | [
"python",
"list",
"python-2.7",
"counter"
] | I have a list of lists in python and I need to find how many times each sub-list has occurred. Here is a sample,
```
from collections import Counter
list1 = [[ 1., 4., 2.5], [ 1., 2.66666667, 1.33333333],
[ 1., 2., 2.], [ 1., 2.66666667, 1.33333333], [ 1., 4., 2.5],
[ 1., 2.66666667, 1.33333333]]
c = Counter(x for x in iter(list1))
print c
```
The above code would work if the elements of the list were hashable (say `int`), but in this case they are lists and I get an error
```
TypeError: unhashable type: 'list'
```
How can I count these lists so I get something like
```
[ 1., 2.66666667, 1.33333333], 3
[ 1., 4., 2.5], 2
[ 1., 2., 2.], 1
``` | Just convert the lists to `tuple`:
```
>>> c = Counter(tuple(x) for x in iter(list1))
>>> c
Counter({(1.0, 2.66666667, 1.33333333): 3, (1.0, 4.0, 2.5): 2, (1.0, 2.0, 2.0): 1})
```
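To reproduce the ordered output asked for in the question, `most_common()` returns the tuples sorted by count (a self-contained sketch):

```
from collections import Counter

list1 = [[1., 4., 2.5], [1., 2.66666667, 1.33333333],
         [1., 2., 2.], [1., 2.66666667, 1.33333333],
         [1., 4., 2.5], [1., 2.66666667, 1.33333333]]

c = Counter(tuple(x) for x in list1)
for key, count in c.most_common():
    print(list(key), count)
# [1.0, 2.66666667, 1.33333333] 3
# [1.0, 4.0, 2.5] 2
# [1.0, 2.0, 2.0] 1
```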
Remember to do the same for lookup:
```
>>> c[tuple(list1[0])]
2
``` |
How to sort an array of integers faster than quicksort? | 35,317,442 | 11 | 2016-02-10T14:09:58Z | 35,317,443 | 14 | 2016-02-10T14:09:58Z | [
"python",
"algorithm",
"performance",
"sorting",
"numpy"
] | Sorting an array of integers with numpy's quicksort has become the
bottleneck of my algorithm. Unfortunately, numpy does not have
[radix sort yet](https://github.com/numpy/numpy/issues/6050).
Although [counting sort](https://en.wikipedia.org/wiki/Counting_sort) would be a one-liner in numpy:
```
np.repeat(np.arange(1+x.max()), np.bincount(x))
```
(see the accepted answer to the [How can I vectorize this python count sort so it is absolutely as fast as it can be?](http://stackoverflow.com/q/18501867/341970) question), the integers
in my application can run from `0` to `2**32`.
Am I stuck with quicksort?
---
This post was primarily motivated by the
[Numpy grouping using itertools.groupby performance](http://stackoverflow.com/q/4651683/341970)
question.
Also note that
[it is not merely OK to ask and answer your own question, it is explicitly encouraged.](http://blog.stackoverflow.com/2011/07/its-ok-to-ask-and-answer-your-own-questions/) | No, you are not stuck with quicksort. You could use, for example,
`integer_sort` from
[Boost.Sort](http://www.boost.org/doc/libs/1_60_0/libs/sort/doc/html/)
or `u4_sort` from [usort](https://bitbucket.org/ais/usort). When sorting this array:
```
array(randint(0, high=1<<32, size=10**8), uint32)
```
I get the following results:
```
NumPy quicksort: 8.636 s 1.0 (baseline)
Boost.Sort integer_sort: 4.327 s 2.0x speedup
usort u4_sort: 2.065 s 4.2x speedup
```
I would not jump to conclusions based on this single experiment and use
`usort` blindly. I would test with my actual data and measure what happens.
Your mileage ***will*** vary depending on your data and on your machine. The
`integer_sort` in Boost.Sort has a rich set of options for tuning, see the
[documentation](http://www.boost.org/doc/libs/1_60_0/libs/sort/doc/html/).
Below I describe two ways to call a native C or C++ function from Python. Despite the long description, it's fairly easy to do it.
---
**Boost.Sort**
Put these lines into the spreadsort.cpp file:
```
#include <cinttypes>
#include "boost/sort/spreadsort/spreadsort.hpp"
using namespace boost::sort::spreadsort;
extern "C" {
void spreadsort(std::uint32_t* begin, std::size_t len) {
integer_sort(begin, begin + len);
}
}
```
It basically instantiates the templated `integer_sort` for 32 bit
unsigned integers; the `extern "C"` part ensures C linkage by disabling
name mangling.
Assuming you are using gcc and that the necessary include files of boost
are under the `/tmp/boost_1_60_0` directory, you can compile it:
```
g++ -O3 -std=c++11 -march=native -DNDEBUG -shared -fPIC -I/tmp/boost_1_60_0 spreadsort.cpp -o spreadsort.so
```
The key flags are `-fPIC` to generate
[position-independent code](https://en.wikipedia.org/wiki/Position-independent_code)
and `-shared` to generate a
[shared object](https://en.wikipedia.org/wiki/Library_%28computing%29#Shared_libraries)
.so file. (Read the docs of gcc for further details.)
Then, you wrap the `spreadsort()` C++ function
in Python using [`ctypes`](https://docs.python.org/2/library/ctypes.html):
```
from ctypes import cdll, c_size_t, c_uint32
from numpy import uint32
from numpy.ctypeslib import ndpointer
__all__ = ['integer_sort']
# In spreadsort.cpp: void spreadsort(std::uint32_t* begin, std::size_t len)
lib = cdll.LoadLibrary('./spreadsort.so')
sort = lib.spreadsort
sort.restype = None
sort.argtypes = [ndpointer(c_uint32, flags='C_CONTIGUOUS'), c_size_t]
def integer_sort(arr):
assert arr.dtype == uint32, 'Expected uint32, got {}'.format(arr.dtype)
sort(arr, arr.size)
```
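The same `ctypes` pattern (load a shared library, declare `argtypes` and `restype`, then call) can be tried without compiling anything by borrowing a function from the C library; on POSIX systems, `CDLL(None)` returns a handle to the running process, which exposes the libc symbols:

```
from ctypes import CDLL, c_int

libc = CDLL(None)             # POSIX only: handle to the current process
libc.abs.restype = c_int
libc.abs.argtypes = [c_int]

print(libc.abs(-42))          # 42
```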
Alternatively, you can use [cffi](https://cffi.readthedocs.org/en/latest/overview.html):
```
from cffi import FFI
from numpy import uint32
__all__ = ['integer_sort']
ffi = FFI()
ffi.cdef('void spreadsort(uint32_t* begin, size_t len);')
C = ffi.dlopen('./spreadsort.so')
def integer_sort(arr):
assert arr.dtype == uint32, 'Expected uint32, got {}'.format(arr.dtype)
begin = ffi.cast('uint32_t*', arr.ctypes.data)
C.spreadsort(begin, arr.size)
```
At the `cdll.LoadLibrary()` and `ffi.dlopen()` calls I assumed that the
path to the `spreadsort.so` file is `./spreadsort.so`. Alternatively,
you can write
```
lib = cdll.LoadLibrary('spreadsort.so')
```
or
```
C = ffi.dlopen('spreadsort.so')
```
if you append the path to `spreadsort.so` to the `LD_LIBRARY_PATH` environment
variable. See also [Shared Libraries](http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html).
**Usage.** In both cases you simply call the above Python wrapper function `integer_sort()`
with your numpy array of 32 bit unsigned integers.
---
**usort**
As for `u4_sort`, you can compile it as follows:
```
cc -DBUILDING_u4_sort -I/usr/include -I./ -I../ -I../../ -I../../../ -I../../../../ -std=c99 -fgnu89-inline -O3 -g -fPIC -shared -march=native u4_sort.c -o u4_sort.so
```
Issue this command in the directory where the `u4_sort.c` file is located.
(Probably there is a less hackish way but I failed to figure that out. I
just looked into the deps.mk file in the usort directory to find out
the necessary compiler flags and include paths.)
Then, you can wrap the C function as follows:
```
from cffi import FFI
from numpy import uint32
__all__ = ['integer_sort']
ffi = FFI()
ffi.cdef('void u4_sort(unsigned* a, const long sz);')
C = ffi.dlopen('u4_sort.so')
def integer_sort(arr):
assert arr.dtype == uint32, 'Expected uint32, got {}'.format(arr.dtype)
begin = ffi.cast('unsigned*', arr.ctypes.data)
C.u4_sort(begin, arr.size)
```
In the above code, I assumed that the path to `u4_sort.so` has been
appended to the `LD_LIBRARY_PATH` environment variable.
**Usage.** As before with Boost.Sort, you simply call the above Python wrapper function `integer_sort()` with your numpy array of 32 bit unsigned integers. |
can't group by anagram correctly | 35,321,508 | 4 | 2016-02-10T17:06:09Z | 35,321,685 | 9 | 2016-02-10T17:14:45Z | [
"python",
"string",
"algorithm",
"hashtable"
] | I wrote a python function to group a list of words by anagram:
```
def groupByAnagram(list):
dic = {}
for x in list:
sort = ''.join(sorted(x))
if sort in dic == True:
dic[sort].append(x)
else:
dic[sort] = [x]
for y in dic:
for z in dic[y]:
print z
groupByAnagram(['cat','tac','dog','god','aaa'])
```
but this only returns:
aaa
god
tac
what am I doing wrong? | ```
if sort in dic == True:
```
Thanks to [operator chaining](https://docs.python.org/3.5/reference/expressions.html#comparisons), this line is equivalent to
```
if (sort in dic) and (dic == True):
```
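You can watch the chaining at work with a quick check (illustrative values):

```
dic = {'act': ['cat', 'tac']}
sort = 'act'

print(sort in dic)            # True
print(sort in dic == True)    # False: parsed as (sort in dic) and (dic == True)
print((sort in dic) == True)  # True: explicit parentheses defeat the chaining
```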
But dic is a dictionary, so it will never compare equal to True. Just drop the == True comparison entirely.
```
if sort in dic:
``` |
Is there a way to sandbox test execution with pytest, especially filesystem access? | 35,322,452 | 3 | 2016-02-10T17:51:38Z | 35,579,433 | 7 | 2016-02-23T13:53:31Z | [
"python",
"unit-testing",
"testing",
"docker",
"py.test"
] | I'm interested in executing potentially untrusted tests with pytest in some kind of sandbox, like docker, similarly to what continuous integration services do.
I understand that to properly sandbox a python process you need OS-level isolation, like running the tests in a disposable chroot/container, but in my use case I don't need to protect against intentionally malicious code, only from the dangerous behaviour of "randomly" pairing functions with arguments. So less strict sandboxing may still be acceptable. But I didn't find any plugin that enables any form of sandboxing.
What is the best way to sandbox tests execution in pytest?
**Update**: This question is not about [python sandboxing in general](http://stackoverflow.com/questions/3068139/how-can-i-sandbox-python-in-pure-python) as the tests' code is run by pytest and I can't change the way it is executed to use `exec` or `ast` or whatever. Also using pypy-sandbox is not an option unfortunately as it is "a prototype only" as per the [PyPy feature page](http://pypy.org/features.html).
**Update 2**: Hoger Krekel on the pytest-dev mailing list [suggests using a dedicated testuser via pytest-xdist](https://mail.python.org/pipermail/pytest-dev/2016-February/003394.html) for user-level isolation:
```
py.test --tx ssh=OTHERUSER@localhost --dist=each
```
which [made me realise](https://mail.python.org/pipermail/pytest-dev/2016-February/003399.html) that for my CI-like use case:
> having a "disposable" environment is as important as having an isolated
> one, so that every test or every session runs from the same initial
> state and it is not influenced by what older sessions might have left
> on folders writable by the *testuser* (/home/testuser, /tmp, /var/tmp,
> etc).
So the testuser+xdist is close to a solution, but not quite there.
Just for context I need isolation to run [pytest-nodev](https://pytest-nodev.readthedocs.org). | After quite a bit of research I didn't find any ready-made way for pytest to run a project's tests with OS-level isolation and in a disposable environment. Many approaches are possible and have advantages and disadvantages, but most of them have more moving parts than I would feel comfortable with.
The absolute minimal (but opinionated) approach I devised is the following:
* build a python docker image with:
+ a dedicated non-root user: `pytest`
+ all project dependencies from `requirements.txt`
+ the project installed in develop mode
* run py.test in a container that mounts the project folder on the host as the home of `pytest` user
To implement the approach add the following `Dockerfile` to the top folder of the project you want to test next to the `requirements.txt` and `setup.py` files:
```
FROM python:3
# setup pytest user
RUN adduser --disabled-password --gecos "" --uid 7357 pytest
COPY ./ /home/pytest
WORKDIR /home/pytest
# setup the python and pytest environments
RUN pip install --upgrade pip setuptools pytest
RUN pip install --upgrade -r requirements.txt
RUN python setup.py develop
# setup entry point
USER pytest
ENTRYPOINT ["py.test"]
```
Build the image once with:
```
docker build -t pytest .
```
Run py.test inside the container mounting the project folder as volume on /home/pytest with:
```
docker run --rm -it -v `pwd`:/home/pytest pytest [USUAL_PYTEST_OPTIONS]
```
Note that `-v` mounts the volume as uid 1000 so host files are not writable by the pytest user with uid forced to 7357.
Now you should be able to develop and test your project with OS-level isolation.
**Update:** If you also run the tests on the host you may need to remove the python and pytest caches that are not writable inside the container. On the host run:
```
rm -rf .cache/ && find . -name __pycache__ | xargs rm -rf
``` |
List of tensor names in graph in Tensorflow | 35,336,648 | 10 | 2016-02-11T10:24:57Z | 35,337,827 | 10 | 2016-02-11T11:15:22Z | [
"python",
"tensorflow"
] | The graph object in Tensorflow has a method called `get_tensor_by_name(name)`. Is there any way to get a list of valid tensor names?
If not, does anyone know the valid names for the pretrained model inception-v3 [from here](https://www.tensorflow.org/versions/v0.6.0/tutorials/image_recognition/index.html)? From their example, `pool_3` is one valid tensor, but a list of all of them would be nice. I looked at [the paper referred to](http://arxiv.org/abs/1512.00567) and some of the layers seem to correspond to the sizes in table 1, but not all of them. | The paper does not accurately reflect the model. If you download the source from arxiv it has an accurate model description as `model.txt`, and the names in there correlate strongly with the names in the released model.
To answer your first question, `sess.graph.get_operations()` gives you a list of operations. For an op, `op.name` gives you the name and `op.values()` gives you a list of tensors it produces (in the inception-v3 model, all tensor names are the op name with a `":0"` appended to it, so `pool_3:0` is the tensor produced by the final pooling op). |
Adding list items to the end of items in another list | 35,339,484 | 2 | 2016-02-11T12:34:42Z | 35,339,540 | 7 | 2016-02-11T12:37:24Z | [
"python",
"list",
"list-comprehension"
] | I have:
```
foo = ['/directory/1/', '/directory/2']
bar = ['1.txt', '2.txt']
```
I want:
```
faa = ['/directory/1/1.txt', '/directory/2/2.txt']
```
I can only seem to call operations that are trying to add strings to lists which result in a type error. | ```
>>> import os
>>> [os.path.join(a, b) for a, b in zip(foo, bar)]
['/directory/1/1.txt', '/directory/2/2.txt']
``` |
AWS Lambda import module error in python | 35,340,921 | 5 | 2016-02-11T13:40:31Z | 35,355,800 | 7 | 2016-02-12T05:58:49Z | [
"python",
"amazon-web-services",
"aws-lambda"
] | I am creating an AWS Lambda python deployment package. I am using one external dependency, `requests`. I installed the external dependency using the AWS documentation <http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html>. Below is my python code.
```
import requests
import boto3
import urllib
print('Loading function')
s3 = boto3.client('s3')
def lambda_handler(event, context):
#print("Received event: " + json.dumps(event, indent=2))
# Get the object from the event and show its content type
bucket = event['Records'][0]['s3']['bucket']['name']
key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
try:
response = s3.get_object(Bucket=bucket, Key=key)
s3.download_file(bucket,key, '/tmp/data.txt')
lines = [line.rstrip('\n') for line in open('/tmp/data.txt')]
for line in lines:
col=line.split(',')
print(col[5],col[6])
print("CONTENT TYPE: " + response['ContentType'])
return response['ContentType']
except Exception as e:
print(e)
print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
raise e
```
I created the zip from the content of the project-dir directory (zipping the directory content, not the directory itself) and uploaded it to the lambda. When I execute the function I get the error mentioned below.
```
START RequestId: 9e64e2c7-d0c3-11e5-b34e-75c7fb49d058 Version: $LATEST
Unable to import module 'lambda_function': No module named lambda_function
END RequestId: 9e64e2c7-d0c3-11e5-b34e-75c7fb49d058
REPORT RequestId: 9e64e2c7-d0c3-11e5-b34e-75c7fb49d058 Duration: 19.63 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 9 MB
```
Kindly help me to debug the error. | The error was due to the file name of the lambda function. While creating the lambda function it will ask for the Lambda function handler. You have to name it as **`Python_File_Name.Method_Name`**. In this scenario I named it `lambda.lambda_handler` (`lambda.py` is the file name).
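In other words, the handler string is `<module>.<function>`, where the module is the Python file name without the `.py` extension (a sketch; `lambda_function.py` is a hypothetical file name):

```
# Contents of lambda_function.py (hypothetical):
#
#     def lambda_handler(event, context):
#         return 'ok'
#
# The matching handler setting in the console would then be:
handler = 'lambda_function.lambda_handler'

module_name, function_name = handler.split('.')
print(module_name, function_name)  # lambda_function lambda_handler
```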
Please find below the snapshot.
[](http://i.stack.imgur.com/Ribcl.png) |
How to read the csv file properly if each row contains different number of fields (number quite big)? | 35,344,282 | 7 | 2016-02-11T16:07:59Z | 35,345,090 | 9 | 2016-02-11T16:42:19Z | [
"python",
"csv",
"pandas"
] | I have a text file from amazon, containing the following info:
```
# user item time rating review text (the header is added by me for explanation, not in the text file)
disjiad123 TYh23hs9 13160032 5 I love this phone as it is easy to use
hjf2329ccc TGjsk123 14423321 3 Suck restaurant
```
As you can see, the data is separated by spaces and there is a different number of columns in each row. However, the review text also contains spaces.
Here is the code I have tried:
```
pd.read_csv(filename, sep = " ", header = None, names = ["user","item","time","rating", "review"], usecols = ["user", "item", "rating"])#I'd like to skip the text review part
```
And such an error occurs:
```
ValueError: Passed header names mismatches usecols
```
When I tried to read all the columns:
```
pd.read_csv(filename, sep = " ", header = None)
```
And the error this time is:
```
Error tokenizing data. C error: Expected 229 fields in line 3, saw 320
```
And given that the review text is so long in many rows, the method of adding header names for each column in this [question](http://stackoverflow.com/questions/27020216/import-csv-with-different-number-of-columns-per-row-using-pandas) cannot work.
I wonder how to read the csv file keeping the review text, or skipping it, as needed. Thank you in advance!
EDIT:
The problem has been solved by Martin Evans perfectly. But now I am playing with another data set with a similar but different format. Now the order of the data is reversed:
```
# review text user item time rating (the header is added by me for explanation, not in the text file)
I love this phone as it is easy to used isjiad123 TYh23hs9 13160032 5
Suck restaurant hjf2329ccc TGjsk123 14423321 3
```
Do you have any idea how to read it properly? Any help would be appreciated! | As suggested, `DictReader` could also be used as follows to create a list of rows. This could then be imported as a frame in pandas:
```
import pandas as pd
import csv
rows = []
csv_header = ['user', 'item', 'time', 'rating', 'review']
frame_header = ['user', 'item', 'rating', 'review']
with open('input.csv', 'rb') as f_input:
for row in csv.DictReader(f_input, delimiter=' ', fieldnames=csv_header[:-1], restkey=csv_header[-1], skipinitialspace=True):
try:
rows.append([row['user'], row['item'], row['rating'], ' '.join(row['review'])])
except KeyError, e:
rows.append([row['user'], row['item'], row['rating'], ' '])
frame = pd.DataFrame(rows, columns=frame_header)
print frame
```
This would display the following:
```
user item rating review
0 disjiad123 TYh23hs9 5 I love this phone as it is easy to use
1 hjf2329ccc TGjsk123 3 Suck restaurant
```
---
If the review appears at the start of the row, then one approach would be to parse the line in reverse as follows:
```
import pandas as pd
import csv
rows = []
frame_header = ['rating', 'time', 'item', 'user', 'review']
with open('input.csv', 'rb') as f_input:
for row in f_input:
cols = [col[::-1] for col in row[::-1][2:].split(' ') if len(col)]
rows.append(cols[:4] + [' '.join(cols[4:][::-1])])
frame = pd.DataFrame(rows, columns=frame_header)
print frame
```
This would display:
```
rating time item user \
0 5 13160032 TYh23hs9 isjiad123
1 3 14423321 TGjsk123 hjf2329ccc
review
0 I love this phone as it is easy to used
1 Suck restaurant
```
`row[::-1]` is used to reverse the text of the whole line; the `[2:]` skips over the line ending, which is now at the start of the line. Each line is then split on spaces. A list comprehension then re-reverses each split entry. Finally, a row is appended to `rows` by taking the fixed 5 column entries (now at the start); the remaining entries are then joined back together with a space and added as the final column.
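The reversing trick can be checked on a single line in isolation (assuming a two-character `\r\n` line ending, which is why `[2:]` drops exactly two characters):

```
row = 'I love this phone as it is easy to used isjiad123 TYh23hs9 13160032 5\r\n'

cols = [col[::-1] for col in row[::-1][2:].split(' ') if len(col)]
parsed = cols[:4] + [' '.join(cols[4:][::-1])]
print(parsed)
# ['5', '13160032', 'TYh23hs9', 'isjiad123', 'I love this phone as it is easy to used']
```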
The benefit of this approach is that it does not rely on your input data being in an exactly fixed width format, and you don't have to worry if the column widths being used change over time. |
Reading input sound signal using Python | 35,344,649 | 9 | 2016-02-11T16:22:19Z | 35,390,981 | 11 | 2016-02-14T11:01:05Z | [
"python",
"python-2.7",
"audio",
"soundcard"
] | I need to get a sound signal from a jack-connected microphone and use the data for immediate processing in Python.
The processing and subsequent steps are clear. I am lost only on getting the signal into the program.
The number of channels is irrelevant, one is enough. I am not going to play the sound back so there should be no need for ASIO on soundcard.
My question is: how can I capture Jack audio from Python?
(It would be great if there were a package, well documented and with nice examples :-). | Have you tried [pyaudio](https://people.csail.mit.edu/hubert/pyaudio/)?
To install: `python -m pip install pyaudio`
Recording example, from official website:
```
"""PyAudio example: Record a few seconds of audio and save to a WAVE file."""
import pyaudio
import wave
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
RECORD_SECONDS = 5
WAVE_OUTPUT_FILENAME = "output.wav"
p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK)
print("* recording")
frames = []
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
frames.append(data)
print("* done recording")
stream.stop_stream()
stream.close()
p.terminate()
wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
wf.close()
```
This example works on my laptop with Python 2.7.11 (and 3.5.1) in Windows 8.1, pyaudio 0.2.9. |
How to find elements with two possible class names by XPath? | 35,344,780 | 4 | 2016-02-11T16:27:39Z | 35,344,853 | 7 | 2016-02-11T16:31:35Z | [
"python",
"selenium",
"xpath"
] | How to find elements with two possible class names using an `XPath` expression?
I'm working in Python with `Selenium` and I want to find all elements whose `class` has one of two possible names.
1. class="item ng-scope highlight"
2. class="item ng-scope"
`'//div[@class="list"]/div[@class="item ng-scope highlight"]//h3/a[@class="ng-binding"]'`
Of course I could do two separate searches and concatenate the results into one list, but there should be a simpler and more efficient way. Maybe by using `|`. | You can use `or`:
```
//div[@class="list"]/div[@class="item ng-scope highlight" or @class="item ng-scope"]//h3/a[@class="ng-binding"]
```
Note that `ng-scope` in general is not a good class name to rely on, because it is a "pure technical" AngularJS specific class (same goes for the `ng-binding` actually) that angular elements have. Please see if using `contains()` and checking the `item` class only would be enough to cover the use case:
```
//div[@class="list"]/div[contains(@class, "item")]//h3/a[@class="ng-binding"]
```
FYI, note how concise a CSS selector could be in your case:
```
div.list > div.item h3 a
``` |
CSV Writing to File Difficulties | 35,354,039 | 9 | 2016-02-12T02:55:18Z | 35,354,255 | 8 | 2016-02-12T03:20:05Z | [
"python",
"python-2.7",
"csv",
"python-2.x",
"file-writing"
] | I am supposed to add a specific label to my `CSV` file based off conditions. The `CSV` file has 10 columns and the third, fourth, and fifth columns are the ones that affect the conditions the most and I add my label on the tenth column. I have code here which ended in an infinite loop:
```
import csv
import sys
w = open(sys.argv[1], 'w')
r = open(sys.argv[1], 'r')
reader = csv.reader(r)
writer = csv.writer(w)
for row in reader:
if row[2] or row[3] or row[4] == '0':
row[9] == 'Label'
writer.writerow(row)
w.close()
r.close()
```
I do not know why it would end in an infinite loop.
EDIT: I made a mistake and my original infinite loop program had this line:
```
w = open(sys.argv[1], 'a')
```
I changed `'a'` to `'w'` but this ended up erasing the entire `CSV` file itself. So now I have a different problem. | You have a problem here `if row[2] or row[3] or row[4] == '0':` and here `row[9] == 'Label'`. You can use [`any`](https://docs.python.org/2/library/functions.html#any) to check whether several variables equal the same value, and use `=` (not `==`) to assign a value; also I would recommend using [`with open`](https://docs.python.org/2/tutorial/inputoutput.html#methods-of-file-objects).
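The difference is easy to see on a sample row (illustrative values):

```
row = ['user1', 'item1', '5', '6', '1', '', '', '', '', '']

# Original condition: row[2] is a non-empty string, hence truthy,
# so this is True even though none of the fields equals '0'.
buggy = bool(row[2] or row[3] or row[4] == '0')

# Intended condition: is any of the three fields exactly '0'?
fixed = any(x == '0' for x in (row[2], row[3], row[4]))

print(buggy, fixed)  # True False
```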
Additionally, you can't read and write the same `csv` file at the same time, so you need to save your changes to a new csv file; you can remove the original afterwards and rename the new one using [`os.remove`](https://docs.python.org/2/library/os.html#os.remove) and [`os.rename`](https://docs.python.org/2/library/os.html#os.rename):
```
import csv
import sys
import os
with open('some_new_file.csv', 'w') as w, open(sys.argv[1], 'r') as r:
reader, writer = csv.reader(r), csv.writer(w)
for row in reader:
if any(x == '0' for x in (row[2], row[3], row[4])):
row[9] = 'Label'
writer.writerow(row)
os.remove('{}'.format(sys.argv[1]))
os.rename('some_new_file.csv', '{}'.format(sys.argv[1]))
``` |
Python list order | 35,359,009 | 12 | 2016-02-12T09:28:40Z | 35,359,102 | 28 | 2016-02-12T09:34:26Z | [
"python",
"python-2.7"
] | In the small script I wrote, the `.append()` function adds the entered item to the beginning of the list, instead of the end of that list. (As you can clearly understand, I am quite new to Python, so go easy on me)
> `list.append(x)`
> Add an item to the end of the list; equivalent to `a[len(a):] = [x]`.
That's what it says in <https://docs.python.org/2/tutorial/datastructures.html>.
You can see my code below:
```
user_input = []
def getting_text(entered_text):
if entered_text == "done":
print "entering the texts are done!"
else:
getting_text(raw_input("Enter the text or write done to finish entering "))
user_input.append(entered_text)
getting_text(raw_input("Enter the first text "))
print user_input
```
Am I misunderstanding something here, because the print function prints `c,b,a` instead of `a,b,c` (the order I entered the input is `a,b,c`) | Ok, this is what's happening.
When your text isn't `"done"`, you've programmed it so that you **immediately** call the function again (i.e, recursively call it). Notice how you've actually set it to append an item to the list AFTER you do the `getting_text(raw_input("Enter the text or write done to finish entering "))` line.
So basically, when you add your variables, it's going to add all of the variables AFTER it's done with the recursive function.
Hence, when you type `a`, then it calls the function again (hasn't inputted anything to the list yet). Then you type `b`, then `c`. When you type `done`, the recursive bit is finished. NOW, it does `user_input.append(...`. HOWEVER, the order is reversed because it deals with `c` first since that was the latest thing.
This can be shown when you print the list inside the function:
```
>>> def getting_text(entered_text):
... print user_input
... if entered_text == "done":
... print "entering the texts are done!"
... else:
... getting_text(raw_input("Enter the text or write done to finish entering "))
... user_input.append(entered_text)
...
>>>
>>> getting_text(raw_input("Enter the first text "))
Enter the first text a
[]
Enter the text or write done to finish entering b
[]
Enter the text or write done to finish entering c
[]
Enter the text or write done to finish entering done
[]
entering the texts are done!
>>> user_input
['c', 'b', 'a']
```
Note the print statement line 2.
---
So how do you fix this? Simple: append to the list before you recursively call.
```
>>> user_input = []
>>> def getting_text(entered_text):
... if entered_text == "done":
... print "entering the texts are done!"
... else:
... user_input.append(entered_text)
... getting_text(raw_input("Enter the text or write done to finish entering "))
...
>>> user_input = []
>>> getting_text(raw_input("Enter the first text "))
Enter the first text a
Enter the text or write done to finish entering b
Enter the text or write done to finish entering c
Enter the text or write done to finish entering done
entering the texts are done!
>>> user_input
['a', 'b', 'c']
``` |
Individual words in a list then printing the positions of those words | 35,366,457 | 2 | 2016-02-12T15:36:08Z | 35,366,613 | 10 | 2016-02-12T15:43:47Z | [
"python"
] | I need help with a program that identifies individual words in a sentence, stores these in a list and replaces each word in the original sentence with the position of that word in the list. Here is what I have so far.
for example:
```
'ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY'
```
would be recreated as 1,2,3,4,5,6,7,8,9,1,3,9,6,7,8,4,5
```
from collections import OrderedDict
sentence = input("Please input a sentence without punctuation").upper()
punctuation = ("`1234567890-=¬!£$%^&*()_+\|[];'#,./{}:@~<>?")
FilteredSentence = ("")
for char in sentence:
if char not in punctuation:
FilteredSentence = FilteredSentence+char
FilteredSentence = FilteredSentence.split(" ")
refined = list(OrderedDict.fromkeys(FilteredSentence))
```
I have managed to identify the individual words in the list, however I can't work out how to replace the words in the original sentence with the positions of those individual words. | Like this? Just do a list-comprehension to get all the indices of all the words.
```
In [77]: sentence = "ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY"
In [78]: words = sentence.split()
In [79]: [words.index(s)+1 for s in words]
Out[79]: [1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 3, 9, 6, 7, 8, 4, 5]
``` |
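A side note on the approach above: `words.index(s)` returns the position of the *first* occurrence, which is exactly what makes repeated words share a number, but it also rescans the list for every word. An equivalent one-pass version uses a dictionary of first-seen positions (the `positions` name is my addition):

```python
sentence = "ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY"
words = sentence.split()

# Map each word to the 1-based position of its first occurrence.
positions = {}
for w in words:
    if w not in positions:
        positions[w] = len(positions) + 1

print([positions[w] for w in words])
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 3, 9, 6, 7, 8, 4, 5]
```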
Apply the order of one list to other lists | 35,368,775 | 8 | 2016-02-12T17:30:45Z | 35,368,833 | 12 | 2016-02-12T17:34:10Z | [
"python",
"list",
"python-2.7"
] | I have a list, with a specific order:
```
L = [1, 2, 5, 8, 3]
```
And some sub lists with elements of the main list, but with an different order:
```
L1 = [5, 3, 1]
L2 = [8, 1, 5]
```
How can I apply the order of `L` to `L1` and `L2`?
For example, the correct order after the processing should be:
```
L1 = [1, 5, 3]
L2 = [1, 5, 8]
```
I am trying something like this, but I am struggling how to set the new list with the correct order.
```
new_L1 = []
for i in L1:
if i in L:
print L.index(i) #get the order in L
``` | Looks like you just want to sort `L1` and `L2` according to the index where the value falls in `L`.
```
L = [1, 2, 5, 8, 3]
L1 = [5, 3, 1]
L2 = [8, 1, 5]
L1.sort(key = lambda x: L.index(x))
L2.sort(key = lambda x: L.index(x))
``` |
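Since `L.index` rescans `L` for every element being sorted, for long lists it can be cheaper to precompute the positions once and use the dictionary as the key. A small variant of the same idea (the `order` name is mine):

```python
L = [1, 2, 5, 8, 3]
L1 = [5, 3, 1]
L2 = [8, 1, 5]

# Build a value -> position lookup once, instead of calling
# L.index() (a linear scan) for every element.
order = {value: i for i, value in enumerate(L)}

L1.sort(key=order.__getitem__)
L2.sort(key=order.__getitem__)

print(L1)  # [1, 5, 3]
print(L2)  # [1, 5, 8]
```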
Is there an easy one liner to convert a text file to a dictionary in Python without using CSV? | 35,372,569 | 2 | 2016-02-12T21:35:31Z | 35,372,611 | 8 | 2016-02-12T21:38:40Z | [
"python",
"file",
"dictionary",
"text"
] | I have a 10,000 line file called "number\_file" like this with four columns of numbers.
```
12123 12312321 12312312 12312312
12123 12312321 12312312 12312312
12123 12312321 12312312 12312312
12123 12312321 12312312 12312312
```
I need to convert the file to a dictionary where the first column numbers are the keys and the entire line are the values
So far, I tried this but it didn't work.
```
dict((line.strip().split('\t')[0] for line in file(number_file)))
```
How do I fix this one liner so that it converts the file to a dictionary? | You could use the following dict comprehension:
```
with open(number_file) as fileobj:
result = {row[0]: row[1:] for line in fileobj for row in (line.split(),)}
```
where the `for row in (one_element_tuple,)` is effectively an assignment.
Or you could use a nested generator expression to handle the splitting of each line:
```
with open(number_file) as fileobj:
result = {row[0]: row[1:] for row in (line.split() for line in fileobj)}
```
However, if your file is really tab-delimited, don't fear the `csv` module:
```
import csv
with open(number_file) as fileobj:
result = {row[0]: row[1:] for row in csv.reader(fileobj, delimiter='\t')}
``` |
In Python, how to sum nested lists: [[1,0], [1,1], [1,0]] → [3,1] | 35,377,466 | 3 | 2016-02-13T07:50:51Z | 35,377,502 | 9 | 2016-02-13T07:55:50Z | [
"python",
"list"
] | I have an array in the form of
```
a = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0]]
```
and I need to sum all values of the same index in the nested lists so that the above yields
```
[3,2]
```
This could get archieved by the following code
```
b = [0]*len(a[0])
for x in a:
b = map(sum, zip(b,x))
```
Since `a` contains hundreds of lists, I wonder if there is a better way to do this. These nested lists have always the same length per run, in the example above 2, but it could well be just 1 or 3 or more, hence the initialization of `b` to `[0]*len(a[0])`.
Examples for the different lengths would be:
```
# nested lists always have 3 items
a = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 1], [1, 0, 0]]
# result: [3, 2, 4]
# nested lists always have 1 item
a = [[1], [1], [0], [0], [1]]
# result: [3]
# mixed lengths would never happen
a = [[1], [0,1], [0], [0,1,1]] # no, this not!
``` | You can simply transpose your initial matrix and sum each row:
```
b = [sum(e) for e in zip(*a)]
``` |
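For completeness, the same comprehension run against the question's examples of different row widths (pure stdlib, nothing beyond the sample data):

```python
examples = [
    ([[1, 0], [1, 1], [0, 0], [0, 1], [1, 0]], [3, 2]),
    ([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 1], [1, 0, 0]], [3, 2, 4]),
    ([[1], [1], [0], [0], [1]], [3]),
]
for a, expected in examples:
    # zip(*a) transposes the rows into columns; sum each column.
    b = [sum(e) for e in zip(*a)]
    print(b)
    assert b == expected
```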
Convert empty dictionary to empty string | 35,389,648 | 5 | 2016-02-14T08:03:36Z | 35,389,716 | 8 | 2016-02-14T08:14:22Z | [
"python",
"string",
"dictionary"
] | ```
>>> d = {}
>>> s = str(d)
>>> print s
{}
```
I need an empty string instead. | An empty dict object is `False` when you try to convert it to a bool object. But if there's something in it, it would be `True`. Like empty list, empty string, empty set, and other objects:
```
>>> d = {}
>>> d
{}
>>> bool(d)
False
>>> d['foo'] = 'bar'
>>> bool(d)
True
```
So it's simple:
```
>>> s = str(d) if d else ''
>>> s
"{'foo': 'bar'}"
>>> d = {}
>>> s = str(d) if d else ''
>>> s
''
```
Or just `if not d: s = ''` if you don't need `s` be string of the dict when there's something in the dict. |
Convert empty dictionary to empty string | 35,389,648 | 5 | 2016-02-14T08:03:36Z | 35,389,969 | 15 | 2016-02-14T08:52:24Z | [
"python",
"string",
"dictionary"
] | ```
>>> d = {}
>>> s = str(d)
>>> print s
{}
```
I need an empty string instead. | You can do it with the shortest way as below, since the empty dictionary is `False`, and do it through [Boolean Operators](https://docs.python.org/2/library/stdtypes.html#boolean-operations-and-or-not).
```
>>> d = {}
>>> str(d or '')
''
```
Or without `str`
```
>>> d = {}
>>> d or ''
''
```
If `d` is not an empty dictionary, convert it to string with `str()`
```
>>> d['f'] = 12
>>> str(d or '')
"{'f': 12}"
``` |
What is the value of None in memory? | 35,391,734 | 7 | 2016-02-14T12:27:36Z | 35,391,758 | 10 | 2016-02-14T12:30:47Z | [
"python"
] | `None` in Python is a reserved word; a question just crossed my mind about the exact value of `None` in memory. What I had in mind is this: `None`'s representation in memory is either 0 or a pointer into the heap. But neither a pointer to an empty area of memory nor the zero value would make sense, because when I tested the following:
```
>>> None.__sizeof__()
16
```
It turns out that `None` consumes 16 bytes and that's actually too much for simply an empty value.
> So what does `None` actually represents in memory? | `None` is singleton object which doesn't provide (almost) none methods and attributes and its only purpose is to signify the fact that there is no value for some specific operation.
As a real object it still needs to have some headers, some reflection information and things like that so it takes the minimum memory occupied by every python object. |
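The singleton claim is easy to verify at the prompt: every `None` in a program is literally the same object, which is also why identity (`is None`) rather than equality is the idiomatic test. A quick demo (my addition):

```python
a = None
b = None

# There is exactly one None object in the interpreter:
print(a is b)             # True
print(id(a) == id(None))  # True

# Which is why identity, not equality, is the idiomatic check:
print(a is None)          # True
```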
Smoothing sympy plots | 35,399,145 | 4 | 2016-02-14T23:07:39Z | 35,399,190 | 7 | 2016-02-14T23:12:48Z | [
"python",
"plot",
"sympy"
] | If I have a function such as `sin(1/x)` I want to plot and show close to 0, how would I smooth out the results in the plot? The default number of sample points is relatively small for what I'm trying to show.
Here's the code:
```
from sympy import symbols, plot, sin
x = symbols('x')
y = sin(1/x)
plot(y, (x, 0, 0.5))
```
[](http://i.stack.imgur.com/JKFnw.png)
As `x` approaches 0, the line becomes more jagged and less "curvy". Is there some way to fix this? | You can set the number of points used manually:
```
plot(y, (x, 0.001, 0.5), adaptive=False, nb_of_points=300000)
```
[](http://i.stack.imgur.com/95NPB.png)
Note: I expected to get a ZeroDivisionError when using the exact question code (that is, having x go from 0 to something), but I don't get an error (strange). I *do* get the error, though, as soon as I use `adaptive=False, nb_of_points=300000`, which is why I set xmin to a non-zero value (`0.001`).
Why does division near to zero have different behaviors in python? | 35,400,114 | 5 | 2016-02-15T01:17:45Z | 35,400,208 | 7 | 2016-02-15T01:28:57Z | [
"python",
"floating-point",
"division"
] | This is not actually a problem, it is more something curious about floating-point arithmetic on Python implementation.
Could someone explain the following behavior?
```
>>> 1/1e-308
1e+308
>>> 1/1e-309
inf
>>> 1/1e-323
inf
>>> 1/1e-324
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: float division by zero
```
It seems that 1 divided by a number near to zero is `inf`, and if the number is even nearer to zero, a `ZeroDivisionError` is thrown. It seems an odd behavior.
The same output for python 2.x/3.x.
---
**EDIT**: my main question here is why we get `inf` for some range and not `ZeroDivisionError`, assuming that Python considers 1e-309 to be effectively zero | This is related to the IEEE754 floating-point format itself, not so much Python's implementation of it.
Generally speaking, floats can represent smaller negative exponents than large positive ones, because of [denormal numbers](https://en.wikipedia.org/wiki/Denormal_number). This is where the mantissa part of the float is no longer implicitly assumed to begin with a 1, but rather describes the entire mantissa, and begins with zeroes. In case you don't know what that is, I'd suggest you read up about how floats are represented, perhaps starting [here](https://en.wikipedia.org/wiki/Double-precision_floating-point_format).
Because of this, when you invert a denormal number, you may end up with a positive exponent too large to represent. The computer then gives you `inf` in its place. The `1e-308` in your example is actually also denormal, but still not small enough to overflow when inverted (because among normal numbers, the standard actually allows for slightly larger positive than negative exponents).
In case of `1e-324`, that number is simply too small to be represented even as a denormal, so that float literal is effectively equal to zero. That's why you get division by zero. The smallest representable 64-bit float is (slightly below) `5e-324`. |
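These boundaries can be inspected from Python itself via `sys.float_info`; a short sketch, assuming the platform uses IEEE754 binary64 doubles (as virtually all do):

```python
import sys

# Smallest positive *normal* double:
print(sys.float_info.min)                           # 2.2250738585072014e-308

# Smallest positive *denormal* double (one unit in the last place):
print(sys.float_info.min * sys.float_info.epsilon)  # 5e-324

# 1e-308 is already denormal, but its reciprocal still fits in a double:
print(1 / 1e-308)   # 1e+308

# One step further down and the reciprocal overflows to infinity:
print(1 / 1e-309)   # inf

# Below the smallest denormal, the literal itself rounds to zero:
print(1e-324 == 0.0)  # True
```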
Correctly extract Emojis from a Unicode string | 35,404,144 | 14 | 2016-02-15T07:58:26Z | 35,462,951 | 10 | 2016-02-17T16:56:25Z | [
"python",
"unicode",
"python-2.x",
"emoji"
] | I am working in Python 2 and I have a string containing emojis as well as other unicode characters. I need to convert it to a list where each entry in the list is a single character/emoji.
```
x = u'😘😘xyz😊😊'
char_list = [c for c in x]
```
The desired output is:
```
['😘', '😘', 'x', 'y', 'z', '😊', '😊']
```
The actual output is:
```
[u'\ud83d', u'\ude18', u'\ud83d', u'\ude18', u'x', u'y', u'z', u'\ud83d', u'\ude0a', u'\ud83d', u'\ude0a']
```
How can I achieve the desired output? | First of all, in Python2, you need to use Unicode strings (`u'<...>'`) for Unicode characters to be seen as Unicode characters. And [correct source encoding](http://stackoverflow.com/questions/728891/correct-way-to-define-python-source-code-encoding) if you want to use the chars themselves rather than the `\UXXXXXXXX` representation in source code.
Now, as per [Python: getting correct string length when it contains surrogate pairs](http://stackoverflow.com/questions/12907022/python-getting-correct-string-length-when-it-contains-surrogate-pairs) and [Python returns length of 2 for single Unicode character string](http://stackoverflow.com/questions/29109944/python-returns-length-of-2-for-single-unicode-character-string), in Python2 "narrow" builds (with `sys.maxunicode==65535`), 32-bit Unicode characters are represented as [surrogate pairs](http://unicode.org/faq/utf_bom.html#utf16-2), and this is not transparent to string functions. This has only been fixed in 3.3 ([PEP0393](https://www.python.org/dev/peps/pep-0393/)).
**The simplest resolution (save for migrating to 3.3+) is to compile a Python "wide" build from source as outlined on the 3rd link.** In it, Unicode characters are all 4-byte (thus are a potential memory hog) but if you need to routinely handle wide Unicode chars, this is probably an acceptable price.
**The solution for a "narrow" build** is **to make a custom set of string functions** (`len`, `slice`; maybe as a subclass of `unicode`) that would detect surrogate pairs and handle them as a single character. I couldn't readily find an existing one (which is strange), but it's not too hard to write:
* as per [UTF-16#U+10000 to U+10FFFF - Wikipedia](https://en.wikipedia.org/wiki/UTF-16#U.2B10000_to_U.2B10FFFF),
+ the 1st character *(high surrogate)* is in range `0xD800..0xDBFF`
+ the 2nd character *(low surrogate)* - in range `0xDC00..0xDFFF`
+ these ranges are reserved and thus cannot occur as regular characters
So here's the code to detect a surrogate pair:
```
def is_surrogate(s,i):
if 0xD800 <= ord(s[i]) <= 0xDBFF:
try:
l = s[i+1]
except IndexError:
return False
if 0xDC00 <= ord(l) <= 0xDFFF:
return True
else:
raise ValueError("Illegal UTF-16 sequence: %r" % s[i:i+2])
else:
return False
```
And a function that returns a simple slice:
```
def slice(s,start,end):
l=len(s)
i=0
while i<start and i<l:
if is_surrogate(s,i):
start+=1
end+=1
i+=1
i+=1
while i<end and i<l:
if is_surrogate(s,i):
end+=1
i+=1
i+=1
return s[start:end]
```
Here, the price you pay is performance, as these functions are much slower than built-ins:
```
>>> ux=u"a"*5000+u"\U00100000"*30000+u"b"*50000
>>> timeit.timeit('slice(ux,10000,100000)','from __main__ import slice,ux',number=1000)
46.44128203392029 #msec
>>> timeit.timeit('ux[10000:100000]','from __main__ import slice,ux',number=1000000)
8.814016103744507 #usec
``` |
SSLError: sslv3 alert handshake failure | 35,405,092 | 9 | 2016-02-15T08:59:24Z | 35,477,533 | 9 | 2016-02-18T09:30:16Z | [
"python",
"ssl",
"openssl",
"branch.io"
] | I'm making the following call to branch.io
```
import requests
req = requests.get('https://bnc.lt/m/H3XKyKB3Tq', verify=False)
```
It works fine in my local machine but fails in the server.
```
SSLError: [Errno 1] _ssl.c:504: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
```
**Openssl versions:**
local: OpenSSL 0.9.8zg 14 July 2015
server: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
**Python**:
local: 2.7.10
server: 2.7.6
**Branch io server connection**:
Chrome verified that DigiCert SHA2 Secure Server CA issued this website's certificate. The server did not supply any Certificate Transparency information.
I tried urllib2, but the result was same. Any help? | [Jyo de Lys](http://stackoverflow.com/users/5172809/jyo-de-lys) has identified the problem. The problem is described [here](http://docs.python-requests.org/en/master/community/faq/#what-are-hostname-doesn-t-match-errors) and the solution is [here](https://stackoverflow.com/questions/18578439/using-requests-with-tls-doesnt-give-sni-support/18579484#18579484). Following is what I did to get this working:
1. easy\_install pyOpenSSL
2. easy\_install ndg-httpsclient |
Reading a file line by line -- impact on disk? | 35,406,505 | 4 | 2016-02-15T10:09:13Z | 35,406,524 | 7 | 2016-02-15T10:10:12Z | [
"python",
"operating-system",
"filesystems",
"ssd",
"hdd"
] | I'm currently writing a python script that processes very large (> 10GB) files. As loading the whole file into memory is not an option, I'm right now reading and processing it line by line:
```
for line in f:
....
```
Once the script is finished it will be run fairly often, so I'm starting to think about what impact that sort of reading will have on my disk's lifespan.
Will the script *actually* read line by line or is there some kind of OS-powered buffering happening? If not, should I implement some kind of intermediary buffer myself? Is hitting the disk that often actually harmful? I remember reading something about BitTorrent wearing out disks quickly exactly because of that kind of bitwise reading/writing rather than operating with larger chunks of data.
I'm using both a HDD and an SSD in my test environment, so answers would be interesting for both systems. | Both your OS and Python use buffers to read data in larger chunks, for performance reasons. Your disk will not be materially impacted by reading a file line by line from Python.
Specifically, Python cannot give you individual lines without scanning ahead to find the line separators, so it'll read chunks, parse out individual lines, and each iteration will take lines from the buffer until another chunk must be read to find the next set of lines. The OS uses a [buffer cache](http://www.tldp.org/LDP/sag/html/buffer-cache.html) to help speed up I/O in general. |
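To make the buffering concrete: in a modern Python 3, the block size the interpreter uses is exposed as `io.DEFAULT_BUFFER_SIZE`, and you can request a larger buffer per file via `open`'s `buffering` argument. A small self-contained demo (it creates its own throwaway file; the 1 MiB figure is just an illustrative choice):

```python
import io
import os
import tempfile

# Create a throwaway file with many short lines.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    for i in range(10000):
        f.write("line %d\n" % i)

# Python fetches the file from the OS in blocks of this size,
# not one line at a time:
print(io.DEFAULT_BUFFER_SIZE)  # commonly 8192 bytes

# Line iteration still happens on top of a (here, enlarged) buffer:
count = 0
with open(path, "r", buffering=1024 * 1024) as f:  # 1 MiB buffer
    for line in f:
        count += 1
print(count)  # 10000

os.remove(path)
```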
_corrupt_record error when reading a JSON file into Spark | 35,409,539 | 7 | 2016-02-15T12:34:39Z | 35,409,851 | 10 | 2016-02-15T12:50:57Z | [
"python",
"json",
"dataframe",
"pyspark"
] | I've got this JSON file
```
{
"a": 1,
"b": 2
}
```
which has been obtained with Python's `json.dump` method.
Now, I want to read this file into a DataFrame in Spark, using pyspark. Following the documentation, I'm doing this:
```
sc = SparkContext()
sqlc = SQLContext(sc)
df = sqlc.read.json('my_file.json')
print df.show()
```
The print statement spits out this though:
```
+---------------+
|_corrupt_record|
+---------------+
| {|
| "a": 1, |
| "b": 2|
| }|
+---------------+
```
Anyone knows what's going on and why it is not interpreting the file correctly? | You need to have one json object per row in your input file, see <http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameReader.json>
If your json file looks like this it will give you the expected dataframe:
```
{ "a": 1, "b": 2 }
{ "a": 3, "b": 4 }
....
df.show()
+---+---+
| a| b|
+---+---+
| 1| 2|
| 3| 4|
+---+---+
``` |
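If regenerating the file isn't an option, a file written by `json.dump` can be rewritten into the one-object-per-line (JSON Lines) form Spark expects with a few lines of stdlib Python. A sketch: it recreates the question's file first so it is runnable as-is, and the filename handling is mine:

```python
import json

# Recreate the pretty-printed file from the question.
with open("my_file.json", "w") as f:
    json.dump({"a": 1, "b": 2}, f, indent=4)

# Rewrite it as JSON Lines: one compact JSON object per line.
with open("my_file.json") as src:
    data = json.load(src)
records = data if isinstance(data, list) else [data]
with open("my_file.json", "w") as dst:
    for record in records:
        dst.write(json.dumps(record) + "\n")

with open("my_file.json") as f:
    print(f.read())  # {"a": 1, "b": 2}
```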
TensorFlow: PlaceHolder error when using tf.merge_all_summaries() | 35,413,618 | 2 | 2016-02-15T15:52:35Z | 35,424,017 | 11 | 2016-02-16T04:51:34Z | [
"python",
"neural-network",
"tensorflow"
] | I am getting a placeholder error.
I do not know what it means, because I am mapping correctly on `sess.run(..., {_y: y, _X: X})`... I provide here a fully functional MWE reproducing the error:
```
import tensorflow as tf
import numpy as np
from sklearn.metrics import accuracy_score
def init_weights(shape):
return tf.Variable(tf.random_normal(shape, stddev=0.01))
class NeuralNet:
def __init__(self, hidden):
self.hidden = hidden
def __del__(self):
self.sess.close()
def fit(self, X, y):
_X = tf.placeholder('float', [None, None])
_y = tf.placeholder('float', [None, 1])
w0 = init_weights([X.shape[1], self.hidden])
b0 = tf.Variable(tf.zeros([self.hidden]))
w1 = init_weights([self.hidden, 1])
b1 = tf.Variable(tf.zeros([1]))
self.sess = tf.Session()
self.sess.run(tf.initialize_all_variables())
h = tf.nn.sigmoid(tf.matmul(_X, w0) + b0)
self.yp = tf.nn.sigmoid(tf.matmul(h, w1) + b1)
C = tf.reduce_mean(tf.square(self.yp - y))
o = tf.train.GradientDescentOptimizer(0.5).minimize(C)
correct = tf.equal(tf.argmax(_y, 1), tf.argmax(self.yp, 1))
accuracy = tf.reduce_mean(tf.cast(correct, "float"))
tf.scalar_summary("accuracy", accuracy)
tf.scalar_summary("loss", C)
merged = tf.merge_all_summaries()
import shutil
shutil.rmtree('logs')
writer = tf.train.SummaryWriter('logs', self.sess.graph_def)
for i in xrange(1000+1):
if i % 100 == 0:
res = self.sess.run([o, merged], feed_dict={_X: X, _y: y})
else:
self.sess.run(o, feed_dict={_X: X, _y: y})
return self
def predict(self, X):
yp = self.sess.run(self.yp, feed_dict={_X: X})
return (yp >= 0.5).astype(int)
X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([[0],[1],[1],[0]])
m = NeuralNet(10)
m.fit(X, y)
yp = m.predict(X)[:, 0]
print accuracy_score(y, yp)
```
The error:
```
I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
I tensorflow/core/common_runtime/direct_session.cc:58] Direct session inter op parallelism threads: 8
0.847222222222
W tensorflow/core/common_runtime/executor.cc:1076] 0x2340f40 Compute status: Invalid argument: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
[[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
W tensorflow/core/common_runtime/executor.cc:1076] 0x2340f40 Compute status: Invalid argument: You must feed a value for placeholder tensor 'Placeholder' with dtype float
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Traceback (most recent call last):
File "neuralnet.py", line 64, in <module>
m.fit(X[tr], y[tr, np.newaxis])
File "neuralnet.py", line 44, in fit
res = self.sess.run([o, merged], feed_dict={self._X: X, _y: y})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 368, in run
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 444, in _do_run
e.code)
tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
[[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'Placeholder_1', defined at:
File "neuralnet.py", line 64, in <module>
m.fit(X[tr], y[tr, np.newaxis])
File "neuralnet.py", line 16, in fit
_y = tf.placeholder('float', [None, 1])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 673, in placeholder
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 463, in _placeholder
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 664, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1834, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1043, in __init__
self._traceback = _extract_stack()
```
If I remove the `tf.merge_all_summaries()` or remove `merged` from `self.sess.run([o, merged], ...)` then it runs okay.
This looks similar to this post:
[Error when computing summaries in TensorFlow](http://stackoverflow.com/questions/35114376/error-when-computing-summaries-in-tensorflow)
However, I am not using iPython... | The `tf.merge_all_summaries()` function is convenient, but also somewhat dangerous: it merges **all summaries in the default graph**, which includes any summaries from previous—apparently unconnected—invocations of code that also added summary nodes to the default graph. If old summary nodes depend on an old placeholder, you will get errors like the one you have shown in your question (and like [previous](http://stackoverflow.com/q/35114376/3574081) [questions](http://stackoverflow.com/q/35116566/3574081) as well).
There are two independent workarounds:
1. Ensure that you explicitly collect the summaries that you wish to compute. This is as simple as using the explicit [`tf.merge_summary()`](https://www.tensorflow.org/versions/v0.6.0/api_docs/python/train.html#merge_summary) op in your example:
```
accuracy_summary = tf.scalar_summary("accuracy", accuracy)
loss_summary = tf.scalar_summary("loss", C)
merged = tf.merge_summary([accuracy_summary, loss_summary])
```
2. Ensure that each time you create a new set of summaries, you do so in a new graph. The recommended style is to use an explicit default graph:
```
with tf.Graph().as_default():
# Build model and create session in this scope.
#
# Only summary nodes created in this scope will be returned by a call to
# `tf.merge_all_summaries()`
```
Alternatively, if you are using the latest open-source version of TensorFlow (or the forthcoming 0.7.0 release), you can call [`tf.reset_default_graph()`](https://www.tensorflow.org/versions/master/api_docs/python/framework.html#reset_default_graph) to reset the state of the graph and remove any old summary nodes. |
Why is floating-point division in Python faster with smaller numbers? | 35,418,974 | 2 | 2016-02-15T21:00:11Z | 35,419,065 | 7 | 2016-02-15T21:04:54Z | [
"python",
"performance",
"math",
"floating-point",
"division"
] | In the process of answering [this question](http://stackoverflow.com/questions/35418072/why-does-division-become-faster-with-very-large-numbers/35418303), I came across something I couldn't explain.
Given the following Python 3.5 code:
```
import time
def di(n):
for i in range(10000000): n / 101
i = 10
while i < 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000:
start = time.clock()
di(i)
end = time.clock()
print("On " + str(i) + " " + str(end-start))
i *= 10000
```
The output is:
```
On 10 0.546889
On 100000 0.545004
On 1000000000 0.5454929999999998
On 10000000000000 0.5519709999999998
On 100000000000000000 1.330797
On 1000000000000000000000 1.31053
On 10000000000000000000000000 1.3393129999999998
On 100000000000000000000000000000 1.3524339999999997
On 1000000000000000000000000000000000 1.3817269999999997
On 10000000000000000000000000000000000000 1.3412670000000002
On 100000000000000000000000000000000000000000 1.3358929999999987
On 1000000000000000000000000000000000000000000000 1.3773859999999996
On 10000000000000000000000000000000000000000000000000 1.3326890000000002
On 100000000000000000000000000000000000000000000000000000 1.3704769999999993
On 1000000000000000000000000000000000000000000000000000000000 1.3235019999999995
On 10000000000000000000000000000000000000000000000000000000000000 1.357647
On 100000000000000000000000000000000000000000000000000000000000000000 1.3341190000000012
On 1000000000000000000000000000000000000000000000000000000000000000000000 1.326544000000002
On 10000000000000000000000000000000000000000000000000000000000000000000000000 1.3671139999999973
On 100000000000000000000000000000000000000000000000000000000000000000000000000000 1.3630120000000012
On 1000000000000000000000000000000000000000000000000000000000000000000000000000000000 1.3600200000000022
On 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000 1.3189189999999975
On 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 1.3503469999999993
```
As you can see, there are roughly two times: one for smaller numbers, and one for larger numbers.
The same result happens with Python 2.7 using the following function to preserve semantics:
```
def di(n):
for i in xrange(10000000): n / 101.0
```
On the same machine, I get:
```
On 10 0.617427
On 100000 0.61805
On 1000000000 0.6366
On 10000000000000 0.620919
On 100000000000000000 0.616695
On 1000000000000000000000 0.927353
On 10000000000000000000000000 1.007156
On 100000000000000000000000000000 0.98597
On 1000000000000000000000000000000000 0.99258
On 10000000000000000000000000000000000000 0.966753
On 100000000000000000000000000000000000000000 0.992684
On 1000000000000000000000000000000000000000000000 0.991711
On 10000000000000000000000000000000000000000000000000 0.994703
On 100000000000000000000000000000000000000000000000000000 0.978877
On 1000000000000000000000000000000000000000000000000000000000 0.982035
On 10000000000000000000000000000000000000000000000000000000000000 0.973266
On 100000000000000000000000000000000000000000000000000000000000000000 0.977911
On 1000000000000000000000000000000000000000000000000000000000000000000000 0.996857
On 10000000000000000000000000000000000000000000000000000000000000000000000000 0.972555
On 100000000000000000000000000000000000000000000000000000000000000000000000000000 0.985676
On 1000000000000000000000000000000000000000000000000000000000000000000000000000000000 0.987412
On 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0.997207
On 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0.970129
```
Why is there this consistent difference between floating point division of smaller vs. larger numbers? Does it have to do with Python internally using floats for smaller numbers and doubles for larger ones? | It has more to do with Python storing exact integers as Bignums.
In Python 2.7, computation of **integer *a*** / **float *fb*** starts by converting the integer to a float. If the integer is stored as a Bignum [Note 1] then this takes longer. So it's not the division that has differential cost; it is the conversion of the integer (possibly a Bignum) to a double.
Python 3 does the same computation for **integer *a*** / **float *fb***, but with **integer *a*** / **integer *b***, it tries to compute the closest representable result, which might differ slightly from the naive `float(a) / float(b)`. (This is similar to the classic double-rounding problem.)
If both `float(a)` and `float(b)` are precise (that is, both `a` and `b` are no larger than 53 bits), then the naive solution works, and the result only requires the division of two double-precision floats.
Otherwise, a multiprecision division is performed to generate the correct 53-bit mantissa (the exponent is computed separately), and the result is converted precisely to a floating point number. There are two possibilities for this division: a fast-track if `b` is small enough to fit in a single Bignum unit (which applies to the benchmark in the OP), and a slower, general Bignum division when `b` is larger.
In none of the above cases is the speed difference observed related to the speed with which the hardware performs floating point division. For the original Python 3.5 test, the difference relates to whether floating point or Bignum division is performed; for the Python 2.7 case, the difference relates to the necessity to convert a Bignum to a double.
Thanks to @MarkDickinson for the clarification, and the pointer to [the source code (with a long and useful comment)](https://github.com/python/cpython/blob/master/Objects/longobject.c#L3773-L3793) which implements the algorithm.
---
### Notes
1. In Python 3, integers are *always* stored as Bignums. Python 2 has separate types for `int` (64-bit integers) and `long` (Bignums). In practice, since Python 3 often uses optimized algorithms when the Bignum has only one "leg", the difference between "small" and "big" integers is still noticeable. |
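The 53-bit boundary discussed above is easy to see from Python 3 directly — a small illustration (assertions only, not a benchmark):

```python
# A double's significand holds 53 bits, so these two *distinct* integers
# collapse to the same float value under naive conversion:
assert float(2**53) == float(2**53 + 1)

# Python 3's int / int path still produces the correctly rounded result
# even when both operands are far beyond 53 bits:
assert (10**30) / (10**15) == 1e15

print("ok")
```

The second assertion holds because the exact quotient (10**15) is itself exactly representable as a double; for quotients that are not exact, the multiprecision algorithm described above still returns the closest representable float.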
Indexing Elements in a list of tuples | 35,419,979 | 2 | 2016-02-15T22:06:44Z | 35,420,068 | 9 | 2016-02-15T22:12:47Z | [
"python",
"indexing"
] | I have the following list, created by using the zip function on two separate lists of numbers:
```
[(20, 6),
(21, 4),
(22, 4),
(23, 2),
(24, 8),
(25, 3),
(26, 4),
(27, 4),
(28, 6),
(29, 2),
(30, 8)]
```
I would like to know if there is a way to iterate through this list and receive the number on the LHS that corresponds to the maximum value on the RHS, i.e. in this case I would like to get 24 and 30, which both have a value of 8 (the max value of the RHS). I have tried:
```
## get max of RHS
max_1 = max([i[1] for i in data])
## for max of LHS, index this location
## on RHS column
number = [i[1] for i in data].index(max_1)
```
but it doesn't work. | After:
```
max_1 = max([i[1] for i in data])
```
Try:
```
>>> number = [i[0] for i in data if i[1]==max_1]
>>> number
[24, 30]
``` |
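A variant of the same idea that computes the maximum with a generator expression (no intermediate list), equivalent to the comprehension above:

```python
data = [(20, 6), (21, 4), (22, 4), (23, 2), (24, 8), (25, 3),
        (26, 4), (27, 4), (28, 6), (29, 2), (30, 8)]

max_1 = max(rhs for _, rhs in data)               # max of the RHS values
number = [lhs for lhs, rhs in data if rhs == max_1]
print(number)  # [24, 30]
```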
Speeding up reading of very large netcdf file in python | 35,422,862 | 6 | 2016-02-16T02:57:38Z | 35,507,245 | 10 | 2016-02-19T14:06:56Z | [
"python",
"numpy",
"netcdf",
"dask",
"python-xarray"
] | I have a very large netCDF file that I am reading using netCDF4 in python
I cannot read this file all at once since its dimensions (1200 x 720 x 1440) are too big for the entire file to be in memory at once. The 1st dimension represents time, and the next 2 represent latitude and longitude respectively.
```
import netCDF4
nc_file = netCDF4.Dataset(path_file, 'r', format='NETCDF4')
for yr in years:
nc_file.variables[variable_name][int(yr), :, :]
```
However, reading one year at a time is excruciatingly slow. How do I speed this up for the use cases below?
--EDIT
The chunksize is 1
1. I can read a range of years: nc\_file.variables[variable\_name][0:100, :, :]
2. There are several use-cases:
```
for yr in years:
    numpy.ma.sum(nc_file.variables[variable_name][int(yr), :, :])
```
---
```
# Multiply each year by a 2D array of shape (720 x 1440)
for yr in years:
numpy.ma.sum(nc_file.variables[variable_name][int(yr), :, :] * arr_2d)
```
---
```
# Add 2 netcdf files together
for yr in years:
numpy.ma.sum(nc_file.variables[variable_name][int(yr), :, :] +
nc_file2.variables[variable_name][int(yr), :, :])
``` | I highly recommend that you take a look at the [`xarray`](https://github.com/pydata/xarray) and [`dask`](https://github.com/dask/dask) projects. Using these powerful tools will allow you to easily split up the computation in chunks. This brings up two advantages: you can compute on data which does not fit in memory, and you can use all of the cores in your machine for better performance. You can optimize the performance by appropriately choosing the chunk size (see [documentation](http://xarray.pydata.org/en/stable/dask.html)).
You can load your data from netCDF by doing something as simple as
```
import xarray as xr
ds = xr.open_dataset(path_file)
```
If you want to chunk your data in years along the time dimension, then you specify the `chunks` parameter (assuming that the year coordinate is named 'year'):
```
ds = xr.open_dataset(path_file, chunks={'year': 10})
```
Since the other coordinates do not appear in the `chunks` dict, then a single chunk will be used for them. (See more details in the documentation [here](http://xarray.pydata.org/en/stable/dask.html).). This will be useful for your first requirement, where you want to multiply each year by a 2D array. You would simply do:
```
ds['new_var'] = ds['var_name'] * arr_2d
```
Now, `xarray` and `dask` are computing your result *lazily*. In order to trigger the actual computation, you can simply ask `xarray` to save your result back to netCDF:
```
ds.to_netcdf(new_file)
```
The computation gets triggered through `dask`, which takes care of splitting the processing out in chunks and thus enables working with data that does not fit in memory. In addition, `dask` will take care of using all your processor cores for computing chunks.
The `xarray` and `dask` projects still don't handle nicely situations where chunks do not "align" well for parallel computation. Since in this case we chunked only in the 'year' dimension, we expect to have no issues.
If you want to add two different netCDF files together, it is as simple as:
```
ds1 = xr.open_dataset(path_file1, chunks={'year': 10})
ds2 = xr.open_dataset(path_file2, chunks={'year': 10})
(ds1 + ds2).to_netcdf(new_file)
```
I have provided a fully working example using [a dataset available online](https://www.unidata.ucar.edu/software/netcdf/examples/ECMWF_ERA-40_subset.nc).
```
In [1]:
import xarray as xr
import numpy as np
# Load sample data and strip out most of it:
ds = xr.open_dataset('ECMWF_ERA-40_subset.nc', chunks = {'time': 4})
ds.attrs = {}
ds = ds[['latitude', 'longitude', 'time', 'tcw']]
ds
Out[1]:
<xarray.Dataset>
Dimensions: (latitude: 73, longitude: 144, time: 62)
Coordinates:
* latitude (latitude) float32 90.0 87.5 85.0 82.5 80.0 77.5 75.0 72.5 ...
* longitude (longitude) float32 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 ...
* time (time) datetime64[ns] 2002-07-01T12:00:00 2002-07-01T18:00:00 ...
Data variables:
tcw (time, latitude, longitude) float64 10.15 10.15 10.15 10.15 ...
In [2]:
arr2d = np.ones((73, 144)) * 3.
arr2d.shape
Out[2]:
(73, 144)
In [3]:
myds = ds
myds['new_var'] = ds['tcw'] * arr2d
In [4]:
myds
Out[4]:
<xarray.Dataset>
Dimensions: (latitude: 73, longitude: 144, time: 62)
Coordinates:
* latitude (latitude) float32 90.0 87.5 85.0 82.5 80.0 77.5 75.0 72.5 ...
* longitude (longitude) float32 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 ...
* time (time) datetime64[ns] 2002-07-01T12:00:00 2002-07-01T18:00:00 ...
Data variables:
tcw (time, latitude, longitude) float64 10.15 10.15 10.15 10.15 ...
new_var (time, latitude, longitude) float64 30.46 30.46 30.46 30.46 ...
In [5]:
myds.to_netcdf('myds.nc')
xr.open_dataset('myds.nc')
Out[5]:
<xarray.Dataset>
Dimensions: (latitude: 73, longitude: 144, time: 62)
Coordinates:
* latitude (latitude) float32 90.0 87.5 85.0 82.5 80.0 77.5 75.0 72.5 ...
* longitude (longitude) float32 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 ...
* time (time) datetime64[ns] 2002-07-01T12:00:00 2002-07-01T18:00:00 ...
Data variables:
tcw (time, latitude, longitude) float64 10.15 10.15 10.15 10.15 ...
new_var (time, latitude, longitude) float64 30.46 30.46 30.46 30.46 ...
In [6]:
(myds + myds).to_netcdf('myds2.nc')
xr.open_dataset('myds2.nc')
Out[6]:
<xarray.Dataset>
Dimensions: (latitude: 73, longitude: 144, time: 62)
Coordinates:
* time (time) datetime64[ns] 2002-07-01T12:00:00 2002-07-01T18:00:00 ...
* latitude (latitude) float32 90.0 87.5 85.0 82.5 80.0 77.5 75.0 72.5 ...
* longitude (longitude) float32 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 ...
Data variables:
tcw (time, latitude, longitude) float64 20.31 20.31 20.31 20.31 ...
new_var (time, latitude, longitude) float64 60.92 60.92 60.92 60.92 ...
``` |
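If pulling in `xarray`/`dask` is not an option, the same chunk-at-a-time idea can be hand-rolled with plain NumPy — a minimal sketch with toy shapes (the names and shapes are illustrative, not the question's real (1200, 720, 1440) file):

```python
import numpy as np

# Toy stand-ins for the netCDF variable and the 2D weights array.
data = np.arange(24, dtype=float).reshape(6, 2, 2)   # (time, lat, lon)
arr_2d = 2.0 * np.ones((2, 2))

chunk = 3
totals = []
for start in range(0, data.shape[0], chunk):
    block = data[start:start + chunk]                # one slice read per chunk
    totals.extend((block * arr_2d).sum(axis=(1, 2)))

print(totals[0])   # (0 + 1 + 2 + 3) * 2 = 12.0
```

With a real file you would slice `nc_file.variables[variable_name][start:start + chunk, :, :]` the same way, so each disk read covers several years instead of one.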
finding streaks in pandas dataframe | 35,427,298 | 6 | 2016-02-16T08:24:08Z | 35,428,677 | 7 | 2016-02-16T09:33:02Z | [
"python",
"pandas",
"dataframe"
] | I have a pandas dataframe as follows:
```
time winner loser stat
1 A B 0
2 C B 0
3 D B 1
4 E B 0
5 F A 0
6 G A 0
7 H A 0
8 I A 1
```
Each row is a match result. The first column is the time of the match, the second and third columns contain the winner/loser, and the fourth column is one stat from the match.
I want to detect streaks of zeros for this stat per loser.
The expected result should look like this:
```
time winner loser stat streak
1 A B 0 1
2 C B 0 2
3 D B 1 0
4 E B 0 1
5 F A 0 1
6 G A 0 2
7 H A 0 3
8 I A 1 0
```
In pseudocode the algorithm should work like this:
* `.groupby` `loser` column.
* then iterate over each row of each `loser` group
* in each row, look at the `stat` column: if it contains `0`, then increment the `streak` value from the previous row by `1`. If it is not `0`, then start a new `streak`, that is, put `0` into the `streak` column.
So the `.groupby` is clear. But then I would need some sort of `.apply` where I can look at the previous row? this is where I am stuck. | You can [`apply`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html) custom function `f`, then [`cumsum`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html), [`cumcount`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html) and [`astype`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html):
```
def f(x):
    x['streak'] = (x.groupby((x['stat'] != 0).cumsum()).cumcount() +
                   ((x['stat'] != 0).cumsum() == 0).astype(int))
return x
df = df.groupby('loser', sort=False).apply(f)
print df
time winner loser stat streak
0 1 A B 0 1
1 2 C B 0 2
2 3 D B 1 0
3 4 E B 0 1
4 5 F A 0 1
5 6 G A 0 2
6 7 H A 0 3
7 8 I A 1 0
```
For better undestanding:
```
def f(x):
x['c'] = (x['stat'] != 0).cumsum()
x['a'] = (x['c'] == 0).astype(int)
x['b'] = x.groupby( 'c' ).cumcount()
x['streak'] = x.groupby( 'c' ).cumcount() + x['a']
return x
df = df.groupby('loser', sort=False).apply(f)
print df
time winner loser stat c a b streak
0 1 A B 0 0 1 0 1
1 2 C B 0 0 1 1 2
2 3 D B 1 1 0 0 0
3 4 E B 0 1 0 1 1
4 5 F A 0 0 1 0 1
5 6 G A 0 0 1 1 2
6 7 H A 0 0 1 2 3
7 8 I A 1 1 0 0 0
``` |
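For reference, the same streak logic can be written without the row-wise `apply`, grouping directly on a run label built with `transform` — a sketch (assumes a reasonably recent pandas):

```python
import pandas as pd

df = pd.DataFrame({
    'time':   [1, 2, 3, 4, 5, 6, 7, 8],
    'winner': list('ACDEFGHI'),
    'loser':  ['B', 'B', 'B', 'B', 'A', 'A', 'A', 'A'],
    'stat':   [0, 0, 1, 0, 0, 0, 0, 1],
})

# Label each run per loser: the counter bumps every time a non-zero stat appears.
block = df.groupby('loser')['stat'].transform(lambda s: s.ne(0).cumsum())

# Position within the run; a loser's first run has no leading non-zero row,
# so it needs the +1 correction (the astype(int) term).
df['streak'] = (df.groupby(['loser', block]).cumcount()
                + block.eq(0).astype(int))

print(df['streak'].tolist())  # [1, 2, 0, 1, 1, 2, 3, 0]
```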
Get the number of all keys in a dictionary of dictionaries in Python | 35,427,814 | 33 | 2016-02-16T08:50:37Z | 35,427,962 | 9 | 2016-02-16T08:59:00Z | [
"python",
"python-2.7",
"dictionary"
] | I have a dictionary of dictionaries in Python 2.7.
I need to quickly count the number of all keys, including the keys within each of the dictionaries.
So in this example I would need the number of all keys to be 6:
```
dict_test = {'key2': {'key_in3': 'value', 'key_in4': 'value'}, 'key1': {'key_in2': 'value', 'key_in1': 'value'}}
```
I know I can iterate through each key with for loops, but I am looking for a quicker way to do this, since I will have thousands/millions of keys and doing this is just inefficient:
```
count_the_keys = 0
for key in dict_test.keys():
for key_inner in dict_test[key].keys():
count_the_keys += 1
# something like this would be more effective
# of course .keys().keys() doesn't work
print len(dict_test.keys()) * len(dict_test.keys().keys())
``` | How about
```
n = sum([len(v)+1 for k, v in dict_test.items()])
```
What you are doing is iterating over all keys k and values v. The values v are your subdictionaries. You get the length of those dictionaries and add one to include the key used to index the subdictionary.
Afterwards you sum over the list to get the complete number of keys.
EDIT:
To clarify, this snippet works only for dictionaries of dictionaries as asked. Not dictionaries of dictionaries of dictionaries...
So do not use it on more deeply nested examples :) |
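Checked against the dictionary from the question (Python 3 spelling, so `values()` instead of `itervalues()`):

```python
dict_test = {'key2': {'key_in3': 'value', 'key_in4': 'value'},
             'key1': {'key_in2': 'value', 'key_in1': 'value'}}

# +1 counts the outer key itself alongside its sub-dictionary's keys.
n = sum(len(v) + 1 for v in dict_test.values())
print(n)  # 6
```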
Get the number of all keys in a dictionary of dictionaries in Python | 35,427,814 | 33 | 2016-02-16T08:50:37Z | 35,428,077 | 9 | 2016-02-16T09:05:01Z | [
"python",
"python-2.7",
"dictionary"
] | I have a dictionary of dictionaries in Python 2.7.
I need to quickly count the number of all keys, including the keys within each of the dictionaries.
So in this example I would need the number of all keys to be 6:
```
dict_test = {'key2': {'key_in3': 'value', 'key_in4': 'value'}, 'key1': {'key_in2': 'value', 'key_in1': 'value'}}
```
I know I can iterate through each key with for loops, but I am looking for a quicker way to do this, since I will have thousands/millions of keys and doing this is just inefficient:
```
count_the_keys = 0
for key in dict_test.keys():
for key_inner in dict_test[key].keys():
count_the_keys += 1
# something like this would be more effective
# of course .keys().keys() doesn't work
print len(dict_test.keys()) * len(dict_test.keys().keys())
``` | As a more general way you can use a recursion function and generator expression:
```
>>> def count_keys(dict_test):
... return sum(1+count_keys(v) if isinstance(v,dict) else 1 for _,v in dict_test.iteritems())
...
```
Example:
```
>>> dict_test = {'a': {'c': '2', 'b': '1', 'e': {'f': {1: {5: 'a'}}}, 'd': '3'}}
>>>
>>> count_keys(dict_test)
8
```
*Note*: In python 3.X use `dict.items()` method instead of `iteritems()`.
A benchmark against the accepted answer, which shows that this function is faster:
```
from timeit import timeit
s1 = """
def sum_keys(d):
return 0 if not isinstance(d, dict) else len(d) + sum(sum_keys(v) for v in d.itervalues())
sum_keys(dict_test)
"""
s2 = """
def count_keys(dict_test):
return sum(1+count_keys(v) if isinstance(v,dict) else 1 for _,v in dict_test.iteritems())
count_keys(dict_test)
"""
print '1st: ', timeit(stmt=s1,
number=1000000,
setup="dict_test = {'a': {'c': '2', 'b': '1', 'e': {'f': {1: {5: 'a'}}}, 'd': '3'}}")
print '2nd : ', timeit(stmt=s2,
number=1000000,
setup="dict_test = {'a': {'c': '2', 'b': '1', 'e': {'f': {1: {5: 'a'}}}, 'd': '3'}}")
```
result:
```
1st: 4.65556812286
2nd : 4.09120802879
``` |
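A Python 3 port of the same recursion (per the note above, using `items()`/`values()`):

```python
def count_keys(d):
    # Count each key once, plus all keys of any nested dictionary value.
    return sum(1 + count_keys(v) if isinstance(v, dict) else 1
               for v in d.values())

dict_test = {'a': {'c': '2', 'b': '1', 'e': {'f': {1: {5: 'a'}}}, 'd': '3'}}
print(count_keys(dict_test))  # 8
```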
Get the number of all keys in a dictionary of dictionaries in Python | 35,427,814 | 33 | 2016-02-16T08:50:37Z | 35,428,134 | 24 | 2016-02-16T09:07:36Z | [
"python",
"python-2.7",
"dictionary"
] | I have a dictionary of dictionaries in Python 2.7.
I need to quickly count the number of all keys, including the keys within each of the dictionaries.
So in this example I would need the number of all keys to be 6:
```
dict_test = {'key2': {'key_in3': 'value', 'key_in4': 'value'}, 'key1': {'key_in2': 'value', 'key_in1': 'value'}}
```
I know I can iterate through each key with for loops, but I am looking for a quicker way to do this, since I will have thousands/millions of keys and doing this is just inefficient:
```
count_the_keys = 0
for key in dict_test.keys():
for key_inner in dict_test[key].keys():
count_the_keys += 1
# something like this would be more effective
# of course .keys().keys() doesn't work
print len(dict_test.keys()) * len(dict_test.keys().keys())
``` | **Keeping it Simple**
If we know all the values are dictionaries, and do not wish to check that any of their values are also dictionaries, then it is as simple as:
```
len(dict_test) + sum(len(v) for v in dict_test.itervalues())
```
Refining it a little, to actually check that the values are dictionaries before counting them:
```
len(dict_test) + sum(len(v) for v in dict_test.itervalues() if isinstance(v, dict))
```
And finally, if you wish to do an arbitrary depth, something like the following:
```
def sum_keys(d):
return (0 if not isinstance(d, dict)
            else len(d) + sum(sum_keys(v) for v in d.itervalues()))
print sum_keys({'key2': {'key_in3': 'value', 'key_in4': 'value'},
'key1': {'key_in2': 'value',
'key_in1': dict(a=2)}})
# => 7
```
In this last case, we define a function that will be called recursively. Given a value `d`, we return either:
* `0` if that value is not a dictionary; or
* the number of keys in the dictionary, plus the total of keys in all of our children.
**Making it Faster**
The above is a succinct and easily understood approach. We can get a little faster using a generator:
```
def _counter(d):
# how many keys do we have?
yield len(d)
# stream the key counts of our children
for v in d.itervalues():
if isinstance(v, dict):
for x in _counter(v):
yield x
def count_faster(d):
return sum(_counter(d))
```
This gets us a bit more performance:
```
In [1]: %timeit sum_keys(dict_test)
100000 loops, best of 3: 4.12 µs per loop
In [2]: %timeit count_faster(dict_test)
100000 loops, best of 3: 3.29 µs per loop
``` |
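For very deep nesting, recursion depth can become a concern; an explicit stack sidesteps it — a Python 3 sketch (function name is my own):

```python
def count_keys_iterative(d):
    """Count keys at every nesting depth without recursion."""
    total = 0
    stack = [d]
    while stack:
        current = stack.pop()
        total += len(current)
        # Defer nested dictionaries instead of recursing into them.
        stack.extend(v for v in current.values() if isinstance(v, dict))
    return total

sample = {'key2': {'key_in3': 'value', 'key_in4': 'value'},
          'key1': {'key_in2': 'value', 'key_in1': {'a': 2}}}
print(count_keys_iterative(sample))  # 7
```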
How to get latest offset for a partition for a kafka topic? | 35,432,326 | 3 | 2016-02-16T12:13:02Z | 35,438,403 | 7 | 2016-02-16T16:53:50Z | [
"python",
"apache-kafka",
"kafka-consumer-api",
"python-kafka"
] | I am using the Python high level consumer for Kafka and want to know the latest offsets for each partition of a topic. However I cannot get it to work.
```
from kafka import TopicPartition
from kafka.consumer import KafkaConsumer
con = KafkaConsumer(bootstrap_servers = brokers)
ps = [TopicPartition(topic, p) for p in con.partitions_for_topic(topic)]
con.assign(ps)
for p in ps:
print "For partition %s highwater is %s"%(p.partition,con.highwater(p))
print "Subscription = %s"%con.subscription()
print "con.seek_to_beginning() = %s"%con.seek_to_beginning()
```
But the output I get is
```
For partition 0 highwater is None
For partition 1 highwater is None
For partition 2 highwater is None
For partition 3 highwater is None
For partition 4 highwater is None
For partition 5 highwater is None
....
For partition 96 highwater is None
For partition 97 highwater is None
For partition 98 highwater is None
For partition 99 highwater is None
Subscription = None
con.seek_to_beginning() = None
con.seek_to_end() = None
```
I have an alternate approach using `assign` but the result is the same
```
con = KafkaConsumer(bootstrap_servers = brokers)
ps = [TopicPartition(topic, p) for p in con.partitions_for_topic(topic)]
con.assign(ps)
for p in ps:
print "For partition %s highwater is %s"%(p.partition,con.highwater(p))
print "Subscription = %s"%con.subscription()
print "con.seek_to_beginning() = %s"%con.seek_to_beginning()
print "con.seek_to_end() = %s"%con.seek_to_end()
```
It seems from some of the documentation that I might get this behaviour if a `fetch` has not been issued. But I cannot find a way to force that. What am I doing wrong?
Or is there a different/simpler way to get the latest offsets for a topic? | Finally, after spending a day on this and several false starts, I was able to find a solution and get it working. Posting it here so that others may refer to it.
```
from kafka import KafkaClient
from kafka.protocol.offset import OffsetRequest, OffsetResetStrategy
from kafka.common import OffsetRequestPayload
client = KafkaClient(brokers)
partitions = client.topic_partitions[topic]
offset_requests = [OffsetRequestPayload(topic, p, -1, 1) for p in partitions.keys()]
offsets_responses = client.send_offset_request(offset_requests)
for r in offsets_responses:
print "partition = %s, offset = %s"%(r.partition, r.offsets[0])
``` |
Python set intersection is faster than Rust HashSet intersection | 35,439,376 | 7 | 2016-02-16T17:38:19Z | 35,440,146 | 10 | 2016-02-16T18:18:22Z | [
"python",
"rust",
"hashset"
] | Here is my Python code:
```
len_sums = 0
for i in xrange(100000):
set_1 = set(xrange(1000))
set_2 = set(xrange(500, 1500))
intersection_len = len(set_1.intersection(set_2))
len_sums += intersection_len
print len_sums
```
And here is my Rust code:
```
use std::collections::HashSet;
fn main() {
let mut len_sums = 0;
for _ in 0..100000 {
let set_1: HashSet<i32> = (0..1000).collect();
let set_2: HashSet<i32> = (500..1500).collect();
let intersection_len = set_1.intersection(&set_2).count();
len_sums += intersection_len;
}
println!("{}", len_sums);
}
```
I believe these are roughly equivalent. I get the following performance results:
```
time python set_performance.py
50000000
real 0m11.757s
user 0m11.736s
sys 0m0.012s
```
and
```
rustc set_performance.rs -O
time ./set_performance 50000000
real 0m17.580s
user 0m17.533s
sys 0m0.032s
```
(building with `cargo` and `--release` give the same result).
I realize that Python's `set` is implemented in C, and so is expected to be fast, but I did not expect it to be faster than Rust. Wouldn't it have to do extra type checking that Rust would not?
Perhaps I'm missing something in the way I compile my Rust program; are there any other optimization flags that I should be using?
Another possibility is that the code is not really equivalent, and Rust is doing unnecessary extra work, am I missing anything?
Python Version:
```
In [3]: import sys
In [4]: sys.version
Out[4]: '2.7.6 (default, Jun 22 2015, 17:58:13) \n[GCC 4.8.2]'
```
Rustc Version (I know 1.6 is out)
```
$ rustc --version
rustc 1.5.0 (3d7cd77e4 2015-12-04)
```
I am using `ubuntu 14.04` and my system architecture is x86\_64. | The performance problem boils down to the default hashing implementation of `HashMap` and `HashSet`. Rust's default hash algorithm is a good general-purpose one that also protects against certain types of DoS attacks. However, it doesn't work great for very small or very large amounts of data.
Some profiling showed that `make_hash<i32, std::collections::hash::map::RandomState>` was taking up about 41% of the total runtime. As of [Rust 1.7](http://blog.rust-lang.org/2016/03/02/Rust-1.7.html), you can choose which hashing algorithm to use. Switching to the [FNV hashing algorithm](https://crates.io/crates/fnv) speeds up the program considerably:
```
extern crate fnv;
use std::collections::HashSet;
use std::hash::BuildHasherDefault;
use fnv::FnvHasher;
fn main() {
let mut len_sums = 0;
for _ in 0..100000 {
let set_1: HashSet<i32, BuildHasherDefault<FnvHasher>> = (0..1000).collect();
let set_2: HashSet<i32, BuildHasherDefault<FnvHasher>> = (500..1500).collect();
let intersection_len = set_1.intersection(&set_2).count();
len_sums += intersection_len;
}
println!("{}", len_sums);
}
```
On my machine, this takes 2.714s compared to Python's 9.203s.
If you make the same [changes to move the set building out of the loop](http://stackoverflow.com/a/35440350/155423), the Rust code takes 0.829s compared to the Python code's 3.093s. |
Python set intersection is faster than Rust HashSet intersection | 35,439,376 | 7 | 2016-02-16T17:38:19Z | 35,440,350 | 9 | 2016-02-16T18:29:06Z | [
"python",
"rust",
"hashset"
] | Here is my Python code:
```
len_sums = 0
for i in xrange(100000):
set_1 = set(xrange(1000))
set_2 = set(xrange(500, 1500))
intersection_len = len(set_1.intersection(set_2))
len_sums += intersection_len
print len_sums
```
And here is my Rust code:
```
use std::collections::HashSet;
fn main() {
let mut len_sums = 0;
for _ in 0..100000 {
let set_1: HashSet<i32> = (0..1000).collect();
let set_2: HashSet<i32> = (500..1500).collect();
let intersection_len = set_1.intersection(&set_2).count();
len_sums += intersection_len;
}
println!("{}", len_sums);
}
```
I believe these are roughly equivalent. I get the following performance results:
```
time python set_performance.py
50000000
real 0m11.757s
user 0m11.736s
sys 0m0.012s
```
and
```
rustc set_performance.rs -O
time ./set_performance 50000000
real 0m17.580s
user 0m17.533s
sys 0m0.032s
```
(building with `cargo` and `--release` give the same result).
I realize that Python's `set` is implemented in C, and so is expected to be fast, but I did not expect it to be faster than Rust. Wouldn't it have to do extra type checking that Rust would not?
Perhaps I'm missing something in the way I compile my Rust program; are there any other optimization flags that I should be using?
Another possibility is that the code is not really equivalent, and Rust is doing unnecessary extra work, am I missing anything?
Python Version:
```
In [3]: import sys
In [4]: sys.version
Out[4]: '2.7.6 (default, Jun 22 2015, 17:58:13) \n[GCC 4.8.2]'
```
Rustc Version (I know 1.6 is out)
```
$ rustc --version
rustc 1.5.0 (3d7cd77e4 2015-12-04)
```
I am using `ubuntu 14.04` and my system architecture is x86\_64. | When I move the set-building out of the loop and only repeat the intersection (in both implementations, of course), Rust is faster than Python 2.7.
I've only been reading Python 3 [(setobject.c)](https://github.com/python/cpython/blob/master/Objects/setobject.c#L1274), but Python's implementation has some things going for it.
It uses the fact that both Python set objects use the same hash function, so it does not recompute the hash. Rust `HashSet`s have instance-unique keys for their hash functions, so during intersection they must rehash keys from one set with the other set's hash function.
On the other hand, Python must call out to a dynamic key comparison function like `PyObject_RichCompareBool` for each matching hash, while the Rust code uses generics and will specialize the hash function and comparison code for `i32`. The code for hashing an `i32` in Rust looks relatively cheap, and much of the hashing algorithm (handling longer input than 4 bytes) is removed.
---
It appears it's the construction of the sets that *sets* Python and Rust apart. And in fact not just construction, there's some significant code running to destruct the Rust `HashSet`s as well. (This can be improved, filed bug here: [#31711](https://github.com/rust-lang/rust/issues/31711)) |
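The "hoist construction out of the loop" point is easy to sanity-check on the Python side — same shape as the original benchmark, with a much smaller repeat count:

```python
# Built once; only the intersection is repeated work now.
set_1 = set(range(1000))
set_2 = set(range(500, 1500))

total = 0
for _ in range(100):
    total += len(set_1 & set_2)   # overlap is [500, 1000) -> 500 elements

print(total)  # 100 * 500 = 50000
```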
How to save in *.xlsx long URL in cell using Pandas | 35,440,528 | 7 | 2016-02-16T18:38:29Z | 35,492,577 | 8 | 2016-02-18T21:16:27Z | [
"python",
"excel",
"pandas"
] | For example I read excel file into DataFrame with 2 columns(id and URL). URLs in input file are like text(without hyperlinks):
```
input_f = pd.read_excel("input.xlsx")
```
Looking at what is inside this DataFrame - everything was successfully read, all URLs are OK in `input_f`. After that, when I want to save this file with to\_excel
```
input_f.to_excel("output.xlsx", index=False)
```
I got warning.
> **Path**\worksheet.py:836: UserWarning: Ignoring URL **'http:// here long URL'** with
> link or location/anchor > 255 characters since it exceeds Excel's
> limit for URLS force\_unicode(url))
And in output.xlsx the cells with long URLs were empty, and the other URLs became hyperlinks.
How to fix this? | You can create an ExcelWriter object with the option not to convert strings to urls:
```
writer = pandas.ExcelWriter(r'file.xlsx', engine='xlsxwriter',options={'strings_to_urls': False})
df.to_excel(writer)
writer.close()
``` |
How do I only print whole numbers in python script | 35,447,001 | 2 | 2016-02-17T02:53:02Z | 35,447,023 | 7 | 2016-02-17T02:55:48Z | [
"python",
"python-3.x"
] | I have a python script that, when run, will take the number `2048` and divide it by a range of numbers from `2` all the way up to `129` (but not including it) and will print what each division equals. As we know, some numbers go into `2048` evenly but some do not, so my question is: how can I make my script print only the whole-number results?
I have been able to figure it out, but I felt it was not the best way of doing it and it had some drawbacks. If you want, I can put that code into the question, but as I said, I did not feel it was the most logical way of doing it.
**Script.py**
```
user_input = 2048
user_input = str(user_input)
if user_input.isdigit():
user_input = int(user_input)
for num in range(2, 129):
y = user_input
x = user_input / num
x = str(x)
print(y, "/", num, "=", x)
else:
print("Please enter a whole number")
```
**Output of Script.py**
```
2048 / 2 = 1024.0
2048 / 3 = 682.6666666666666
2048 / 4 = 512.0
2048 / 5 = 409.6
2048 / 6 = 341.3333333333333
2048 / 7 = 292.57142857142856
2048 / 8 = 256.0
2048 / 9 = 227.55555555555554
2048 / 10 = 204.8
2048 / 11 = 186.1818181818182
2048 / 12 = 170.66666666666666
2048 / 13 = 157.53846153846155
2048 / 14 = 146.28571428571428
2048 / 15 = 136.53333333333333
2048 / 16 = 128.0
2048 / 17 = 120.47058823529412
2048 / 18 = 113.77777777777777
2048 / 19 = 107.78947368421052
2048 / 20 = 102.4
2048 / 21 = 97.52380952380952
2048 / 22 = 93.0909090909091
2048 / 23 = 89.04347826086956
2048 / 24 = 85.33333333333333
2048 / 25 = 81.92
2048 / 26 = 78.76923076923077
2048 / 27 = 75.85185185185185
2048 / 28 = 73.14285714285714
2048 / 29 = 70.62068965517241
2048 / 30 = 68.26666666666667
2048 / 31 = 66.06451612903226
2048 / 32 = 64.0
2048 / 33 = 62.06060606060606
2048 / 34 = 60.23529411764706
2048 / 35 = 58.51428571428571
2048 / 36 = 56.888888888888886
2048 / 37 = 55.351351351351354
2048 / 38 = 53.89473684210526
2048 / 39 = 52.51282051282051
2048 / 40 = 51.2
2048 / 41 = 49.951219512195124
2048 / 42 = 48.76190476190476
2048 / 43 = 47.627906976744185
2048 / 44 = 46.54545454545455
2048 / 45 = 45.51111111111111
2048 / 46 = 44.52173913043478
2048 / 47 = 43.57446808510638
2048 / 48 = 42.666666666666664
2048 / 49 = 41.795918367346935
2048 / 50 = 40.96
2048 / 51 = 40.15686274509804
2048 / 52 = 39.38461538461539
2048 / 53 = 38.64150943396226
2048 / 54 = 37.925925925925924
2048 / 55 = 37.236363636363635
2048 / 56 = 36.57142857142857
2048 / 57 = 35.92982456140351
2048 / 58 = 35.310344827586206
2048 / 59 = 34.71186440677966
2048 / 60 = 34.13333333333333
2048 / 61 = 33.57377049180328
2048 / 62 = 33.03225806451613
2048 / 63 = 32.507936507936506
2048 / 64 = 32.0
2048 / 65 = 31.50769230769231
2048 / 66 = 31.03030303030303
2048 / 67 = 30.567164179104477
2048 / 68 = 30.11764705882353
2048 / 69 = 29.681159420289855
2048 / 70 = 29.257142857142856
2048 / 71 = 28.845070422535212
2048 / 72 = 28.444444444444443
2048 / 73 = 28.054794520547944
2048 / 74 = 27.675675675675677
2048 / 75 = 27.30666666666667
2048 / 76 = 26.94736842105263
2048 / 77 = 26.5974025974026
2048 / 78 = 26.256410256410255
2048 / 79 = 25.924050632911392
2048 / 80 = 25.6
2048 / 81 = 25.28395061728395
2048 / 82 = 24.975609756097562
2048 / 83 = 24.674698795180724
2048 / 84 = 24.38095238095238
2048 / 85 = 24.094117647058823
2048 / 86 = 23.813953488372093
2048 / 87 = 23.54022988505747
2048 / 88 = 23.272727272727273
2048 / 89 = 23.01123595505618
2048 / 90 = 22.755555555555556
2048 / 91 = 22.505494505494507
2048 / 92 = 22.26086956521739
2048 / 93 = 22.021505376344088
2048 / 94 = 21.78723404255319
2048 / 95 = 21.557894736842105
2048 / 96 = 21.333333333333332
2048 / 97 = 21.11340206185567
2048 / 98 = 20.897959183673468
2048 / 99 = 20.68686868686869
2048 / 100 = 20.48
2048 / 101 = 20.277227722772277
2048 / 102 = 20.07843137254902
2048 / 103 = 19.883495145631066
2048 / 104 = 19.692307692307693
2048 / 105 = 19.504761904761907
2048 / 106 = 19.32075471698113
2048 / 107 = 19.14018691588785
2048 / 108 = 18.962962962962962
2048 / 109 = 18.788990825688074
2048 / 110 = 18.618181818181817
2048 / 111 = 18.45045045045045
2048 / 112 = 18.285714285714285
2048 / 113 = 18.123893805309734
2048 / 114 = 17.964912280701753
2048 / 115 = 17.808695652173913
2048 / 116 = 17.655172413793103
2048 / 117 = 17.504273504273506
2048 / 118 = 17.35593220338983
2048 / 119 = 17.210084033613445
2048 / 120 = 17.066666666666666
2048 / 121 = 16.925619834710744
2048 / 122 = 16.78688524590164
2048 / 123 = 16.650406504065042
2048 / 124 = 16.516129032258064
2048 / 125 = 16.384
2048 / 126 = 16.253968253968253
2048 / 127 = 16.125984251968504
2048 / 128 = 16.0
```
**Expected output**: I would like the numbers that are not convertible to integers to just be omitted. | If `user_input / num` gives a remainder of `0`, that `user_input` is evenly divisible by `num`. You can check this with the `%` (modulo) operator:
```
if user_input % num == 0:
print("{} / {} = {}".format(user_input, num, user_input // num))
```
---
Also, use `input()` to get the input as a string:
```
user_input = input("Enter a number:")
```
And, use `try` to try converting it to an integer, and `sys.exit` to exit the script if it isn't:
```
try:
user_input = int(user_input)
except ValueError:
print("Please input a whole number")
import sys
sys.exit(1)
``` |
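For completeness, the filtering can be wrapped in a small helper (the function name `exact_divisions` is mine, not from the question), so that only the exact divisions survive:

```python
def exact_divisions(n, limit=128):
    # keep only the divisors that leave no remainder
    return ["{} / {} = {}".format(n, d, n // d)
            for d in range(1, limit + 1) if n % d == 0]

for line in exact_divisions(2048):
    print(line)  # 2048 / 1 = 2048 ... 2048 / 128 = 16
```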
What can `__init__` do that `__new__` cannot? | 35,452,178 | 28 | 2016-02-17T09:09:05Z | 35,452,302 | 20 | 2016-02-17T09:13:33Z | [
"python",
"initialization"
] | In Python, `__new__` is used to initialize immutable types and `__init__` typically initializes mutable types. If `__init__` were removed from the language, what could no longer be done (easily)?
For example,
```
class A:
def __init__(self, *, x, **kwargs):
super().__init__(**kwargs)
self.x = x
class B(A):
def __init__(self, y=2, **kwargs):
super().__init__(**kwargs)
self.y = y
```
Could be rewritten using `__new__` like this:
```
class A_N:
def __new__(cls, *, x, **kwargs):
obj = super().__new__(cls, **kwargs)
obj.x = x
return obj
class B_N(A_N):
def __new__(cls, y=2, **kwargs):
obj = super().__new__(cls, **kwargs)
obj.y = y
return obj
```
---
Clarification for scope of question: This is not a question about how `__init__` and `__new__` are used or what is the difference between them. This is a question about what would happen if `__init__` were removed from the language. Would anything break? Would anything become a lot harder or impossible to do? | ## Note about difference between \_\_new\_\_ and \_\_init\_\_
Before explaining the missing functionality, let's get back to the definitions of `__new__` and `__init__`:
**\_\_new\_\_** is the first step of instance creation. It's called first, and is **responsible for returning a new instance** of your class.
However, **\_\_init\_\_ doesn't return anything**; it's only responsible for
initializing the instance after it's been created.
## Consequences of replacing \_\_init\_\_ to \_\_new\_\_
Mainly, you would lose flexibility. You would get a lot of semantic headaches and lose the separation of initialization and construction (by merging `__new__` and `__init__` we would be joining construction and initialization into one step...).
Let's take a look at the snippet below:
```
class A(object):
some_property = 'some_value'
def __new__(cls, *args, **kwargs):
obj = object.__new__(cls, *args, **kwargs)
obj.some_property = cls.some_property
return obj
class B(A):
some_property = 2
def __new__(cls, *args, **kwargs):
obj = super(B, cls).__new__(cls)
return obj
```
Consequences of moving `__init__` actions into `__new__`:
1. Initialize `B` before `A`: When you use the `__new__` method instead of `__init__`, your first step in creating a new instance of `B` is calling `A.__new__`; as a side effect, you cannot initialize `B` before `A` is initialized (access and assign some properties on the new `B` instance). Using **`__init__`** gives you that flexibility.
2. Lose control of the initialization order: imagine that `B_N` inherits from two classes (`A_N1`, `A_N2`); now you would lose control over the order in which the new `B_N` instance is initialized (in what order are you going to initialize the instances? The order can matter, which is weird.)
3. Properties and methods mess: you would lose access to `A.some_property` (`cls` would be equal to `B` while instantiating a new instance of `B`. Directly accessing `A.some_property` is possible, but my guess is that it's at least weird to access properties within a class through the class name rather than via `classmethod`s).
4. You cannot re-initialize an existing instance without creating a new one or implementing special logic for it (thanks to **@platinhom** for the idea).
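For example, point 4 can be seen directly: calling `__init__` by hand re-initializes an existing instance in place (a quick sketch; the class is made up):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
p.__init__(3, 4)   # same object, re-initialized in place
print(p.x, p.y)    # 3 4
```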
## What can `__init__` do that `__new__` cannot?
There are no actions that can be done in `__init__` but not in `__new__`, because the actions that `__init__` performs **are a subset** of the actions that can be performed by `__new__`.
An interesting note from [Python Docs, Pickling and unpickling normal class instances#object.**getinitargs**](https://docs.python.org/2/library/pickle.html#object.__getinitargs__) regarding when `__init__` could be useful:
> When a pickled class instance is unpickled, its **init**() method is normally not invoked. |
ImportError: cannot import name generic | 35,454,154 | 7 | 2016-02-17T10:32:29Z | 35,454,302 | 9 | 2016-02-17T10:39:30Z | [
"python",
"django",
"generics",
"django-1.9"
] | I am working with eav-django (entity-attribute-value) in Django 1.9. Whenever I execute the command `./manage.py runserver`, I get the error:
```
Unhandled exception in thread started by <function wrapper at 0x10385b500>
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/Django-1.9-py2.7.egg/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Library/Python/2.7/site-packages/Django-1.9-py2.7.egg/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/Library/Python/2.7/site-packages/Django-1.9-py2.7.egg/django/utils/autoreload.py", line 249, in raise_last_exception
six.reraise(*_exception)
File "/Library/Python/2.7/site-packages/Django-1.9-py2.7.egg/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Library/Python/2.7/site-packages/Django-1.9-py2.7.egg/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Library/Python/2.7/site-packages/Django-1.9-py2.7.egg/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/Library/Python/2.7/site-packages/Django-1.9-py2.7.egg/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/shakil_grofers/src/django-eav/eav/models.py", line 42, in <module>
from django.contrib.contenttypes import generic
```
I tried to import generic by adding:
```
from django.contrib.contenttypes import generic
```
in models.py. Then after few research I found out that generic has been deprecated in Django 1.7 and is no more in Django 1.9. Can anyone tell me in which other library this functionality has been added in Django 1.9 and how to use it? | Instead of `django.contrib.contenttypes.generic.GenericForeignKey` you can now use `django.contrib.contenttypes.fields.GenericForeignKey`:
<https://docs.djangoproject.com/en/1.9/ref/contrib/contenttypes/#generic-relations> |
Find all possible substrings begining with characters from capturing group | 35,457,288 | 9 | 2016-02-17T12:52:15Z | 35,457,628 | 13 | 2016-02-17T13:08:19Z | [
"python",
"regex"
] | I have for example the string `BANANA` and want to find all possible substrings beginning with a vowel. The result I need looks like this:
```
"A", "A", "A", "AN", "AN", "ANA", "ANA", "ANAN", "ANANA"
```
I tried this: `re.findall(r"([AIEOU]+\w*)", "BANANA")`
but it only finds `"ANANA"` which seems to be the longest match.
How can I find all the other possible substrings? | ```
s="BANANA"
vowels = 'AIEOU'
sorted(s[i:j] for i, x in enumerate(s) for j in range(i + 1, len(s) + 1) if x in vowels)
``` |
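Made runnable as a quick check against the example input:

```python
s = "BANANA"
vowels = 'AIEOU'
# every substring s[i:j] whose first character is a vowel
result = sorted(s[i:j] for i, x in enumerate(s)
                for j in range(i + 1, len(s) + 1) if x in vowels)
print(result)
# ['A', 'A', 'A', 'AN', 'AN', 'ANA', 'ANA', 'ANAN', 'ANANA']
```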
Setting up the EB CLI - error nonetype get_frozen_credentials | 35,462,346 | 12 | 2016-02-17T16:27:56Z | 35,463,220 | 13 | 2016-02-17T17:09:39Z | [
"python",
"amazon-web-services",
"command-line-interface",
"osx-elcapitan",
"aws-ec2"
] | ```
Select a default region
1) us-east-1 : US East (N. Virginia)
2) us-west-1 : US West (N. California)
3) us-west-2 : US West (Oregon)
4) eu-west-1 : EU (Ireland)
5) eu-central-1 : EU (Frankfurt)
6) ap-southeast-1 : Asia Pacific (Singapore)
7) ap-southeast-2 : Asia Pacific (Sydney)
8) ap-northeast-1 : Asia Pacific (Tokyo)
9) ap-northeast-2 : Asia Pacific (Seoul)
10) sa-east-1 : South America (Sao Paulo)
11) cn-north-1 : China (Beijing)
(default is 3):5
```
When I choose a number or just leave it blank.. the following error appears:
> ERROR: AttributeError :: 'NoneType' object has no attribute
> 'get\_frozen\_credentials'
after running eb init --debug:
> Traceback (most recent call last): File "/usr/local/bin/eb", line 11,
> in
> sys.exit(main()) File "/Library/Python/2.7/site-packages/ebcli/core/ebcore.py", line 149, in
> main
> app.run() File "/Library/Python/2.7/site-packages/cement/core/foundation.py", line
> 694, in run
> self.controller.\_dispatch()
> File "/Library/Python/2.7/site-packages/cement/core/controller.py", line
> 455, in \_dispatch
> return func()
> File "/Library/Python/2.7/site-packages/cement/core/controller.py", line
> 461, in \_dispatch
> return func()
> File "/Library/Python/2.7/site-packages/ebcli/core/abstractcontroller.py",
> line 57, in default
> self.do\_command()
> File "/Library/Python/2.7/site-packages/ebcli/controllers/initialize.py",
> line 67, in do\_command
> self.set\_up\_credentials()
> File "/Library/Python/2.7/site-packages/ebcli/controllers/initialize.py",
> line 152, in set\_up\_credentials
> if not initializeops.credentials\_are\_valid():
> File "/Library/Python/2.7/site-packages/ebcli/operations/initializeops.py",
> line 24, in credentials\_are\_valid
> elasticbeanstalk.get\_available\_solution\_stacks()
> File "/Library/Python/2.7/site-packages/ebcli/lib/elasticbeanstalk.py",
> line 239, in get\_available\_solution\_stacks
> result = \_make\_api\_call('list\_available\_solution\_stacks')
> File "/Library/Python/2.7/site-packages/ebcli/lib/elasticbeanstalk.py",
> line 37, in \_make\_api\_call
> \*\*operation\_options)
> File "/Library/Python/2.7/site-packages/ebcli/lib/aws.py", line 207, in make\_api\_call
> response\_data = operation(\*\*operation\_options)
> File "/Library/Python/2.7/site-packages/botocore/client.py", line 310, in \_api\_call
> return self.\_make\_api\_call(operation\_name, kwargs)
> File "/Library/Python/2.7/site-packages/botocore/client.py", line 396, in \_make\_api\_call
> operation\_model, request\_dict)
> File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 111, in make\_request
> return self.\_send\_request(request\_dict, operation\_model)
> File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 136, in \_send\_request
> request = self.create\_request(request\_dict, operation\_model)
> File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 120, in create\_request
> operation\_name=operation\_model.name)
> File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 226, in emit
> return self.\_emit(event\_name, kwargs)
> File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 209, in \_emit
> response = handler(\*\*kwargs)
> File "/Library/Python/2.7/site-packages/botocore/signers.py", line 90, in handler
> return self.sign(operation\_name, request)
> File "/Library/Python/2.7/site-packages/botocore/signers.py", line 123, in sign
> signature\_version)
> File "/Library/Python/2.7/site-packages/botocore/signers.py", line 153, in get\_auth\_instance
> kwargs['credentials'] = self.\_credentials.get\_frozen\_credentials()
> AttributeError: 'NoneType' object has no attribute 'get\_frozen\_credentials' | You got this error because you didn't initialize your `AWS Access Key ID` and `AWS Secret Access Key`
you should install first awscli by runing `pip install awscli`.
After you need to configure aws:
`aws configure`
After this you can run `eb init` |
Pythonic way of converting 'None' string to None | 35,467,607 | 4 | 2016-02-17T20:58:24Z | 35,467,901 | 7 | 2016-02-17T21:16:07Z | [
"python",
"yaml",
"nonetype"
] | Let's say I have the following yaml file and the value in the 4th row should be `None`. How can I convert the string `'None'` to `None`?
```
CREDENTIALS:
USERNAME: 'USERNAME'
PASSWORD: 'PASSWORD'
LOG_FILE_PATH: None <----------
```
I already have this:
```
config = yaml.safe_load(open(config_path, "r"))
username, password, log_file_path = (config['CREDENTIALS']['USERNAME'],
config['CREDENTIALS']['PASSWORD'],
config['LOG_FILE_PATH'])
```
I would like to know if there is a pythonic way to do this, instead of simply doing:
```
if log_file_path == 'None':
log_file_path = None
``` | The `None` value in your YAML file is not really kosher YAML: YAML represents null by the absence of a value (or an explicit `null`/`~`), so a literal `None` parses as the string `'None'`. If you just used proper YAML your troubles would be over:
```
In [7]: yaml.load("""
...: CREDENTIALS:
...: USERNAME: 'USERNAME'
...: PASSWORD: 'PASSWORD'
...: LOG_FILE_PATH:
...: """)
Out[7]:
{'CREDENTIALS': {'PASSWORD': 'PASSWORD', 'USERNAME': 'USERNAME'},
'LOG_FILE_PATH': None}
```
Notice how it read the absence of the `LOG_FILE_PATH` as `None` rather than `'None'`. |
Parsing CSV to chart stock ticker data | 35,475,722 | 2 | 2016-02-18T07:51:40Z | 35,476,044 | 14 | 2016-02-18T08:12:49Z | [
"python",
"parsing",
"csv"
] | I created a program that takes stock tickers, crawls the web to find a CSV of each ticker's historical prices, and plots them using matplotlib. Almost everything is working fine, but I've run into a problem parsing the CSV to separate out each price.
The error I get is:
> prices = [float(row[4]) for row in csv\_rows]
>
> IndexError: list index out of range
I get what the problem is here, I'm just not really sure how I should fix it.
(The issue is in the `parseCSV()` method)
```
# Loop to chart multiple stocks
def chartStocks(*tickers):
for ticker in tickers:
chartStock(ticker)
# Single chart stock method
def chartStock(ticker):
url = "http://finance.yahoo.com/q/hp?s=" + str(ticker) + "+Historical+Prices"
sourceCode = requests.get(url)
plainText = sourceCode.text
soup = BeautifulSoup(plainText, "html.parser")
csv = findCSV(soup)
parseCSV(csv)
# Find the CSV URL
def findCSV(soupPage):
CSV_URL_PREFIX = 'http://real-chart.finance.yahoo.com/table.csv?s='
links = soupPage.findAll('a')
for link in links:
href = link.get('href', '')
if href.startswith(CSV_URL_PREFIX):
return href
# Parse CSV for daily prices
def parseCSV(csv_text):
csv_rows = csv.reader(csv_text.split('\n'))
prices = [float(row[4]) for row in csv_rows]
days = list(range(len(prices)))
point = collections.namedtuple('Point', ['x', 'y'])
for price in prices:
i = 0
p = point(days[i], prices[i])
points = []
points.append(p)
print(points)
plotStock(points)
# Plot the data
def plotStock(points):
plt.plot(points)
plt.show()
``` | The problem is that `parseCSV()` expects a string containing CSV data, but it is actually being passed the *URL* of the CSV data, not the downloaded CSV data.
This is because `findCSV(soup)` returns the value of `href` for the CSV link found on the page, and then that value is passed to `parseCSV()`. The CSV reader finds a single undelimited row of data, so there is only one column, not the >4 that is expected.
At no point is the CSV data actually being downloaded.
You could write the first few lines of `parseCSV()` like this:
```
def parseCSV(csv_url):
r = requests.get(csv_url)
csv_rows = csv.reader(r.iter_lines())
``` |
Can I use index information inside the map function? | 35,481,061 | 20 | 2016-02-18T12:07:11Z | 35,481,114 | 9 | 2016-02-18T12:09:48Z | [
"python",
"python-2.7",
"map-function"
] | Let's assume there is a list `a = [1, 3, 5, 6, 8]`.
I want to apply some transformation on that list and I want to avoid doing it sequentially, so something like `map(someTransformationFunction, a)` would normally do the trick, but what if the transformation needs to have knowledge of the index of each object?
For example let's say that each element must be multiplied by its position. So the list should be transformed to `a = [0, 3, 10, 18, 32]`.
Is there a way to do that? | You can use `enumerate()`:
```
a = [1, 3, 5, 6, 8]
answer = map(lambda (idx, value): idx*value, enumerate(a))
print(answer)
```
**Output**
```
[0, 3, 10, 18, 32]
``` |
Can I use index information inside the map function? | 35,481,061 | 20 | 2016-02-18T12:07:11Z | 35,481,132 | 33 | 2016-02-18T12:10:18Z | [
"python",
"python-2.7",
"map-function"
] | Let's assume there is a list `a = [1, 3, 5, 6, 8]`.
I want to apply some transformation on that list and I want to avoid doing it sequentially, so something like `map(someTransformationFunction, a)` would normally do the trick, but what if the transformation needs to have knowledge of the index of each object?
For example let's say that each element must be multiplied by its position. So the list should be transformed to `a = [0, 3, 10, 18, 32]`.
Is there a way to do that? | Use the [`enumerate()`](https://docs.python.org/2/library/functions.html#enumerate) function to add indices:
```
map(function, enumerate(a))
```
Your function will be passed a *tuple*, with `(index, value)`. In Python 2, you can specify that Python unpack the tuple for you in the function signature:
```
map(lambda (i, el): i * el, enumerate(a))
```
Note the `(i, el)` tuple in the lambda argument specification. You can do the same in a `def` statement:
```
def mapfunction((i, el)):
return i * el
map(mapfunction, enumerate(a))
```
To make way for other function signature features such as annotations, tuple unpacking in function arguments has been removed from Python 3.
Demo:
```
>>> a = [1, 3, 5, 6, 8]
>>> def mapfunction((i, el)):
... return i * el
...
>>> map(lambda (i, el): i * el, enumerate(a))
[0, 3, 10, 18, 32]
>>> map(mapfunction, enumerate(a))
[0, 3, 10, 18, 32]
``` |
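Since tuple unpacking in function arguments was removed in Python 3, an equivalent there indexes the tuple instead (a sketch):

```python
a = [1, 3, 5, 6, 8]
# each pair is (index, element); multiply them without unpacking in the signature
result = list(map(lambda pair: pair[0] * pair[1], enumerate(a)))
print(result)  # [0, 3, 10, 18, 32]
```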
Django 1.9 JSONField update behavior | 35,483,301 | 4 | 2016-02-18T13:51:29Z | 35,484,277 | 10 | 2016-02-18T14:33:13Z | [
"python",
"json",
"django",
"postgresql"
] | I've recently updated to Django 1.9 and tried updating some of my model fields to use the built-in JSONField (I'm using PostgreSQL 9.4.5). As I was trying to create and update my object's fields, I came across something peculiar. Here is my model:
```
class Activity(models.Model):
activity_id = models.CharField(max_length=MAX_URL_LENGTH, db_index=True, unique=True)
my_data = JSONField(default=dict())
```
Here is an example of what I was doing:
```
>>> from proj import models
>>> test, created = models.Activity.objects.get_or_create(activity_id="foo")
>>> created
True
>>> test.my_data['id'] = "foo"
>>> test.save()
>>> test
<Activity: {"id": "foo"}>
>>> test2, created2 = models.Activity.objects.get_or_create(activity_id="bar")
>>> created2
True
>>> test2
<Activity: {"id": "foo"}>
>>> test2.activity_id
'bar'
>>> test.activity_id
'foo'
```
It seems whenever I update any field in `my_data`, the next object I create pre-populates itself with the data from `my_data` from the previous object. This happens whether I use `get_or_create` or just `create`. Can someone explain to me what is happening? | The problem is that you are using `default=dict()`. Python dictionaries are mutable. The default dictionary is created once when the models file is loaded. After that, any changes to `instance.my_data` alter the same instance, if they are using the default value.
The solution is to use the callable `dict` as the default instead of `dict()`.
```
class Activity(models.Model):
my_data = JSONField(default=dict)
```
The [JSONField docs](https://docs.djangoproject.com/es/1.9/ref/contrib/postgres/fields/#jsonfield) warn about this:
> If you give the field a `default`, ensure itâs a callable such as `dict` (for an empty default) or a callable that returns a `dict` (such as a function). Incorrectly using `default={}` creates a mutable default that is shared between all instances of `JSONField`. |
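The underlying pitfall is the same as mutable default arguments in plain Python, with no Django involved. A minimal illustration (the helper name is made up):

```python
def add_item(key, data=dict()):   # the default dict is created once, at def time
    data[key] = True
    return data

first = add_item("a")
second = add_item("b")            # reuses the same shared dict
print(second)                     # {'a': True, 'b': True}
```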
No module named 'polls.apps.PollsConfigdjango'; Django project tutorial 2 | 35,484,263 | 3 | 2016-02-18T14:32:46Z | 35,484,431 | 12 | 2016-02-18T14:39:09Z | [
"python",
"django"
] | So, I've been following the tutorial steps here <https://docs.djangoproject.com/en/1.9/intro/tutorial02/> and I got to the step where I am supposed to run this command: python manage.py makemigrations polls
When I run it, I get this error:
```
python manage.py makemigrations polls
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 2218, in_find_and_load_unlocked
AttributeError: 'module' object has no attribute '__path__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/tgumm/pythonenv/tutorial/lib/python3.4/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/home/tgumm/pythonenv/tutorial/lib/python3.4/site-packages/django/core/management/__init__.py", line 327, in execute
django.setup()
File "/home/tgumm/pythonenv/tutorial/lib/python3.4/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/tgumm/pythonenv/tutorial/lib/python3.4/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/home/tgumm/pythonenv/tutorial/lib/python3.4/site-packages/django/apps/config.py", line 116, in create
mod = import_module(mod_path)
File "/home/tgumm/pythonenv/tutorial/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2212, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2221, in _find_and_load_unlocked
ImportError: No module named 'polls.apps.PollsConfigdjango'; 'polls.apps' is not a package
```
Sorry, I'm new here.
```
from django.db import models
# Create your models here.
from django.db import models
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
class Choice(models.Model):
question = models.ForeignKey(Question, on_delete=models.CASCADE)
choice_text = models.CharField(max_length=200)
votes = models.IntegerField(default=0)
``` | The first problem is this warning in the traceback:
```
No module named 'polls.apps.PollsConfigdjango'
```
That means that you are missing a comma after `'polls.apps.PollsConfig'` in your `INSTALLED_APPS` setting. It should be:
```
INSTALLED_APPS = (
...
'polls.apps.PollsConfig',
'django....',
...
)
```
The second problem is the warning `'polls.apps' is not a package`. That suggests that you have installed Django 1.8, but you are following the Django 1.9 tutorial.
If you are using Django 1.8, then follow the 1.8 tutorial so that you don't hit problems like this. Adding the polls app to `INSTALLED_APPS` is [covered here](https://docs.djangoproject.com/en/1.8/intro/tutorial01/#activating-models) in the Django 1.8 tutorial. Note that it doesn't use `PollsConfig`.
```
INSTALLED_APPS = (
...
'polls',
)
``` |
Dictionary in Python which keeps the last x accessed keys | 35,489,367 | 2 | 2016-02-18T18:17:46Z | 35,489,422 | 7 | 2016-02-18T18:20:28Z | [
"python",
"python-3.x"
] | Is there a dictionary in python which will only keep the most recently accessed keys. Specifically, I am caching relatively large blobs of data in a dictionary, and I am looking for a way of preventing the dictionary from ballooning in size, and to drop to the variables which were only accessed a long time ago [i.e. to only keep the say the 1000 most recently accessed keys - and when a new key gets added, to drop the key that was accessed the longest ago].
I suspect this is not part of the standard dictionary class, but am hoping there is something analogous. | Sounds like you want a Least Recently Used (LRU) cache.
Here's a Python implementation already: <https://pypi.python.org/pypi/lru-dict/>
Here's another one: <https://www.kunxi.org/blog/2014/05/lru-cache-in-python/> |
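If you'd rather not pull in a dependency, a minimal LRU dictionary can be sketched on top of `collections.OrderedDict` (the class name and default size here are illustrative, not from any library):

```python
from collections import OrderedDict

class LRUDict:
    def __init__(self, maxsize=1000):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def __setitem__(self, key, value):
        if key in self._data:
            del self._data[key]
        elif len(self._data) >= self.maxsize:
            self._data.popitem(last=False)   # evict the least recently used key
        self._data[key] = value

    def __getitem__(self, key):
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def __contains__(self, key):
        return key in self._data

cache = LRUDict(maxsize=2)
cache["a"] = 1
cache["b"] = 2
cache["a"]            # touching "a" makes "b" the oldest entry
cache["c"] = 3        # evicts "b"
print("b" in cache)   # False
```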
Find the unique element in an unordered array consisting of duplicates | 35,496,145 | 3 | 2016-02-19T02:15:09Z | 35,496,175 | 7 | 2016-02-19T02:17:48Z | [
"python",
"arrays",
"algorithm",
"big-o"
] | For example, if L = [1,4,2,6,4,3,2,6,3], then we want 1 as the unique element. Here's pseudocode of what I had in mind:
initialize a dictionary to store number of occurrences of each element: ~O(n),
look through the dictionary to find the element whose value is 1: ~O(n)
This ensures that the total time complexity stays O(n). Does this seem like the right idea?
Also, if the array was sorted, say for example, how would the time complexity change? I'm thinking it would be some variation of binary search which would reduce it to O(log n). | You can use [`collections.Counter`](https://docs.python.org/2/library/collections.html#collections.Counter)
```
from collections import Counter
uniques = [k for k, cnt in Counter(L).items() if cnt == 1]
```
Complexity will always be O(n). You only ever need to traverse the list once (which is what `Counter` is doing). Sorting doesn't matter, since dictionary assignment is always O(1). |
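Applied to the example list, only `1` survives the count filter:

```python
from collections import Counter

L = [1, 4, 2, 6, 4, 3, 2, 6, 3]
uniques = [k for k, cnt in Counter(L).items() if cnt == 1]
print(uniques)  # [1]
```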
Find the unique element in an unordered array consisting of duplicates | 35,496,145 | 3 | 2016-02-19T02:15:09Z | 35,496,208 | 7 | 2016-02-19T02:21:43Z | [
"python",
"arrays",
"algorithm",
"big-o"
] | For example, if L = [1,4,2,6,4,3,2,6,3], then we want 1 as the unique element. Here's pseudocode of what I had in mind:
initialize a dictionary to store number of occurrences of each element: ~O(n),
look through the dictionary to find the element whose value is 1: ~O(n)
This ensures that the total time complexity stays O(n). Does this seem like the right idea?
Also, if the array was sorted, say for example, how would the time complexity change? I'm thinking it would be some variation of binary search which would reduce it to O(log n). | There is a very simple-looking solution that is O(n): XOR elements of your sequence together using the `^` operator. The end value of the variable will be the value of the unique number.
The proof is simple: XOR-ing a number with itself yields zero, so since each number except one contains its own duplicate, the net result of XOR-ing them all would be zero. XOR-ing the unique number with zero yields the number itself. |
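The whole idea fits in one line with `reduce` (imported from `functools` on Python 3):

```python
from functools import reduce

L = [1, 4, 2, 6, 4, 3, 2, 6, 3]
unique = reduce(lambda a, b: a ^ b, L)   # duplicates cancel pairwise
print(unique)  # 1
```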
Merging lists of lists | 35,503,922 | 4 | 2016-02-19T11:13:32Z | 35,503,997 | 7 | 2016-02-19T11:17:15Z | [
"python",
"list"
] | I have two lists of lists that have equivalent numbers of items. The two lists look like this:
`L1 = [[1, 2], [3, 4], [5, 6, 7]]`
`L2 =[[a, b], [c, d], [e, f, g]]`
I am looking to create one list that looks like this:
`Lmerge = [[[a, 1], [b,2]], [[c,3], [d,4]], [[e,5], [f,6], [g,7]]]`
I was attempting to use `map()` :
`map(list.__add__, L1, L2)` but the output produces a flat list.
What is the best way to combine two lists of lists? Thanks in advance. | You can `zip` the lists and then `zip` the resulting tuples again...
```
>>> L1 = [[1, 2], [3, 4], [5, 6, 7]]
>>> L2 =[['a', 'b'], ['c', 'd'], ['e', 'f', 'g']]
>>> [list(zip(a,b)) for a,b in zip(L2, L1)]
[[('a', 1), ('b', 2)], [('c', 3), ('d', 4)], [('e', 5), ('f', 6), ('g', 7)]]
```
If you need lists all the way down, combine with `map`:
```
>>> [list(map(list, zip(a,b))) for a,b in zip(L2, L1)]
[[['a', 1], ['b', 2]], [['c', 3], ['d', 4]], [['e', 5], ['f', 6], ['g', 7]]]
``` |
Break statement in finally block swallows exception | 35,505,624 | 15 | 2016-02-19T12:41:38Z | 35,505,895 | 25 | 2016-02-19T12:56:12Z | [
"python"
] | Consider:
```
def raiseMe( text="Test error" ):
raise Exception( text )
def break_in_finally_test():
for i in range(5):
if i==2:
try:
raiseMe()
except:
raise
else:
print "succeeded!"
finally:
print "testing this!"
break
if __name__=='__main__':
break_in_finally_test()
```
I expected to see `Exception( "Test error" )` to be raised, but instead only "testing this" is printed. The intention, of course, was to call `raiseMe()` only once, no matter if we succeed or not - but if it raises an exception, I would have wanted to see that!
Why does break swallow the exception that I explicitly raise? | From <https://docs.python.org/2.7/reference/compound_stmts.html#finally>:
> If finally is present, it specifies a âcleanupâ handler. The try clause is
> executed, including any except and else clauses. If an exception occurs in
> any of the clauses and is not handled, the exception is temporarily saved.
> The finally clause is executed. If there is a saved exception, it is
> re-raised at the end of the finally clause. **If the finally clause raises
> another exception or executes a return or break statement, the saved
> exception is discarded**
This also reflects the behaviour expected from the `try...finally` statement before [PEP341](https://www.python.org/dev/peps/pep-0341/):
This is how a try except finally block looked like pre PEP341:
```
try:
try:
raiseMe()
except:
raise
finally:
#here is where cleanup is supposed to happen before raising error
break
#after finally code: raise error
```
Since the re-raising of the error never happens inside the `finally` block, it is never actually raised.
To maintain backwards compatibility with Python versions 2.4 and earlier, it had to be done this way.
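If the intent is to run the loop body at most once while still seeing the exception, one option (the function names here are mine) is to move the `break` out of the `finally` clause. A sketch:

```python
def raise_me(text="Test error"):
    raise Exception(text)

def break_outside_finally():
    for i in range(5):
        if i == 2:
            try:
                raise_me()
            finally:
                print("cleanup still runs")
            break  # only reached when no exception propagated

try:
    break_outside_finally()
    caught = None
except Exception as e:
    caught = str(e)
print(caught)  # Test error
```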
Python - is there a way to make all strings unicode in a project by default? | 35,506,776 | 5 | 2016-02-19T13:42:19Z | 35,506,803 | 7 | 2016-02-19T13:43:44Z | [
"python",
"unicode",
"internationalization"
] | Instead of typing u in front of each string?
...and some more text to keep stackoverflow happy | Yes, use `from __future__ import unicode_literals`
```
>>> from __future__ import unicode_literals
>>> s = 'hi'
>>> type(s)
<type 'unicode'>
```
In Python 3, strings are unicode strings by default. |
Python endswith() | 35,510,787 | 3 | 2016-02-19T17:04:36Z | 35,510,830 | 11 | 2016-02-19T17:06:31Z | [
"python",
"ends-with"
] | I have a string:
```
myStr = "Chicago Blackhawks vs. New York Rangers"
```
I also have a list:
```
myList = ["Toronto Maple Leafs", "New York Rangers"]
```
Using the endswith() method, I want to write an if statement that checks to see if the myString has ends with either of the strings in the myList. I have the basic if statement, but I am confused on what I should put in the parentheses to check this.
```
if myStr.endswith():
print("Success")
``` | Just convert your list to `tuple` and pass it to `endswith()` method:
```
>>> myStr = "Chicago Blackhawks vs. New York Rangers"
>>>
>>> myList = ["Toronto Maple Leafs", "New York Rangers"]
>>>
>>> myStr.endswith(tuple(myList))
True
```
> [str.endswith(suffix[, start[, end]])](https://docs.python.org/3/library/stdtypes.html#str.endswith)
>
> Return True if the string ends with the specified suffix, otherwise return False. **suffix can also be a tuple of suffixes** to look
> for. With optional start, test beginning at that position. With
> optional end, stop comparing at that position. |
Python: Merge list with range list | 35,511,010 | 6 | 2016-02-19T17:15:28Z | 35,511,057 | 9 | 2016-02-19T17:18:29Z | [
"python",
"list",
"python-2.7",
"range",
"list-comprehension"
] | I have a `list`:
```
L = ['a', 'b']
```
I need to create a new `list` by concatenating the original `list` with a range from `1` to `k`, this way:
```
k = 4
L1 = ['a1','b1', 'a2','b2','a3','b3','a4','b4']
```
I try:
```
l1 = L * k
print l1
#['a', 'b', 'a', 'b', 'a', 'b', 'a', 'b']
l = [ [x] * 2 for x in range(1, k + 1) ]
print l
#[[1, 1], [2, 2], [3, 3], [4, 4]]
l2 = [item for sublist in l for item in sublist]
print l2
#[1, 1, 2, 2, 3, 3, 4, 4]
print zip(l1,l2)
#[('a', 1), ('b', 1), ('a', 2), ('b', 2), ('a', 3), ('b', 3), ('a', 4), ('b', 4)]
print [x+ str(y) for x,y in zip(l1,l2)]
#['a1', 'b1', 'a2', 'b2', 'a3', 'b3', 'a4', 'b4']
```
But I think it is very complicated.
What is the fastest and more generic solution? | You can use a list comprehension:
```
L = ['a', 'b']
k = 4
L1 = ['{}{}'.format(x, y) for y in range(1, k+1) for x in L]
print(L1)
```
**Output**
```
['a1', 'b1', 'a2', 'b2', 'a3', 'b3', 'a4', 'b4']
``` |
Python: Read hex from file into list? | 35,516,183 | 6 | 2016-02-19T22:22:34Z | 35,516,257 | 7 | 2016-02-19T22:27:03Z | [
"python",
"hex"
] | Is there a simple way to, in Python, read a file's hexadecimal data into a list, say `hex`?
So `hex` would be this:
`hex = ['AA','CD','FF','0F']`
*I don't want to have to read into a string, then split. This is memory intensive for large files.* | ```
s = "Hello"
hex_list = ["{:02x}".format(ord(c)) for c in s]
```
Output
```
['48', '65', '6c', '6c', '6f']
```
Just change `s` to `open(filename).read()` and you should be good.
```
with open('/path/to/some/file', 'r') as fp:
hex_list = ["{:02x}".format(ord(c)) for c in fp.read()]
```
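If the file itself is too large to read in one go, a chunked generator avoids loading it at all (a sketch; `iter_hex` and the chunk size are my own names/choices, not from the question):

```python
from functools import partial

def iter_hex(path, chunk_size=4096):
    # Read the file in fixed-size chunks so neither the file
    # contents nor the resulting hex list must fit in memory at once.
    with open(path, 'rb') as fp:
        for chunk in iter(partial(fp.read, chunk_size), b''):
            for byte in bytearray(chunk):
                yield format(byte, '02x')
```

`iter(partial(fp.read, chunk_size), b'')` keeps calling `read` until it returns an empty bytes object.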
---
Or, if you do not want to keep the whole list in memory at once for large files.
```
hex_list = ("{:02x}".format(ord(c)) for c in fp.read())
```
and to get the values, keep calling
```
next(hex_list)
```
to get all the remaining values from the generator
```
list(hex_list)
``` |
Using Python Higher Order Functions to Manipulate Lists | 35,530,782 | 7 | 2016-02-21T00:24:42Z | 35,530,831 | 8 | 2016-02-21T00:29:56Z | [
"python",
"lambda",
"reduce"
] | I've made this list; each item is a string that contains commas (in some cases) and colon (always):
```
dinner = [
'cake,peas,cheese : No',
'duck,broccoli,onions : Maybe',
'motor oil : Definitely Not',
'pizza : Damn Right',
'ice cream : Maybe',
'bologna : No',
'potatoes,bacon,carrots,water: Yes',
'rats,hats : Definitely Not',
'seltzer : Yes',
'sleeping,whining,spitting : No Way',
'marmalade : No'
]
```
I would like to create a new list from the one above as follows:
```
['cake : No',
'peas : No',
'cheese : No',
'duck : Maybe',
'broccoli : Maybe',
'onions : Maybe',
'motor oil : Definitely Not',
'pizza : Damn Right',
'ice cream : Maybe',
'bologna : No',
'potatoes : Yes',
'bacon : Yes',
'carrots : Yes',
'water : Yes',
'rats : Definitely Not',
'hats : Definitely Not',
'seltzer : Yes',
'sleeping : No Way',
'whining : No Way',
'spitting : No Way',
'marmalade : No']
```
But I'd like to know if/how it's possible to do so in a line or two of efficient code employing primarily Python's higher order functions. I've been attempting it:
`reduce(lambda x,y: x + y, (map(lambda x: x.split(':')[0].strip().split(','), dinner)))`
...produces this:
```
['cake',
'peas',
'cheese',
'duck',
'broccoli',
'onions',
'motor oil',
'pizza',
'ice cream',
'bologna',
'potatoes',
'bacon',
'carrots',
'water',
'rats',
'hats',
'seltzer',
'sleeping',
'whining',
'spitting',
'marmalade']
```
...but I'm struggling with appending the piece of each string after the colon back onto each item. | I would create a dict using `zip`, `map` and [`itertools.repeat`](https://docs.python.org/3/library/itertools.html#itertools.repeat):
```
from itertools import repeat
data = ({k.strip(): v.strip() for _k, _v in map(lambda x: x.split(":"), dinner)
for k, v in zip(_k.split(","), repeat(_v))})
from pprint import pprint as pp
pp(data)
```
Output:
```
{'bacon': 'Yes',
'bologna': 'No',
'broccoli': 'Maybe',
'cake': 'No',
'carrots': 'Yes',
'cheese': 'No',
'duck': 'Maybe',
'hats': 'Definitely Not',
'ice cream': 'Maybe',
'marmalade': 'No',
'motor oil': 'Definitely Not',
'onions': 'Maybe',
'peas': 'No',
'pizza': 'Damn Right',
'potatoes': 'Yes',
'rats': 'Definitely Not',
'seltzer': 'Yes',
'sleeping': 'No Way',
'spitting': 'No Way',
'water': 'Yes',
'whining': 'No Way'}
```
Or using the dict constructor:
```
from itertools import repeat
data = dict(map(str.strip, t) for _k, _v in map(lambda x: x.split(":"), dinner)
for t in zip(_k.split(","), repeat(_v)))
from pprint import pprint as pp
pp(data)
```
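For comparison, a plain loop building the same mapping may be easier to read (a sketch; shown with a trimmed-down `dinner` list so it runs standalone):

```python
# A short sample of the data so the sketch is self-contained.
dinner = ['cake,peas,cheese : No', 'pizza : Damn Right']

data = {}
for entry in dinner:
    foods, _, verdict = entry.partition(':')
    for food in foods.split(','):
        data[food.strip()] = verdict.strip()

print(data)  # {'cake': 'No', 'peas': 'No', 'cheese': 'No', 'pizza': 'Damn Right'}
```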
If you really want a list of strings, we can do something similar using [`itertools.chain`](https://docs.python.org/3/library/itertools.html#itertools.chain) and joining the substrings:
```
from itertools import repeat, chain
data = chain.from_iterable(map(":".join, zip(_k.split(","), repeat(_v)))
for _k, _v in map(lambda x: x.split(":"), dinner))
from pprint import pprint as pp
pp(list(data))
```
Output:
```
['cake: No',
'peas: No',
'cheese : No',
'duck: Maybe',
'broccoli: Maybe',
'onions : Maybe',
'motor oil : Definitely Not',
'pizza : Damn Right',
'ice cream : Maybe',
'bologna : No',
'potatoes: Yes',
'bacon: Yes',
'carrots: Yes',
'water: Yes',
'rats: Definitely Not',
'hats : Definitely Not',
'seltzer : Yes',
'sleeping: No Way',
'whining: No Way',
'spitting : No Way',
'marmalade : No']
``` |
installing pip using get_pip.py SNIMissingWarning | 35,535,566 | 6 | 2016-02-21T11:33:35Z | 36,607,157 | 13 | 2016-04-13T18:54:51Z | [
"python",
"python-2.7",
"pip"
] | I am trying to install pip on my Mac (Yosemite 10.10.5) using the get\_pip.py file, but I am having the following issue
```
Bachirs-MacBook-Pro:Downloads bachiraoun$ sudo python get-pip.py
The directory '/Users/bachiraoun/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/bachiraoun/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting pip
/tmp/tmpOofplD/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:315: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.
/tmp/tmpOofplD/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [Errno 1] _ssl.c:510: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm - skipping
Could not find a version that satisfies the requirement pip (from versions: )
No matching distribution found for pip
```
According to my error message and [urllib3](https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning), my problem is because I have a python installation version earlier than 2.7.9, but my python is 2.7.10 as you can see
```
Bachirs-MacBook-Pro:docs bachiraoun$ python
Python 2.7.10 (default, Jul 14 2015, 19:46:27)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.version_info
sys.version_info(major=2, minor=7, micro=10, releaselevel='final', serial=0)
>>>
```
I verified my openssl installation and it seems to be ok
```
Bachirs-MacBook-Pro:docs bachiraoun$ brew install openssl
Warning: openssl-1.0.2f already installed
```
Not sure how to fix this; any idea? | You need to install:
```
pip install pyopenssl ndg-httpsclient pyasn1
```
link:
<http://urllib3.readthedocs.org/en/latest/security.html#openssl-pyopenssl>
By default, we use the standard library's ssl module. Unfortunately, there are several limitations which are addressed by PyOpenSSL:
(Python 2.x) SNI support.
(Python 2.x-3.2) Disabling compression to mitigate CRIME attack.
To use the Python OpenSSL bindings instead, you'll need to install the required packages:
```
pip install pyopenssl ndg-httpsclient pyasn1
``` |
Seaborn boxplot + stripplot: duplicate legend | 35,538,882 | 6 | 2016-02-21T16:48:03Z | 35,539,098 | 8 | 2016-02-21T17:05:56Z | [
"python",
"matplotlib",
"legend",
"seaborn"
] | One of the coolest things you can easily make in `seaborn` is `boxplot` + `stripplot` combination:
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
tips = sns.load_dataset("tips")
sns.stripplot(x="day", y="total_bill", hue="smoker",
data=tips, jitter=True,
palette="Set2", split=True,linewidth=1,edgecolor='gray')
sns.boxplot(x="day", y="total_bill", hue="smoker",
data=tips,palette="Set2",fliersize=0)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.);
```
[](http://i.stack.imgur.com/IxAVf.png)
Unfortunately, as you can see above, it produced double legend, one for boxplot, one for stripplot. Obviously, it looks ridiculous and redundant. But I cannot seem to find a way to get rid of `stripplot` legend and only leave `boxplot` legend. Probably, I can somehow delete items from `plt.legend`, but I cannot find it in the documentation. | You can [get what handles/labels should exist](http://matplotlib.org/users/legend_guide.html#controlling-the-legend-entries) in the legend before you actually draw the legend itself. You then draw the legend only with the specific ones you want.
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
tips = sns.load_dataset("tips")
sns.stripplot(x="day", y="total_bill", hue="smoker",
data=tips, jitter=True,
palette="Set2", split=True,linewidth=1,edgecolor='gray')
# Get the ax object to use later.
ax = sns.boxplot(x="day", y="total_bill", hue="smoker",
data=tips,palette="Set2",fliersize=0)
# Get the handles and labels. For this example it'll be 2 tuples
# of length 4 each.
handles, labels = ax.get_legend_handles_labels()
# When creating the legend, only use the first two elements
# to effectively remove the last two.
l = plt.legend(handles[0:2], labels[0:2], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
[](http://i.stack.imgur.com/04ISd.png) |
Is this an acceptable way to flatten a list of dicts? | 35,539,596 | 4 | 2016-02-21T17:44:58Z | 35,539,645 | 8 | 2016-02-21T17:49:05Z | [
"python",
"list",
"dictionary"
] | I'm looking at a proper way to flatten something like this
```
a = [{'name': 'Katie'}, {'name': 'Katie'}, {'name': 'jerry'}]
```
having
```
d = {}
```
Using a double map like this:
```
map(lambda x: d.update({x:d[x]+1}) if x in d else d.update({x:1}),map(lambda x: x["name"] ,a))
```
I get the result i want:
```
>>> d
{'jerry': 1, 'Katie': 2}
```
But I feel it could be done better... not with list comprehensions though; I feel that's what we have map and reduce for. | I don't really like your solution because it is hard to read and has side effects.
For the sample data you provided, using a [`Counter`](https://docs.python.org/3/library/collections.html#collections.Counter) (which is a subclass of the built-in dictionary) is a better approach.
```
>>> from collections import Counter
>>> Counter(d['name'] for d in a)
Counter({'Katie': 2, 'jerry': 1})
``` |
How to print this pattern? I cannot get the logic for eliminating the middle part | 35,547,290 | 10 | 2016-02-22T06:43:32Z | 35,547,350 | 10 | 2016-02-22T06:47:21Z | [
"python",
"python-2.7",
"python-3.x"
] | Write a program that asks the user for an input `n` (assume that the user enters a positive integer) and prints only the boundaries of the triangle using asterisks `'*'` of height `n`.
For example if the user enters 6 then the height of the triangle should be 6 as shown below and there should be no spaces between the asterisks on the top
line:
```
******
* *
* *
* *
**
*
```
I cannot understand how to print the part between the top and the end of the pattern. This is my code:
```
n = int(input("Enter a positive integer value: "))
for i in range (n, 0, -1):
print ("*" * i)
```
The `for` loop is for printing the reverse asterisks triangle. The obstacle is to print the middle part. | Try the following; it avoids using an `if` statement within the `for` loop:
```
n = int(input("Enter a positive integer value: "))
print('*' * n)
for i in range (n-3, -1, -1):
print ("*{}*".format(' ' * i))
print('*')
```
For 6, you will get the following output:
```
******
* *
* *
* *
**
*
```
You could also handle the special case of `1` as follows:
```
n = int(input("Enter a positive integer value: "))
if n == 1:
    print('*')
else:
print('*' * n)
for i in range (n-3, -1, -1):
print ("*{}*".format(' ' * i))
print('*')
``` |
How to print this pattern? I cannot get the logic for eliminating the middle part | 35,547,290 | 10 | 2016-02-22T06:43:32Z | 35,547,354 | 14 | 2016-02-22T06:47:38Z | [
"python",
"python-2.7",
"python-3.x"
] | Write a program that asks the user for an input `n` (assume that the user enters a positive integer) and prints only the boundaries of the triangle using asterisks `'*'` of height `n`.
For example if the user enters 6 then the height of the triangle should be 6 as shown below and there should be no spaces between the asterisks on the top
line:
```
******
* *
* *
* *
**
*
```
I cannot understand how to print the part between the top and the end of the pattern. This is my code:
```
n = int(input("Enter a positive integer value: "))
for i in range (n, 0, -1):
print ("*" * i)
```
The `for` loop is for printing the reverse asterisks triangle. The obstacle is to print the middle part. | In every iteration of the `for` loop you print one line of the pattern, and its length is `i`. So, in the first and the last line of the pattern you will have `"*" * i`.
In every other line of the pattern you have to print one `*` at the start of the line, one `*` at the end, and `(i - 2)` spaces in the middle, because 2 stars were already printed out, which results in `"*" + (" " * (i - 2)) + "*"`. After combining those two cases, we get the following code:
```
n = int(input("Enter a positive integer value: "))
for i in range(n, 0, -1):
if i == 1 or i == n:
print("*" * i)
else:
print("*" + (" " * (i - 2)) + "*")
``` |
python 3.4 list comprehension - calling a temp variable within list | 35,548,737 | 4 | 2016-02-22T08:15:48Z | 35,549,105 | 7 | 2016-02-22T08:39:40Z | [
"python",
"list"
] | I have a list of dictionaries and I would like to extract certain data based on certain conditions. I would like to extract only the currency (as int/float) if the currency is showing USD and more than 0.
```
curr = [{'currency': '6000.0000,EUR', 'name': 'Bob'},
{'currency': '0.0000,USD', 'name': 'Sara'},
{'currency': '2500.0000,USD', 'name': 'Kenny'},
{'currency': '0.0000,CND', 'name': 'Debbie'},
{'currency': '2800.0000,USD', 'name': 'Michael'},
{'currency': '1800.0000,CND', 'name': 'Aaron'},
{'currency': '2500.0000,EUR', 'name': 'Peter'}]
```
Results:
```
usd_curr = [2500.0000, 2800.0000]
```
This is what I have done.
```
usd_curr = [line for line in curr if ',USD' in line['currency']]
usd_curr = [float(elem['currency'].split(',')[0]) for elem in curr if float(elem['currency'].split(',')[0]) > 0]
```
The list works but my question is really this - is there a better way to use a variable inside the list comprehension so it will look something like this:
```
usd_curr = [var = float(elem['currency'].split(',')[0]) for elem in curr if var > 0]
``` | There's no nice syntax to do that using comprehensions. You could use an inner generator to generate the values to cut the repetition, but it'll get unreadable real quick the more complex it gets.
```
usd_curr = [
float(val)
for val, val_type in (elem['currency'].split(',') for elem in curr)
if val_type == 'USD' and float(val) > 0
]
```
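(As an aside: on Python 3.8+, assignment expressions give exactly the "variable inside the comprehension" the question asks about — a sketch, not applicable to the question's Python 2.7 tag, with `curr` trimmed to a small sample:)

```python
# Python 3.8+ only: the walrus operator binds a name mid-comprehension.
curr = [{'currency': '6000.0000,EUR'},
        {'currency': '0.0000,USD'},
        {'currency': '2500.0000,USD'}]

usd_curr = [v for elem in curr
            if (parts := elem['currency'].split(','))[1] == 'USD'
            and (v := float(parts[0])) > 0]
print(usd_curr)  # [2500.0]
```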
I'd suggest using a named generator instead.
```
def get_currency_by_type(curr, curr_type):
for elem in curr:
val, val_type = elem['currency'].split(',')
if val_type == curr_type and float(val) > 0:
yield float(val)
usd_curr = list(get_currency_by_type(curr, 'USD'))
``` |
check if two lists are equal by type Python | 35,554,208 | 11 | 2016-02-22T12:51:24Z | 35,554,280 | 18 | 2016-02-22T12:54:29Z | [
"python",
"python-2.7",
"types"
] | I want to check if two lists have the same type of items for every index. For example if I have
```
y = [3, "a"]
x = [5, "b"]
z = ["b", 5]
```
the check should be `True` for `x` and `y`.
The check should be `False` for `y` and `z` because the types of the elements at the same positions are not equal. | Just [`map`](https://docs.python.org/2.7/library/functions.html#map) the elements to their respective [`type`](https://docs.python.org/2.7/library/functions.html#type) and compare those:
```
>>> x = [5, "b"]
>>> y = [3, "a"]
>>> z = ["b", 5]
>>> map(type, x) == map(type, y)
True
>>> map(type, x) == map(type, z)
False
```
For Python 3, you will also have to turn the `map` iterators into proper lists, either by using the [`list`](https://docs.python.org/3/library/functions.html#func-list) function or with a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions):
```
>>> list(map(type, x)) == list(map(type, y))
True
>>> [type(i) for i in x] == [type(i) for i in z]
False
```
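One subtlety worth noting: comparing with `type` is strict, so instances of a subclass count as a different type (e.g. `bool` vs `int`) — a quick sketch:

```python
x = [True, "a"]
y = [1, "b"]
# type(True) is bool, type(1) is int, so the type lists differ.
print(list(map(type, x)) == list(map(type, y)))  # False
```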
---
I did some timing analysis, comparing the above solution to that of [@timgeb](http://stackoverflow.com/a/35554318/1639625), using `all` and `izip` and inputs with the first non-matching type in different positions. As expected, the time taken for the `map` solution is almost exactly the same for each input, while the `all` + `izip` solution can be *very* fast or take three times as long, depending on the position of the first difference.
```
In [52]: x = [1] * 1000 + ["s"] * 1000
In [53]: y = [2] * 1000 + ["t"] * 1000 # same types as x
In [54]: z = ["u"] * 1000 + [3] * 1000 # difference at first element
In [55]: u = [4] * 2000 # difference after first half
In [56]: %timeit map(type, x) == map(type, y)
10000 loops, best of 3: 129 µs per loop
In [58]: %timeit all(type(i) == type(j) for i, j in izip(x, y))
1000 loops, best of 3: 342 µs per loop
In [59]: %timeit all(type(i) == type(j) for i, j in izip(x, z))
1000000 loops, best of 3: 748 ns per loop
In [60]: %timeit all(type(i) == type(j) for i, j in izip(x, u))
10000 loops, best of 3: 174 µs per loop
``` |
check if two lists are equal by type Python | 35,554,208 | 11 | 2016-02-22T12:51:24Z | 35,554,318 | 18 | 2016-02-22T12:56:10Z | [
"python",
"python-2.7",
"types"
] | I want to check if two lists have the same type of items for every index. For example if I have
```
y = [3, "a"]
x = [5, "b"]
z = ["b", 5]
```
the check should be `True` for `x` and `y`.
The check should be `False` for `y` and `z` because the types of the elements at the same positions are not equal. | Lazy evaluation with `all`:
```
>>> from itertools import izip
>>> all(type(a) == type(b) for a,b in izip(x,y))
True
```
Use the regular `zip` in Python 3; it already returns a lazy iterator.
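A Python 3 version of the same check (a sketch):

```python
x = [5, "b"]
y = [3, "a"]
z = ["b", 5]

print(all(type(a) == type(b) for a, b in zip(x, y)))  # True
print(all(type(a) == type(b) for a, b in zip(x, z)))  # False
```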
If the lists can have different lengths, just check the length upfront. Checking the length is a very fast O(1) operation:
```
>>> len(x) == len(y) and all(type(a) == type(b) for a,b in izip(x,y))
True
>>> x = [5,"b",'foo']
>>> len(x) == len(y) and all(type(a) == type(b) for a,b in izip(x,y))
False
```
The `and` will be short-circuit evaluated, which means `all` won't even be called if the lengths differ. |
My function returns a list with a single integer in it, how can I make it return only the integer? | 35,556,910 | 6 | 2016-02-22T15:02:48Z | 35,556,985 | 13 | 2016-02-22T15:05:46Z | [
"python",
"list",
"list-comprehension",
"indexof"
] | How do I remove the brackets from the result while keeping the function a single line of code?
```
day_list = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
def day_to_number(inp):
return [day for day in range(len(day_list)) if day_list[day] == inp]
print day_to_number("Sunday")
print day_to_number("Monday")
print day_to_number("Tuesday")
print day_to_number("Wednesday")
print day_to_number("Thursday")
print day_to_number("Friday")
print day_to_number("Saturday")
```
Output:
```
[0]
[1]
[2]
[3]
[4]
[5]
[6]
``` | The list comprehension is overkill. If your list does not contain duplicates (as your sample data shows), just do:
```
>>> def day_to_number(inp):
... return day_list.index(inp)
...
>>> day_to_number("Sunday")
0
```
I would also advise making the `day_list` an argument of the function, i.e.:
```
>>> def day_to_number(inp, days):
... return days.index(inp)
...
>>> day_to_number("Sunday", day_list)
0
```
Looking it up in the global name space is a bit ugly.
And to make the whole thing more efficient (`list.index` is O(n)) use a dictionary:
```
>>> days = dict(zip(day_list, range(len(day_list))))
>>> days
{'Monday': 1, 'Tuesday': 2, 'Friday': 5, 'Wednesday': 3, 'Thursday': 4, 'Sunday': 0, 'Saturday': 6}
>>>
>>> def day_to_number(inp, days):
... return days[inp]
...
>>> day_to_number("Sunday", days)
0
``` |
How to import Azure BlobService in python? | 35,558,463 | 3 | 2016-02-22T16:13:20Z | 35,592,905 | 8 | 2016-02-24T03:51:01Z | [
"python",
"azure",
"azure-storage-blobs"
] | We are able to import azure.storage, but not access the BlobService attribute
The documentation says to use the following import statement:
```
from azure.storage import BlobService
```
But that gets the following error:
```
ImportError: cannot import name BlobService
```
We tried the following:
```
import azure.storage
...
foo = azure.storage.BlobService(...)
```
But that received the following error:
```
AttributeError: 'module' object has no attribute 'BlobService'
```
We also tried all of the above with "azure.storage.blob" instead of "azure.storage"
We tried updating azure-storage package but it is up to date (version 0.30.0)
We also tried uninstalling azure-storage and installing the entire azure package, but we got the same results. We tried installing them with both pip and conda, but same results both times.
I am aware that the output suggests that this version of azure.storage has no BlobService attribute, but the documentation clearly states to import from there.
<https://azure.microsoft.com/en-us/documentation/articles/machine-learning-data-science-create-features-blob/> | Yes; if you want to use `BlobService`, you could install the package `azure.storage 0.20.0`, which still contains `BlobService`. In the latest `azure.storage 0.30.0`, `BlobService` is split into `BlockBlobService`, `AppendBlobService` and `PageBlobService` objects, so you can use `BlockBlobService` to replace `BlobService`. Many articles still need to update their content to reflect this. |
What is the pythonic way to this dict to list conversion? | 35,559,978 | 4 | 2016-02-22T17:25:48Z | 35,560,049 | 8 | 2016-02-22T17:29:05Z | [
"python",
"python-2.7"
] | For example, convert
```
d = {'a.b1': [1,2,3], 'a.b2': [3,2,1], 'b.a1': [2,2,2]}
```
to
```
l = [['a','b1',1,2,3], ['a','b2',3,2,1], ['b','a1',2,2,2]]
```
What I do now
```
l = []
for k,v in d.iteritems():
a = k.split('.')
a.extend(v)
l.append(a)
```
is definitely not a pythonic way. | Python 2:
```
d = {'a.b1': [1,2,3], 'a.b2': [3,2,1], 'b.a1': [2,2,2]}
l = [k.split('.') + v for k, v in d.iteritems()]
```
Python 3:
```
d = {'a.b1': [1,2,3], 'a.b2': [3,2,1], 'b.a1': [2,2,2]}
l = [k.split('.') + v for k, v in d.items()]
```
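Note that a plain dict does not guarantee any particular order (before Python 3.7), so if you need a deterministic result you can sort — a quick runnable check:

```python
d = {'a.b1': [1, 2, 3], 'a.b2': [3, 2, 1], 'b.a1': [2, 2, 2]}
# Sorting makes the output order independent of dict iteration order.
l = sorted(k.split('.') + v for k, v in d.items())
print(l)  # [['a', 'b1', 1, 2, 3], ['a', 'b2', 3, 2, 1], ['b', 'a1', 2, 2, 2]]
```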
These are called [list comprehensions](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). |
How can I plot a confusion matrix? | 35,572,000 | 2 | 2016-02-23T08:06:07Z | 35,572,247 | 8 | 2016-02-23T08:19:12Z | [
"python",
"matrix",
"scikit-learn"
] | I am using scikit-learn for classification of text documents (22,000) into 100 classes. I use scikit-learn's confusion matrix method for computing the confusion matrix.
```
model1 = LogisticRegression()
model1 = model1.fit(matrix, labels)
pred = model1.predict(test_matrix)
cm=metrics.confusion_matrix(test_labels,pred)
print(cm)
plt.imshow(cm, cmap='binary')
```
This is what my confusion matrix looks like:
```
[[3962 325 0 ..., 0 0 0]
[ 250 2765 0 ..., 0 0 0]
[ 2 8 17 ..., 0 0 0]
...,
[ 1 6 0 ..., 5 0 0]
[ 1 1 0 ..., 0 0 0]
[ 9 0 0 ..., 0 0 9]]
```
However, I do not receive a clear or legible plot. Is there a better way to do this? | [](http://i.stack.imgur.com/bYbgo.png)
you can use `plt.matshow()` instead of `plt.imshow()` or you can use seaborn module's `heatmap` to plot the confusion matrix
```
import seaborn as sn
import pandas as pd
import matplotlib.pyplot as plt
array = [[33,2,0,0,0,0,0,0,0,1,3],
[3,31,0,0,0,0,0,0,0,0,0],
[0,4,41,0,0,0,0,0,0,0,1],
[0,1,0,30,0,6,0,0,0,0,1],
[0,0,0,0,38,10,0,0,0,0,0],
[0,0,0,3,1,39,0,0,0,0,4],
[0,2,2,0,4,1,31,0,0,0,2],
[0,1,0,0,0,0,0,36,0,2,0],
[0,0,0,0,0,0,1,5,37,5,1],
[3,0,0,0,0,0,0,0,0,39,0],
[0,0,0,0,0,0,0,0,0,0,38]]
df_cm = pd.DataFrame(array, index = [i for i in "ABCDEFGHIJK"],
                     columns = [i for i in "ABCDEFGHIJK"])
plt.figure(figsize = (10,7))
sn.heatmap(df_cm, annot=True)
``` |
Using python decorator with or without parentheses | 35,572,663 | 20 | 2016-02-23T08:43:00Z | 35,572,746 | 19 | 2016-02-23T08:46:40Z | [
"python",
"decorator"
] | What is the difference in `Python` when using the same decorator *with or without parentheses*? For example:
Without parentheses
```
@someDecorator
def someMethod():
pass
```
With parentheses
```
@someDecorator()
def someMethod():
pass
``` | `someDecorator` in the first code snippet is a regular decorator:
```
@someDecorator
def someMethod():
pass
```
is equivalent to
```
someMethod = someDecorator(someMethod)
```
On the other hand, `someDecorator` in the second code snippet is a callable that returns a decorator:
```
@someDecorator()
def someMethod():
pass
```
is equivalent to
```
someMethod = someDecorator()(someMethod)
```
As pointed out by Duncan in comments, some decorators are designed to work both ways. Here's a pretty basic implementation of such a decorator:
```
def someDecorator(arg=None):
def decorator(func):
def wrapper(*a, **ka):
return func(*a, **ka)
return wrapper
if callable(arg):
return decorator(arg) # return 'wrapper'
else:
return decorator # ... or 'decorator'
```
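A quick check that both forms work with that implementation (a self-contained sketch repeating the decorator so it runs standalone):

```python
def someDecorator(arg=None):
    def decorator(func):
        def wrapper(*a, **ka):
            return func(*a, **ka)
        return wrapper
    if callable(arg):
        return decorator(arg)  # bare @someDecorator
    return decorator           # @someDecorator()

@someDecorator
def f():
    return 1

@someDecorator()
def g():
    return 2

print(f(), g())  # 1 2
```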
[`pytest.fixture`](https://github.com/pytest-dev/pytest/blob/2e04771893f17b571e6ab48f6884a6a70cde8fc8/_pytest/python.py#L132) is a more complex example. |
Python - If not statement with 0.0 | 35,572,698 | 19 | 2016-02-23T08:44:25Z | 35,572,950 | 18 | 2016-02-23T08:56:29Z | [
"python",
"python-2.7",
"if-statement",
"control-structure"
] | I have a question regarding `if not` statement in `Python 2.7`.
I have written some code and used `if not` statements. In one part of the code I wrote, I refer to a function which includes an `if not` statement to determine whether an optional keyword has been entered.
It works fine, except when `0.0` is the keyword's value. I understand this is because `0` is one of the things that is considered 'not'. My code is probably too long to post, but this is an analogous (albeit simplified) example:
```
def square(x=None):
if not x:
print "you have not entered x"
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print output
```
However, in this case I got left with:
```
you have not entered x
[1, 9, None, 81]
```
Whereas I would like to get:
```
[1, 9, 0, 81]
```
In the above example I could use a list comprehension, but assuming I wanted to use the function and get the desired output how could I do this?
One thought I had was:
```
def square(x=None):
if not x and not str(x).isdigit():
print "you have not entered x"
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print output
```
This works, but seems like a bit of a clunky way of doing it. If anyone has another way that would be nice I would be very appreciative. | ### Problem
You understand it right: `not 0` (and also `not 0.0`) returns `True` in `Python`. A simple test can be done to see this:
```
a = not 0
print(a)
# Result: True
```
Thus, the problem is explained. This line:
```
if not x:
```
Must be changed to something else.
---
### Solutions
There are a couple of ways to fix the issue. I am just going to *list* them from what I think is the best solution down to the last possible solutions:
1. **To handle all possible valid cases**.
Since `square` should naturally *expect a number input with the exclusion of complex numbers* and should `return` an error otherwise, I think the best solution is to evaluate using `if not isinstance(x, numbers.Number) or isinstance(x, complex):` (note the built-in `complex` type is used here rather than the `numbers.Complex` ABC, because `int` and `float` are also registered under `numbers.Complex`)
```
import numbers
def square(x=None):
    if not isinstance(x, numbers.Number) or isinstance(x, complex): # this sums up every number type, with the exclusion of complex numbers
print ("you have not entered x")
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print (output)
```
[numbers.Number](https://docs.python.org/2/library/numbers.html) is *the* abstract class to check if argument `x` is a number (credit to [Copperfield](http://stackoverflow.com/users/5644961/copperfield) for pointing this out).
Excerpt from [Python Standard Library Documentation](https://docs.python.org/2/library/numbers.html) explains *just* what you need - with the exception of complex number:
> *class* **numbers.Number**
>
> The **root of the numeric hierarchy**. If you just want to check if an
> argument x is a **number, without caring what kind**, use **isinstance(x,
> Number)**.
But, you don't want the input to be a complex number, so exclude it with `or isinstance(x, complex)`.
> This way, you write the `definition` of `square` *exactly* the way you
> want it. This solution, I think, is the **best** solution by the virtue of
> its ***comprehensiveness***.
2. **To handle just the data types you want to handle**.
If you have a list of valid input data types, you could also put up *just those specific data types* you want to handle. That is, you don't want to handle the cases for data types other than what you have specified. Examples:
```
if not isinstance(x, int): #just handle int
if not isinstance(x, (int, float)): #just handle int and float
if not isinstance(x, (numbers.Integral, numbers.Rational)): #just handle integral and rational, not real or complex
```
> You may change/extend the condition above *easily* for different data
> types that you want to include or to exclude - according to your
> need. This solution, I think, is the **second best** by the virtue of its
> ***customization for its validity checking***.
(Code above is done in more *Pythonical* way, as suggested by [cat](http://stackoverflow.com/users/4532996/cat))
3. **Not handling impossible cases: you know what the users would *not* put up as input**.
Thinking about it more loosely: if you know - not the data types you want to handle like in the second solution - but the data types which the user would *not* put in, then you can have a looser condition check like this:
```
if not isinstance(x, numbers.Number): # this is ok, because the user would not put up complex number
```
> This solution, I think, is the **third best** by the virtue of being ***one
> of the simplest yet most powerful checks***.
The only downside of this solution is that you don't handle the complex type. Therefore, it can only be implemented if you know that the users will not pass a complex number as input.
4. **To handle input errors only for the known possible inputs which can cause the errors**.
For example, if you know that x is always `int` or `None` - and thus the only possible input error is `None` - then we can simply write the logic to avoid `y` being evaluated *only* when `x` is `None` like this:
```
def square(x=None):
if x is None:
print ("you have not entered x")
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print (output)
```
> This solution has the virtue of being ***the simplest***.
...and yet the most **dangerous** to use *if* you do not know *exactly* what the users would put up for the input. Otherwise, this solution is fine and is also the simplest.
Your solution, I think more or less belongs to this category. You know what input the user will give and what the user will not. Thus, using this solution or your own solution:
```
if not x and not str(x).isdigit():
```
is fine, except that the example solution is simpler.
---
Given your case, you can use any *solution* above to get:
```
[1, 9, 0, 81]
```
*(Side Note: I try to format the solutions to look like "canonical solutions" for ease of reading purpose. This way, those who have the same questions and who visit this page in the future may be able to find the solutions more comprehensive and readable)* |
Python - If not statement with 0.0 | 35,572,698 | 19 | 2016-02-23T08:44:25Z | 35,573,076 | 8 | 2016-02-23T09:02:56Z | [
"python",
"python-2.7",
"if-statement",
"control-structure"
] | I have a question regarding `if not` statement in `Python 2.7`.
I have written some code and used `if not` statements. In one part of the code I wrote, I refer to a function which includes an `if not` statement to determine whether an optional keyword has been entered.
It works fine, except when `0.0` is the keyword's value. I understand this is because `0` is one of the things that is considered 'not'. My code is probably too long to post, but this is an analogous (albeit simplified) example:
```
def square(x=None):
if not x:
print "you have not entered x"
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print output
```
However, in this case I got left with:
```
you have not entered x
[1, 9, None, 81]
```
Where as I would like to get:
```
[1, 9, 0, 81]
```
In the above example I could use a list comprehension, but assuming I wanted to use the function and get the desired output how could I do this?
One thought I had was:
```
def square(x=None):
if not x and not str(x).isdigit():
print "you have not entered x"
else:
y=x**2
return y
list=[1, 3, 0 ,9]
output=[]
for item in list:
y=square(item)
output.append(y)
print output
```
This works, but seems like a bit of a clunky way of doing it. If anyone has another way that would be nice I would be very appreciative. | Since you are using `None` to signal "this parameter is not set", then that is exactly what you should check for using the `is` keyword:
```
def square(x=None):
if x is None:
print "you have not entered x"
else:
y=x**2
return y
```
Checking for type is cumbersome and error prone since you would have to check for *all possible input types*, not just `int`. |
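To see that the `is None` check preserves `0`, here is a small runnable sketch (mine, not from the original answer; it returns `None` instead of printing, purely for illustration):

```python
def square(x=None):
    # None signals "no argument was given"; 0 is a perfectly valid input
    if x is None:
        return None
    return x ** 2

output = [square(i) for i in [1, 3, 0, 9]]
print(output)  # [1, 9, 0, 81] -- the 0 survives
```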
performance loss after vectorization in numpy | 35,578,145 | 6 | 2016-02-23T12:53:34Z | 35,585,617 | 8 | 2016-02-23T18:43:10Z | [
"python",
"performance",
"numpy",
"vectorization",
"linear-algebra"
] | I am writing a time consuming program. To reduce the time, I have tried my best to use `numpy.dot` instead of `for` loops.
However, I found the vectorized program to have much worse performance than the `for` loop version:
```
import numpy as np
import datetime
kpt_list = np.zeros((10000,20),dtype='float')
rpt_list = np.zeros((1000,20),dtype='float')
h_r = np.zeros((20,20,1000),dtype='complex')
r_ndegen = np.zeros(1000,dtype='float')
r_ndegen.fill(1)
# setup completed
# this is a the vectorized version
r_ndegen_tile = np.tile(r_ndegen.reshape(1000, 1), 10000)
start = datetime.datetime.now()
phase = np.exp(1j * np.dot(rpt_list, kpt_list.T))/r_ndegen_tile
kpt_data_1 = h_r.dot(phase)
end = datetime.datetime.now()
print((end-start).total_seconds())
# the result is 19.302483
# this is the for loop version
kpt_data_2 = np.zeros((20, 20, 10000), dtype='complex')
start = datetime.datetime.now()
for i in range(10000):
kpt = kpt_list[i, :]
phase = np.exp(1j * np.dot(kpt, rpt_list.T))/r_ndegen
kpt_data_2[:, :, i] = h_r.dot(phase)
end = datetime.datetime.now()
print((end-start).total_seconds())
# the result is 7.74583
```
What is happening here? | The first thing I suggest you do is break your script down into separate functions to make profiling and debugging easier:
```
def setup(n1=10000, n2=1000, n3=20, seed=None):
gen = np.random.RandomState(seed)
kpt_list = gen.randn(n1, n3).astype(np.float)
rpt_list = gen.randn(n2, n3).astype(np.float)
h_r = (gen.randn(n3, n3,n2) + 1j*gen.randn(n3, n3,n2)).astype(np.complex)
r_ndegen = gen.randn(1000).astype(np.float)
return kpt_list, rpt_list, h_r, r_ndegen
def original_vec(*args, **kwargs):
kpt_list, rpt_list, h_r, r_ndegen = setup(*args, **kwargs)
r_ndegen_tile = np.tile(r_ndegen.reshape(1000, 1), 10000)
phase = np.exp(1j * np.dot(rpt_list, kpt_list.T)) / r_ndegen_tile
kpt_data = h_r.dot(phase)
return kpt_data
def original_loop(*args, **kwargs):
kpt_list, rpt_list, h_r, r_ndegen = setup(*args, **kwargs)
kpt_data = np.zeros((20, 20, 10000), dtype='complex')
for i in range(10000):
kpt = kpt_list[i, :]
phase = np.exp(1j * np.dot(kpt, rpt_list.T)) / r_ndegen
kpt_data[:, :, i] = h_r.dot(phase)
return kpt_data
```
I would also highly recommend using random data rather than all-zero or all-one arrays, unless that's what your actual data looks like (!). This makes it much easier to check the correctness of your code - for example, if your last step is to multiply by a matrix of zeros then your output will always be all-zeros, regardless of whether or not there is a mistake earlier on in your code.
---
Next, I would run these functions through [`line_profiler`](https://github.com/rkern/line_profiler) to see where they are spending most of their time. In particular, for `original_vec`:
```
In [1]: %lprun -f original_vec original_vec()
Timer unit: 1e-06 s
Total time: 23.7598 s
File: <ipython-input-24-c57463f84aad>
Function: original_vec at line 12
Line # Hits Time Per Hit % Time Line Contents
==============================================================
12 def original_vec(*args, **kwargs):
13
14 1 86498 86498.0 0.4 kpt_list, rpt_list, h_r, r_ndegen = setup(*args, **kwargs)
15
16 1 69700 69700.0 0.3 r_ndegen_tile = np.tile(r_ndegen.reshape(1000, 1), 10000)
17 1 1331947 1331947.0 5.6 phase = np.exp(1j * np.dot(rpt_list, kpt_list.T)) / r_ndegen_tile
18 1 22271637 22271637.0 93.7 kpt_data = h_r.dot(phase)
19
20 1 4 4.0 0.0 return kpt_data
```
You can see that it spends 93% of its time computing the dot product between `h_r` and `phase`. Here, `h_r` is a `(20, 20, 1000)` array and `phase` is `(1000, 10000)`. We're computing a sum product over the last dimension of `h_r` and the first dimension of `phase` (you could write this in `einsum` notation as `ijk,kl->ijl`).
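The `ijk,kl->ijl` claim can be checked on a small example (a sketch of mine with toy shapes, not the question's full sizes):

```python
import numpy as np

rng = np.random.RandomState(0)
h_r = rng.randn(3, 3, 5) + 1j * rng.randn(3, 3, 5)   # axes (i, j, k)
phase = rng.randn(5, 7) + 1j * rng.randn(5, 7)       # axes (k, l)

a = h_r.dot(phase)                                   # sum-product over k
b = np.einsum('ijk,kl->ijl', h_r, phase)
print(np.allclose(a, b))  # True
```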
---
The first two dimensions of `h_r` don't really matter here - we could just as easily reshape `h_r` into a `(20*20, 1000)` array before taking the dot product. It turns out that this reshaping operation by itself gives a huge performance improvement:
```
In [2]: %timeit h_r.dot(phase)
1 loop, best of 3: 22.6 s per loop
In [3]: %timeit h_r.reshape(-1, 1000).dot(phase)
1 loop, best of 3: 1.04 s per loop
```
I'm not entirely sure why this should be the case - I would have hoped that numpy's `dot` function would be smart enough to apply this simple optimization automatically. On my laptop the second case seems to use multiple threads whereas the first one doesn't, suggesting that it might not be calling multithreaded BLAS routines.
---
Here's a vectorized version that incorporates the reshaping operation:
```
def new_vec(*args, **kwargs):
kpt_list, rpt_list, h_r, r_ndegen = setup(*args, **kwargs)
phase = np.exp(1j * np.dot(rpt_list, kpt_list.T)) / r_ndegen[:, None]
kpt_data = h_r.reshape(-1, phase.shape[0]).dot(phase)
return kpt_data.reshape(h_r.shape[:2] + (-1,))
```
The `-1` indices tell numpy to infer the size of those dimensions according to the other dimensions and the number of elements in the array. I've also used [broadcasting](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html) to divide by `r_ndegen`, which eliminates the need for `np.tile`.
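As a quick illustration of that broadcasting step (my sketch, with made-up small shapes), dividing by `r[:, None]` gives the same result as the `np.tile` approach:

```python
import numpy as np

r = np.arange(1.0, 5.0)                  # shape (4,)
m = np.ones((4, 3))

tiled = m / np.tile(r.reshape(4, 1), 3)  # explicit copy of r along axis 1
broad = m / r[:, None]                   # shape (4, 1) broadcasts to (4, 3)
print(np.allclose(tiled, broad))  # True
```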
By using the same random input data, we can check that the new version gives the same result as the original:
```
In [4]: ans1 = original_loop(seed=0)
In [5]: ans2 = new_vec(seed=0)
In [6]: np.allclose(ans1, ans2)
Out[6]: True
```
Some performance benchmarks:
```
In [7]: %timeit original_loop()
1 loop, best of 3: 13.5 s per loop
In [8]: %timeit original_vec()
1 loop, best of 3: 24.1 s per loop
In [5]: %timeit new_vec()
1 loop, best of 3: 2.49 s per loop
```
---
### Update:
I was curious about why `np.dot` was so much slower for the original `(20, 20, 1000)` `h_r` array, so I dug into the numpy source code. The logic implemented in [`multiarraymodule.c`](https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L934-L940) turns out to be shockingly simple:
```
#if defined(HAVE_CBLAS)
if (PyArray_NDIM(ap1) <= 2 && PyArray_NDIM(ap2) <= 2 &&
(NPY_DOUBLE == typenum || NPY_CDOUBLE == typenum ||
NPY_FLOAT == typenum || NPY_CFLOAT == typenum)) {
return cblas_matrixproduct(typenum, ap1, ap2, out);
}
#endif
```
In other words numpy just checks whether either of the input arrays has > 2 dimensions, and immediately falls back on a non-BLAS implementation of matrix-matrix multiplication. It seems like it shouldn't be too difficult to check whether the inner dimensions of the two arrays are compatible, and if so treat them as 2D and perform `*gemm` matrix-matrix multiplication on them. In fact [there's an open feature request for this dating back to 2012](https://github.com/numpy/numpy/issues/619), if any numpy devs are reading...
In the meantime, it's a nice performance trick to be aware of when multiplying tensors.
---
### Update 2:
I forgot about [`np.tensordot`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.tensordot.html). Since it calls the same underlying BLAS routines as `np.dot` on a 2D array, it can achieve the same performance bump, but without all those ugly `reshape` operations:
```
In [6]: %timeit np.tensordot(h_r, phase, axes=1)
1 loop, best of 3: 1.05 s per loop
``` |
Why does bit-wise shift left return different results in Python and Java? | 35,578,435 | 12 | 2016-02-23T13:07:33Z | 35,578,505 | 21 | 2016-02-23T13:10:45Z | [
"java",
"python",
"bitwise-operators",
"bit-shift"
] | I'm trying to port some functionality from a Java app to Python.
In Java,
```
System.out.println(155 << 24);
```
Returns: -1694498816
In Python:
```
print(155 << 24)
```
Returns 2600468480
Many other bitwise operations have worked in the same way in both languages. Why is there a different result in these two operations?
---
EDIT: I'm trying to create a function in python to replicate how the left shift operator works in Java. Something along the lines of:
```
def lshift(val, n):
return (int(val) << n) - 0x100000000
```
However, this doesn't seem right as (I think) it turns all numbers negative?
---
EDIT2: Several hours later, I've decided it is probably not the best idea to use Python for this job and will take part of the Java application and use it as a micro service for the existing Python app. | Java has 32-bit fixed width integers, so `155 << 24` shifts the uppermost set bit of `155` (which is bit 7, counting bits from zero, because 155 is greater than 27 but less than 28) into the sign bit (bit 31) and you end up with a negative number.
Python has arbitrary-precision integers, so `155 << 24` is numerically equal to the positive number 155 × 2^24 = 2600468480. |
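A possible way to reproduce Java's result in Python (a sketch building on the question's attempt; the helper name `java_lshift` is mine) is to mask to 32 bits and then reinterpret the sign bit as two's complement:

```python
def java_lshift(val, n):
    # Keep only the low 32 bits, like Java's int
    result = (val << n) & 0xFFFFFFFF
    # If bit 31 is set, the Java value is negative (two's complement)
    if result & 0x80000000:
        result -= 0x100000000
    return result

print(java_lshift(155, 24))  # -1694498816, matching Java
print(java_lshift(1, 2))     # 4 -- small positive values are unchanged
```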
How do I convert user input into a list? | 35,582,959 | 8 | 2016-02-23T16:32:18Z | 35,582,986 | 9 | 2016-02-23T16:33:46Z | [
"python",
"list",
"python-3.x"
] | I'm wondering how to take user input and make a list of every character in it.
```
magicInput = input('Type here: ')
```
And say you entered "python rocks"
I want a to make it a list something like this
```
magicList = [p,y,t,h,o,n, ,r,o,c,k,s]
```
But if I do this:
```
magicInput = input('Type here: ')
magicList = [magicInput]
```
The magicList is just
```
['python rocks']
``` | Use the built-in `list()` function:
```
magicInput = input('Type here: ')
magicList = list(magicInput)
print(magicList)
```
**Output**
```
['p', 'y', 't', 'h', 'o', 'n', ' ', 'r', 'o', 'c', 'k', 's']
``` |
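For completeness (a small addition of mine): `''.join()` reverses the transformation, which is handy for rebuilding the string later:

```python
magic_input = "python rocks"
magic_list = list(magic_input)
print(magic_list)
# ['p', 'y', 't', 'h', 'o', 'n', ' ', 'r', 'o', 'c', 'k', 's']

print(''.join(magic_list) == magic_input)  # True
```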
TK Framework double implementation issue | 35,593,602 | 10 | 2016-02-24T04:59:44Z | 37,222,391 | 7 | 2016-05-14T04:03:52Z | [
"python",
"python-2.7",
"tkinter",
"python-imaging-library",
"anaconda"
] | I am testing out creating a GUI using the Tkinter module. I was trying to add an image to the GUI using PIL. My code looks like this:
```
import Tkinter as tk
from PIL import Image, ImageTk
root = tk.Tk()
root.title('background image')
imfile = "foo.png"
im = Image.open(imfile)
im1 = ImageTk.PhotoImage(im)
```
When I run this code, I come up with some errors that lead to a segfault.
```
objc[5431]: Class TKApplication is implemented in both /Users/sykeoh/anaconda/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
objc[5431]: Class TKMenu is implemented in both /Users/sykeoh/anaconda/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
objc[5431]: Class TKContentView is implemented in both /Users/sykeoh/anaconda/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
objc[5431]: Class TKWindow is implemented in both /Users/sykeoh/anaconda/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
Segmentation fault: 11
```
I've looked online and it looks to be an issue with the Tk framework in my Systems library and the other in the anaconda library. However, none of the solutions really seemed to work. Any possible solutions or workarounds?
The issue comes with running `ImageTk.PhotoImage`. If I remove that line of code, there are no issues. | I know I created the bounty, but I got impatient, decided to investigate, and now I've got something that worked for me. I have a very similar python example to yours, which pretty much does nothing other than try to use Tkinter to display an image passed on the command line, like so:
```
calebhattingh $ python imageview.py a.jpg
objc[84696]: Class TKApplication is implemented in both /Users/calebhattingh/anaconda/envs/py35/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
objc[84696]: Class TKMenu is implemented in both /Users/calebhattingh/anaconda/envs/py35/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
objc[84696]: Class TKContentView is implemented in both /Users/calebhattingh/anaconda/envs/py35/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
objc[84696]: Class TKWindow is implemented in both /Users/calebhattingh/anaconda/envs/py35/lib/libtk8.5.dylib and /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk. One of the two will be used. Which one is undefined.
Segmentation fault: 11
```
What's happening is that the *binary* file, `~/anaconda/envs/py35/lib/python3.5/site-packages/PIL/_imagingtk.so` has been linked to a framework, and not the Tcl/Tk libs in the env. You can see this by using `otool` to see the linking setup:
```
(py35) ~/anaconda/envs/py35/lib/python3.5/site-packages/PIL
calebhattingh $ otool -L _imagingtk.so
_imagingtk.so:
/System/Library/Frameworks/Tcl.framework/Versions/8.5/Tcl (compatibility version 8.5.0, current version 8.5.9)
/System/Library/Frameworks/Tk.framework/Versions/8.5/Tk (compatibility version 8.5.0, current version 8.5.9)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.1.0)
/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
```
See those two "framework" lines? With anaconda we don't want that. We want to use the libraries in the env. So let's change them!
First make a backup of your binary (in case you want to revert):
```
$ cp _imagingtk.so _imagingtk.so.bak
```
Now run this on the command-line (assuming you are in the same folder as your `envname/lib`):
```
$ install_name_tool -change "/System/Library/Frameworks/Tk.framework/Versions/8.5/Tk" "@rpath/libtk8.5.dylib" _imagingtk.so
$ install_name_tool -change "/System/Library/Frameworks/Tcl.framework/Versions/8.5/Tcl" "@rpath/libtcl8.5.dylib" _imagingtk.so
```
You see that `@rpath` bit in there? That means *whichever one you find on the path*. Which works great for anaconda. The linking in the `_imagingtk.so` library now looks like this:
```
(py35) ~/anaconda/envs/py35/lib/python3.5/site-packages/PIL
calebhattingh $ otool -L _imagingtk.so
_imagingtk.so:
@rpath/libtcl8.5.dylib (compatibility version 8.5.0, current version 8.5.9)
@rpath/libtk8.5.dylib (compatibility version 8.5.0, current version 8.5.9)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.1.0)
/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
```
After this, your code will run. Someone should probably try to get this upstream.
*Addendum*: The Tkinter binding in the python distribution, i.e., the currently-active conda env, has the following linking:
```
~/anaconda/envs/py35/lib/python3.5/lib-dynload
calebhattingh $ otool -L _tkinter.cpython-35m-darwin.so
_tkinter.cpython-35m-darwin.so:
@loader_path/../../libtcl8.5.dylib (compatibility version 8.5.0, current version 8.5.18)
@loader_path/../../libtk8.5.dylib (compatibility version 8.5.0, current version 8.5.18)
/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.0.0)
```
If you prefer, you can rather use `install_name_tool` to use `@loader_path/../../` instead of what I used above, i.e. `@rpath/`. That will *probably* also work, and might even be better. |
Loop while checking if element in a list in Python | 35,605,836 | 18 | 2016-02-24T15:13:33Z | 35,605,902 | 18 | 2016-02-24T15:16:23Z | [
"python",
"python-2.7",
"loops",
"python-3.x",
"interpreter"
] | Let's say I have a simple piece of code like this:
```
for i in range(1000):
if i in [150, 300, 500, 750]:
print(i)
```
Does the list `[150, 300, 500, 750]` get created every iteration of the loop? Or can I assume that the interpreter (say, CPython 2.7) is smart enough to optimize this away? | You can view the bytecode using [`dis.dis`](https://docs.python.org/2/library/dis.html#dis.dis). Here's the output for CPython 2.7.11:
```
2 0 SETUP_LOOP 40 (to 43)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 26 (to 42)
16 STORE_FAST 0 (i)
3 19 LOAD_FAST 0 (i)
22 LOAD_CONST 6 ((150, 300, 500, 750))
25 COMPARE_OP 6 (in)
28 POP_JUMP_IF_FALSE 13
4 31 LOAD_FAST 0 (i)
34 PRINT_ITEM
35 PRINT_NEWLINE
36 JUMP_ABSOLUTE 13
39 JUMP_ABSOLUTE 13
>> 42 POP_BLOCK
>> 43 LOAD_CONST 0 (None)
46 RETURN_VALUE
```
Hence, the list creation is optimized to the loading of a constant tuple (byte 22). The list (which is in reality a tuple in this case) is not created anew on each iteration. |
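One way to confirm this constant-folding without reading the whole disassembly (a sketch of mine; it relies on CPython's peephole optimizer, so other implementations may differ):

```python
def f():
    for i in range(1000):
        if i in [150, 300, 500, 750]:
            pass

# CPython folds the list literal into a tuple stored with the code object
print((150, 300, 500, 750) in f.__code__.co_consts)  # True
```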
How do I download Pygame for Python 3.5.1? | 35,609,592 | 4 | 2016-02-24T17:59:28Z | 35,665,890 | 8 | 2016-02-27T04:58:56Z | [
"python",
"python-3.x",
"pygame"
] | I am unable to find a pygame download for Python 3.5 and the ones I have downloaded don't seem to work when I import to the shell. Help?
This is the message I receive on the shell:
```
>>> import pygame
Traceback (most recent call last):
  File "", line 1, in <module>
    import pygame
ImportError: No module named 'pygame'
``` | I'm gonna guess you're using Windows. If you are not, then there is no special version of pygame for Python 3+. If you do have Windows then read below.
You will need Python (and its `pip` tool) to be part of your path to do this, so you can use it in the command prompt. Make sure you run the command prompt as an admin when doing this.
First you need to find out what bit version of Python you have. Open your Python shell, and at the top of the window it should say something like "Python V(some number) (bit number)". You want the bit number.
Now you need to open the command prompt. Use "Windows key + R" to open the Run menu, type "cmd", and press Enter. Or you can just search your PC for "cmd", right-click on it, and select "run as admin" to open as an admin.
Python comes with a special command called "pip." I am not gonna get into this module too much, but in short it is used to install additional Python modules. The first thing you need to do is run this command...
```
pip install wheel
```
The screen should print some stuff off while doing this. You can tell if the module installed correctly because it should print something like "wheel installed successfully." We are gonna need this later.
Now you need to get your pygame file. Go [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame) and find the pygame section. If you have 32-bit Python you should download "pygame-1.9.2a0-cp35-none-win32.whl", or if you have 64-bit Python download "pygame-1.9.2a0-cp35-none-win_amd64.whl". I am pretty sure these are the ones you need for your bit version, but I installed pygame on my Windows 10 a few months ago so they may be different now.
Once you have this downloaded go back to the command prompt. Enter this command...
```
pip install (filename)
```
Make sure it includes the .whl extension. If you get an error then specify the path to the folder the file is in (which should be the downloads folder). Once again you should see a message similar to "pygame installed successfully."
Once all this is done open your Python shell and type...
```
import pygame
```
If it works you now have pygame available for use. If not then there are a few more things you can try...
1. Try restarting your PC. Sometimes these things don't take effect until a system restart.
2. Try installing a different version of pygame from the website listed above. It may just be a simple issue due to bit version differences.
3. Make sure you actually installed the pygame module from the file. It may have thrown an error that appeared to be an actual successful installation. It always pays to double-check.
Like I said before I installed pygame on my Windows 10 with Python 3.4 64 bit a few months ago in the same way I told you here so it should work, but may be outdated. Anyways I hope this helps you with your pygame installation issues and the best of luck to you! |
Python beginner - list of dictionary variable names | 35,615,229 | 2 | 2016-02-24T23:20:42Z | 35,615,278 | 7 | 2016-02-24T23:23:50Z | [
"python",
"list",
"dictionary",
"printing"
] | **My assignment is:**
Make several dictionaries, where the name of each dictionary is the name of a pet. In each dictionary, include the kind of animal and the owner's name. Store these dictionaries in a list called pets. Next, loop through your list and as you do print everything you know about each pet.
**What I have so far:**
```
rover = {'type': 'dog', 'owner': 'joe'}
blackie = {'type': 'cat', 'owner': 'gail'}
polly = {'type': 'bird', 'owner': 'paul'}
seth = {'type': 'snake', 'owner': 'stan'}
pets = [rover, blackie, polly, seth]
for pet in pets:
print("\nPet Name:", "\nType:", pet['type'].title(), "\nPet Owner:", pet['owner'].title())
```
**Output so far:**
Pet Name:
Type: Dog
Pet Owner: Joe
Pet Name:
Type: Cat
Pet Owner: Gail
Pet Name:
Type: Bird
Pet Owner: Paul
Pet Name:
Type: Snake
Pet Owner: Stan
**My Question:**
What do I need to add to my code to have the output include the Pet Name?
**Desired Output:**
Pet Name: Rover
Type: Dog
Pet Owner: Joe
Pet Name: Blackie
Type: Cat
Pet Owner: Gail
Pet Name: Polly
Type: Bird
Pet Owner: Paul
Pet Name: Seth
Type: Snake
Pet Owner: Stan | I would store the name in the dictionary.
```
rover = {'name' : 'rover', 'type': 'dog', 'owner': 'joe'}
``` |
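A possible completed version (my sketch of the suggestion above, shown with two of the pets):

```python
rover = {'name': 'rover', 'type': 'dog', 'owner': 'joe'}
blackie = {'name': 'blackie', 'type': 'cat', 'owner': 'gail'}
pets = [rover, blackie]

for pet in pets:
    print("\nPet Name:", pet['name'].title())
    print("Type:", pet['type'].title())
    print("Pet Owner:", pet['owner'].title())
```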
How to transform string into dict | 35,618,307 | 8 | 2016-02-25T04:24:40Z | 35,618,453 | 9 | 2016-02-25T04:36:42Z | [
"python",
"list",
"loops",
"iteration"
] | I've got a string with the words that are separated by spaces. I turn this string into list:
```
out = str.split()
```
And count how many values are created:
```
print len(out) # Says 192
```
Then I try to delete everything from the list:
```
for x in out:
out.remove(x)
```
And then count again:
```
print len(out) # Says 96
```
Can someone explain please why it says 96 instead of 0 ???
MORE INFO
My string looks like this: #one cat #two dogs #three birds
There is no duplicates in the string, all words are unique.
So, what I am doing is:
```
for x in out:
if '#' in x:
ind = out.index(x) # Get current index
nextValue = out[ind+1] # Get next value
myDictionary[x] = nextValue
out.remove(nextValue)
out.remove(x)
```
The problem is I cannot move all value-pairs into a dictionary since I only iterate through 96 items.
Thank you everyone! | I think you actually want something like this:
```
s = '#one cat #two dogs #three birds'
out = s.split()
entries = dict([(x, y) for x, y in zip(out[::2], out[1::2])])
```
What is this code doing? Let's break it down. First, we split `s` by whitespace into `out` as you had.
Next we iterate over the pairs in `out`, calling them `x, y`. Those pairs become a `list` of two-element tuples. `dict()` accepts a list of size-two tuples and treats each as `key, val`.
Here's what I get when I tried it:
```
$ cat tryme.py
s = '#one cat #two dogs #three birds'
out = s.split()
entries = dict([(x, y) for x, y in zip(out[::2], out[1::2])])
from pprint import pprint
pprint(entries)
$ python tryme.py
{'#one': 'cat', '#three': 'birds', '#two': 'dogs'}
``` |
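A side note of mine: `dict` accepts the `zip` result directly, so the intermediate list comprehension can be dropped:

```python
s = '#one cat #two dogs #three birds'
out = s.split()
entries = dict(zip(out[::2], out[1::2]))
print(entries)
# {'#one': 'cat', '#two': 'dogs', '#three': 'birds'}
```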
How to transform string into dict | 35,618,307 | 8 | 2016-02-25T04:24:40Z | 35,618,686 | 12 | 2016-02-25T04:57:18Z | [
"python",
"list",
"loops",
"iteration"
] | I've got a string with the words that are separated by spaces. I turn this string into list:
```
out = str.split()
```
And count how many values are created:
```
print len(out) # Says 192
```
Then I try to delete everything from the list:
```
for x in out:
out.remove(x)
```
And then count again:
```
print len(out) # Says 96
```
Can someone explain please why it says 96 instead of 0 ???
MORE INFO
My string looks like this: #one cat #two dogs #three birds
There is no duplicates in the string, all words are unique.
So, what I am doing is:
```
for x in out:
if '#' in x:
ind = out.index(x) # Get current index
nextValue = out[ind+1] # Get next value
myDictionary[x] = nextValue
out.remove(nextValue)
out.remove(x)
```
The problem is I cannot move all value-pairs into a dictionary since I only iterate through 96 items.
Thank you everyone! | As for what actually happened in the **for** loop:
> **From the** [**Python for statement documentation**](https://docs.python.org/2/reference/compound_stmts.html#the-for-statement):
>
> The expression list is evaluated *once*; it should yield an iterable
> object. An iterator is created for the result of the `expression_list`.
> The suite is then executed *once* for each item provided by the
> iterator, **in the order of ascending indices**. Each item in turn is
> assigned to the target *list* using the standard rules for assignments,
> and then the suite is executed. **When the items are exhausted** (which is
> immediately when the sequence is **empty**), the suite in the `else` clause,
> if present, is executed, and the `loop` **terminates**.
I think it is best shown with the aid of an **illustration**.
Now, suppose you have an `iterable object` (such as `list`) like this:
```
out = [a, b, c, d, e, f]
```
What happen when you do `for x in out` is that it **creates internal indexer** which goes like this (I illustrate it with the symbol `^`):
```
[a, b, c, d, e, f]
^ <-- here is the indexer
```
What normally happen is that: as you finish one cycle of your loop, **the indexer moves forward** like this:
```
[a, b, c, d, e, f] #cycle 1
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 2
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 3
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 4
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 5
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 6
^ <-- here is the indexer
#finish, no element is found anymore!
```
> As you can see, the indexer **keeps moving forward till the end of your
> list, regardless of what happened to the list**!
Thus when you do `remove`, this is what happened internally:
```
[a, b, c, d, e, f] #cycle 1
^ <-- here is the indexer
[b, c, d, e, f] #cycle 1 - a is removed!
^ <-- here is the indexer
[b, c, d, e, f] #cycle 2
^ <-- here is the indexer
[c, d, e, f] #cycle 2 - c is removed
^ <-- here is the indexer
[c, d, e, f] #cycle 3
^ <-- here is the indexer
[c, d, f] #cycle 3 - e is removed
^ <-- here is the indexer
#the for loop ends
```
Notice that there are only **3 cycles** there instead of **6 cycles**(!!) (which is the number of elements in the original list). And that's why you are left with **half** the `len` of your original list: that is the number of cycles it takes to complete the loop when you remove one element per cycle.
---
If you want to clear the list, simply do:
```
if (out != []):
out.clear()
```
Or, alternatively, to remove the element one by one, you need to do it **the other way around - from the end to the beginning**. Use `reversed`:
```
for x in reversed(out):
out.remove(x)
```
---
Now, why would the `reversed` work? If the indexer keeps moving forward, wouldn't `reversed` also should not work because the number of element is reduced by one per cycle anyway?
No, it is not like that,
> Because the `reversed` method **changes** the way the internal indexer
> works! What happens when you use the `reversed` method is that **the
> internal indexer moves backward** (from the end) instead of
> **forward**.
To illustrate, this is what normally happens:
```
[a, b, c, d, e, f] #cycle 1
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 2
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 3
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 4
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 5
^ <-- here is the indexer
[a, b, c, d, e, f] #cycle 6
^ <-- here is the indexer
#finish, no element is found anymore!
```
And thus when you do one removal per cycle, it doesn't affect how the indexer works:
```
[a, b, c, d, e, f] #cycle 1
^ <-- here is the indexer
[a, b, c, d, e] #cycle 1 - f is removed
^ <-- here is the indexer
[a, b, c, d, e] #cycle 2
^ <-- here is the indexer
[a, b, c, d] #cycle 2 - e is removed
^ <-- here is the indexer
[a, b, c, d] #cycle 3
^ <-- here is the indexer
[a, b, c] #cycle 3 - d is removed
^ <-- here is the indexer
[a, b, c] #cycle 4
^ <-- here is the indexer
[a, b] #cycle 4 - c is removed
^ <-- here is the indexer
[a, b] #cycle 5
^ <-- here is the indexer
[a] #cycle 5 - b is removed
^ <-- here is the indexer
[a] #cycle 6
^ <-- here is the indexer
[] #cycle 6 - a is removed
^ <-- here is the indexer
```
Hope the illustration helps you to understand what's going on internally... |
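Both behaviours in the illustration can be reproduced directly (a sketch of mine using the question's string):

```python
out = "#one cat #two dogs #three birds".split()  # 6 items
for x in out:
    out.remove(x)
print(len(out))  # 3 -- only half the items were removed

out = "#one cat #two dogs #three birds".split()
for x in reversed(out):
    out.remove(x)
print(len(out))  # 0 -- iterating backwards empties the list
```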
Merging Key-Value Pairings in Dictionary | 35,619,544 | 20 | 2016-02-25T06:01:23Z | 35,619,820 | 11 | 2016-02-25T06:20:49Z | [
"python",
"algorithm",
"dictionary"
] | I have a dictionary that consists of employee-manager as key-value pairs:
```
{'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
```
I want to show the relations between employee-manager at all levels (employee's boss, his boss's boss, his boss's boss's boss etc.) using a dictionary. The desired output is:
```
{'a': [b,d,f], 'b': [d,f], 'c': [d,f], 'd': [f] }
```
Here is my attempt which only shows the first level:
```
for key, value in data.items():
if (value in data.keys()):
data[key] = [value]
data[key].append(data[value])
```
I can do another conditional statement to add the next level but this would be the wrong way to go about it. I'm not very familiar with dictionaries so what would be a better approach? | ```
>>> D = {'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}
>>> res = {}
>>> for k in D:
... res[k] = [j] = [D[k]]
... while j in D:
... j = D[j]
... res[k].append(j)
...
>>> res
{'b': ['d', 'f'], 'c': ['d', 'f'], 'd': ['f'], 'a': ['b', 'd', 'f']}
``` |
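The same idea wrapped in a function (my refactor of the answer's loop; note it assumes the mapping has no cycles, or the `while` loop would never terminate):

```python
def management_chains(d):
    res = {}
    for k in d:
        chain = [d[k]]
        while chain[-1] in d:            # keep following boss-of-boss links
            chain.append(d[chain[-1]])
        res[k] = chain
    return res

print(management_chains({'a': 'b', 'b': 'd', 'c': 'd', 'd': 'f'}))
# {'a': ['b', 'd', 'f'], 'b': ['d', 'f'], 'c': ['d', 'f'], 'd': ['f']}
```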